autonomous driving

The dataset includes eight urban scenes of different sizes and styles, as well as various lighting and weather conditions. Each scene contains 200 vehicles of different types, 100 pedestrians, and 5,000 RGB images, semantic images, and point cloud files. The annotations include both depth and 2D information for the objects.

ThermalTrack is an RGB-LWIR paired dataset of wheel tracks captured under harsh winter conditions, including white-outs (severely degraded visibility), low-contrast snow terrain, and diverse wheel track geometries. Designed to enable robust alternative navigation strategies for winter autonomy systems, this dataset builds upon WADS (https://digitalcommons.mtu.edu/wads/), a specialized dataset for autonomous vehicle research in inclement winter weather.

Reinforcement Learning (RL) has shown excellent performance in solving the decision-making and control problems of autonomous driving and is increasingly applied in diverse driving scenarios. However, driving is a multi-attribute problem, which makes multi-objective compatibility difficult for current RL methods, in both policy execution and policy iteration. We propose a Multi-objective Ensemble-Critic reinforcement learning method with Hybrid Parametrized Action for multi-objective-compatible autonomous driving.
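The hybrid parametrized action mentioned above can be sketched, purely illustratively, as a discrete manoeuvre choice paired with continuous parameters. The behaviour names and parameters below are assumptions for illustration, not the paper's actual action space:

```python
from dataclasses import dataclass

# Illustrative sketch only: a hybrid (parametrized) action pairs a
# discrete behaviour choice with continuous parameters for that
# behaviour, as in hybrid-action RL. Names below are assumed.
BEHAVIOURS = ("keep_lane", "change_left", "change_right")

@dataclass
class HybridAction:
    discrete: int           # index into the discrete behaviour set
    params: tuple           # continuous parameters (e.g. target speed)

def describe(action: HybridAction) -> str:
    """Render the hybrid action in human-readable form."""
    return f"{BEHAVIOURS[action.discrete]} with params {action.params}"

print(describe(HybridAction(discrete=1, params=(12.5,))))
```

A policy over such a space typically outputs both a distribution over the discrete choices and the continuous parameters conditioned on each choice; the sketch above only fixes the data representation.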

To address common issues in intelligent driving, such as missed detection of small objects, false detections, and edge-segmentation errors, this paper optimizes the YOLOP (You Only Look Once for Panoptic Driving Perception) network and proposes a multi-task perception algorithm based on an MKHA (Multi-Kernel Hybrid Attention) mechanism, named MKHA-YOLOP.

This dataset contains simulated and real-world experimental data associated with the paper “Comprehensive Analysis of Optimization-Based Obstacle Avoidance for Agricultural Robotics in Greenhouse Environments.” The dataset from the simulated environment comprises multiple CSV files generated from the Gazebo simulation of a differential robot, the Stretch Robot. These files document the robot's movement, capturing data from the Gazebo model topic.

The TiHAN-V2X Dataset was collected in Hyderabad, India, across various Vehicle-to-Everything (V2X) communication types, including Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), Infrastructure-to-Vehicle (I2V), and Vehicle-to-Cloud (V2C). The dataset offers comprehensive data for evaluating communication performance under different environmental and road conditions, including urban, rural, and highway scenarios.

This is the Ecuadorian Traffic Officer Detection Dataset, intended mainly for traffic officer detection projects using YOLO. The dataset is in YOLO format and contains 1,862 images, fully annotated with the Roboflow labeling tool. It is split into 1,734 training images, 81 validation images, and 47 test images, annotated with a single class: Traffic Officer (EMOV). The dataset produced a mean Average Precision (mAP) of 96.4% using YOLOv3m, 99.0% using YOLOv5x, and 98.1% using YOLOv8x.
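As a quick sanity check on the split figures quoted above, the counts can be tallied in a few lines (the dictionary below simply restates the numbers from this description):

```python
# Split sizes as stated in the dataset description.
splits = {"train": 1734, "val": 81, "test": 47}

total = sum(splits.values())
assert total == 1862  # matches the stated total image count

# Report each split's share of the full dataset.
for name, count in splits.items():
    print(f"{name}: {count} images ({100 * count / total:.1f}%)")
```

This confirms the splits sum to the stated 1,862 images, with roughly a 93/4/3 train/validation/test ratio.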

Ensuring the safe and reliable operation of autonomous vehicles under adverse weather remains a significant challenge. 

To address this, we have developed a comprehensive dataset of sensor data acquired on a real test track and reproduced in the laboratory for the same test scenarios.

The provided dataset includes camera, radar, LiDAR, inertial measurement unit (IMU), and GPS data recorded under adverse weather conditions (rainy, night-time, and snowy conditions). 

Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). To develop and evaluate driving scene perception models at scale, and, more importantly, to cover potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task.
