Comprehensively Evaluating the Perception Systems of Autonomous Vehicles Against Hazards
- Submitted by:
- Jie Bao
- Last updated:
- Wed, 07/17/2024 - 00:36
- DOI:
- 10.21227/q51w-cs63
Abstract
Perception systems are vital to the safety of autonomous driving. In complex driving scenarios, autonomous vehicles must cope with various natural hazards, such as heavy rain or raindrops on the camera lens. It is therefore essential to test the perception systems of autonomous vehicles comprehensively against these hazards, just as regulatory agencies in many countries demand of human drivers.
Since there are many hazard scenarios, each with multiple configurable parameters, the challenges are (1) how to systematically and adequately test an autonomous vehicle against these hazard scenarios, with measurable outcomes, and (2) how to efficiently explore the huge search space to identify scenarios that induce failures.
In this work, we propose a framework, Hazards Generation and Testing (HazGT), to generate a customizable and comprehensive repository of hazard scenarios for evaluating the perception systems of autonomous vehicles.
HazGT not only allows us to measure how comprehensively an autonomous vehicle (AV) has been tested against different hazards but also supports the identification of important adversarial hazards through optimization.
HazGT supports a total of 70 kinds of hazards relevant to the visual perception of AVs, based on industry regulations.
HazGT automatically optimizes the parameter values to efficiently achieve different testing objectives using a genetic algorithm.
We have implemented HazGT on top of two popular 3D engines, Unity and Unreal Engine.
For two mainstream perception models (YOLO and Faster R-CNN), we evaluated their performance against each hazard through extensive experiments, and the results show that both models have much room for improvement. In addition, our experiments found that ChatGPT4 performs slightly worse than YOLO. Our optimization-based testing engine is effective at finding perceptual errors in the perception models.
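For readers unfamiliar with this style of search, the following is a minimal, self-contained sketch of a genetic-algorithm loop of the kind HazGT's optimizer runs. Everything in it is a hypothetical placeholder (in particular count_errors and HARD_SPOT); in the real tool, fitness is obtained by rendering the configured scenario in Unity/Unreal Engine and counting the detector's false negatives and false positives.

import random

# Toy stand-in for HazGT's real fitness function. In the actual tool, a
# hazard configuration is rendered in the 3D engine and the detector's
# false negatives + false positives are counted; here we fake a single
# "hard" region of the parameter space so the sketch runs on its own.
HARD_SPOT = [0.8, 0.3, 0.6]  # hypothetical adversarial parameter setting

def count_errors(params):
    return -sum((p - h) ** 2 for p, h in zip(params, HARD_SPOT))

def evolve(bounds, pop_size=30, generations=30, mutation_rate=0.1):
    # Each individual is one hazard configuration (e.g. rain level, fog density).
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=count_errors, reverse=True)     # more errors = fitter
        parents = pop[: pop_size // 2]               # keep the best half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(bounds))   # one-point crossover
            child = a[:cut] + b[cut:]
            for i, (lo, hi) in enumerate(bounds):    # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return max(pop, key=count_errors)

print(evolve([(0.0, 1.0)] * 3))  # should converge toward HARD_SPOT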
An introduction to the files.
1. Details-of-Hazards.pdf
We have defined a total of 70 hazards, which fall into four categories. This document shows the details of the hazards.
2. 智能网联汽车预期功能安全场景库建设报告.pdf
(Research on the Construction of a Scenario Library for the Safety of the Intended Functionality (SOTIF) in Connected and Autonomous Vehicles)
This research report studies the scenario framework and architecture for the SOTIF testing needs of intelligent driving, aiming to provide a reference for the development of SOTIF simulation and testing software for intelligent driving, as well as for the design of testing procedures and test cases.
Pages 7 to 22 of the document provide a detailed overview of the external hazards that lead to perception limitations, i.e., the details of visual hazards.
3. Data
NOTE 1: To save space, we deleted most of the images in the dataset, but we kept the first sample of each sub-experiment so that readers can inspect the data.
NOTE 2: Although the test dataset used in the ChatGPT4 experiment is the same as the one used in the random-sampling experiment, we list 3 extra samples for comparison to demonstrate the results of the ChatGPT4 experiment.
|--RQ1-UniformSampling: the data obtained from the uniform-sampling experiment in RQ1.
|--Animals Carrying Goods: the folder of uniform-sampling samples for the hazard (Animals Carrying Goods).
|--false_count_all.jpg: the statistical figures of false negatives and false positives for the hazard.
|--false_count_all.json: the statistical information file of false negatives and false positives for the hazard.
|--fps_count.json: the statistical information file of the maximum and the average numbers of false negatives and false positives for the hazard.
|--animalsCarryingObjects_False: the folder of the false-value sample for the hazard (Animals Carrying Goods).
|--false.json: the statistical information file of false negatives and false positives for the sample.
|--Hazards.json: the hazard config file of the sample (used by HazGT Simulator to set the scenario).
|--Source: the images of the scenario.
|--GroundtruthLabels: the ground truth labels of the scenario.
|--Paint: the images labeled with ground truth labels.
|--labels: the labels in the detection results of YOLO.
|--FRCNN: the images of the detection results of Faster R-CNN.
|--flabels: the labels in the detection results of Faster R-CNN.
|--animalsCarryingObjects_True: the folder of the true-value sample for the hazard (Animals Carrying Goods).
|--......
|--Humidity
|--humidity_0: the folder of the 0-value sample for the hazard (Humidity).
|--humidity_10: the folder of the 0.1-value sample for the hazard (Humidity).
|--......
|--humidity_90: the folder of the 0.9-value sample for the hazard (Humidity).
|--humidity_100: the folder of the 1-value sample for the hazard (Humidity).
|--......
|--RQ1-RQ3-RandomSampling: the data obtained from the random-sampling experiment in RQ1 and the random-sampling experiment in RQ3.
|--TrainDataset: the hazard dataset for re-training YOLO.
|--NoHazard: the no-hazard dataset.
|--0: the 0th sample.
|--GroundtruthLabels: the ground truth labels of the scenario.
|--Attributes.json: the hazard config file of the sample (used by HazGT Simulator to set the scenario; a loading sketch follows the directory tree below).
|--Simulation1.jpg: the 1st frame image of the scenario.
|--......
|--Simulation10.jpg: the 10th frame image of the scenario.
|--4999: the 4999th sample.
|--Weather: the weather dataset.
|--Participants: the traffic participant dataset.
|--Road: the road structure dataset.
|--Facilities: the traffic facility dataset.
|--TestDataset: the hazard dataset for testing YOLO.
|--Evaluation.json: the evaluation result of the model.
|--NoHazard: the no-hazard dataset.
|--0: the 0th sample.
|--GroundtruthLabels: the ground truth labels of the scenario.
|--GroundtruthImages: the images labeled with ground truth labels.
|--Attributes.json: the hazard config file of the sample (used by HazGT Simulator to set the scenario).
|--Yolo_trained: the detection results of YOLO.
|--Initial: the detection results of YOLO before re-training.
|--labels: the labels in the detection results of YOLO.
|--Simulation1.jpg: the 1st frame image of the scenario.
|--......
|--Simulation10.jpg: the 10th frame image of the scenario.
|--Optimized: the detection results of YOLO after re-training.
|--Simulation1.jpg: the 1st frame image of the scenario.
|--......
|--Simulation10.jpg: the 10th frame image of the scenario.
|--4999: the 4999th sample.
|--Weather: the weather dataset.
|--Participants: the traffic participant dataset.
|--Road: the road structure dataset.
|--Facilities: the traffic facility dataset.
|--RQ1-RandomSampling_ChatGPT4: the data obtained from the ChatGPT4 experiment in RQ1.
|--labels.json: the numbers of the objects detected by ChatGPT4.
|--false.json: Statistical information file of false negatives and false positives.
|--false count.json: the numbers of false negatives and false positives.
|--NoHazard: the no-hazard dataset.
|--0: the 0th sample.
|--GroundtruthLabels: the ground truth labels of the scenario.
|--GroundtruthImages: the images labeled with ground truth labels.
|--Attributes.json: the hazard config file of the sample (used by HazGT Simulator to set the scenario).
|--Gpt: the detection results of ChatGPT4.
|--Simulation1.txt: the label file for the 1st frame of the scenario.
|--......
|--Simulation10.txt: the label file for the 10th frame of the scenario.
|--Simulation1.jpg: the 1st frame image of the scenario.
|--......
|--Simulation10.jpg: the 10th frame image of the scenario.
|--4999: the 4999th sample.
|--Weather: the weather dataset.
|--Participants: the traffic participant dataset.
|--Road: the road structure dataset.
|--Facilities: the traffic facility dataset.
|--RQ2: the data obtained from the experiment in RQ2.
|--HazGT: the experimental results of HazGT.
|--weather: the weather experiment.
|--cqss.json: the CQS values of the optimal points for each iteration during the YOLO and FRCNN experiments.
|--false_yolo.json: the false negatives and false positives of the optimal points for each iteration during the YOLO experiment.
|--false_frcnn.json: the false negatives and false positives of the optimal points for each iteration during the FRCNN experiment.
|--Done: the flag used in the experiment.
|--targets: the optimal points for each iteration.
|--0: the optimal point in the 0th iteration.
|--Attributes.json: the hazard config file of the sample (used by HazGT Simulator to set the scenario).
|--GroundtruthLabels: the ground truth labels of the scenario.
|--Yolo: the detection results of YOLO.
|--labels: the labels in the detection results of YOLO.
|--Frcnn: the detection results of Faster R-CNN.
|--labels: the labels in the detection results of Faster R-CNN.
|--images: the statistical chart of experimental results.
|--cache: the scenario samples generated during the experiment.
|--0: the samples in the 0th iteration.
|--0: the 0th sample.
|--......
|--19: the 19th sample.
|--......
|--29: the 29th iteration.
|--participants: the traffic participant experiment.
|--road: the road structure experiment.
|--facilities: the traffic facility experiment.
|--HazGT': the experimental results of HazGT'.
|--HazGTr: the experimental results of HazGTr.
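Before the tool listing, here is a short loading sketch for one sample folder from the random-sampling data. The path below and the assumption that GroundtruthLabels holds one YOLO-style text file per frame (rows of "class x_center y_center width height", normalized to [0, 1]) are ours, not guaranteed by the archive, so inspect a kept sample first. The schema of Attributes.json is not documented here, so the sketch prints the parsed config rather than accessing assumed field names.

import json
from pathlib import Path

# Hypothetical path to one kept sample; adjust to where the archive is unpacked.
sample = Path("Data/RQ1-RQ3-RandomSampling/TestDataset/NoHazard/0")

# Hazard configuration used by the HazGT Simulator to set up the scenario.
with open(sample / "Attributes.json", encoding="utf-8") as f:
    config = json.load(f)
print(config)

# Assumption: one label file per frame, YOLO-style normalized rows.
for label_file in sorted((sample / "GroundtruthLabels").glob("*.txt")):
    with open(label_file) as f:
        boxes = [line.split() for line in f if line.strip()]
    print(f"{label_file.name}: {len(boxes)} ground-truth objects")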
4. Tool
|--Hazard Generator
|--HazGTSimulator: the simulator we used to generate hazardous traffic scenarios. To save space, we only include the Unity version.
|--HazGTSimulator.exe: the executable program of the simulator.
|--......: the Unity files unrelated to this study.
|--Perception Reward Evaluator
NOTE: the ChatGPT4 experiment was conducted manually.
|--YOLO: the YOLOv7 GitHub project.
|--detect.py: the official detection interface, in which we made some changes to the code to facilitate HazGT.
|--detect_hazgt.py: the detection interface HazGT uses.
|--......: the other files and folders in the YOLOv7 project.
|--FRCNN: the Faster R-CNN GitHub project.
|--predict.py: the detection interface HazGT uses.
|--......: the other files and folders in the Faster R-CNN project.
|--Parameter Optimizer
|--ExperimentN.py: the program for the parameter-search and uniform-sampling experiments.
|--HazGTTypes.py: the classes we defined to analyze the detection results (a sketch of this kind of false-negative/false-positive analysis follows this list).
|--BridgeUnityWithYolo.py: the interface methods we defined to call YOLO and Faster R-CNN.
|--RandomSampling.py: the random-sampling experiment program.
|--DrawLabels.py: the methods to draw rectangles on images from given labels.
|--SplitDataset.py: the interface to split the dataset for training.
|--Tools.py: utility methods used across the programs.
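The false-negative/false-positive statistics that appear throughout the dataset are produced by the tools above. The snippet below is not their code, but a standard way such counts can be derived by greedily matching detections to ground-truth boxes under an IoU threshold (the 0.5 threshold and the greedy strategy are our assumptions).

def iou(a, b):
    # Boxes are (x_center, y_center, width, height), normalized to [0, 1].
    ax1, ay1, ax2, ay2 = a[0] - a[2]/2, a[1] - a[3]/2, a[0] + a[2]/2, a[1] + a[3]/2
    bx1, by1, bx2, by2 = b[0] - b[2]/2, b[1] - b[3]/2, b[0] + b[2]/2, b[1] + b[3]/2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def count_fn_fp(truths, detections, thresh=0.5):
    # Greedy matching: each ground-truth box claims its best remaining
    # detection; unmatched truths are false negatives, leftover detections
    # are false positives.
    unmatched = list(detections)
    fn = 0
    for t in truths:
        best = max(unmatched, key=lambda d: iou(t, d), default=None)
        if best is not None and iou(t, best) >= thresh:
            unmatched.remove(best)
        else:
            fn += 1
    return fn, len(unmatched)

# Example: one truth matched, one spurious detection -> (0 FN, 1 FP).
print(count_fn_fp([(0.5, 0.5, 0.2, 0.2)],
                  [(0.52, 0.5, 0.2, 0.2), (0.9, 0.9, 0.1, 0.1)]))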
Steps to generate a scenario:
1. Double-click Tool\HazGTSimulator\HazGTSimulator.exe.
2. Use the toggles and sliders at the top left of the screen to set the scenario and the images directory.
3. Click the Start Simulation button at the bottom of the screen.