Pedestrian detection has never been an easy task for the computer vision community or the automotive industry. Systems such as advanced driver assistance systems (ADAS) rely heavily on far infrared (FIR) data to detect pedestrians at nighttime. Recent deep learning-based detectors have demonstrated excellent pedestrian detection results in clear weather; however, their performance in adverse weather conditions remains unknown.

Instructions: 

The prefix _b marks benchmark recordings; recordings without the prefix are used for training/testing.

 

Each recording folder contains:

  16BitFrames - 16-bit original capture without processing.

  16BitTransformed - 16-bit capture with a low-pass filter applied and scaled to 640x480.

  annotations - annotations and 8-bit images made from 16BitTransformed.

  carParams.csv - CAN bus details with the corresponding frame IDs.

  weather.txt - weather conditions under which the recording was made.

 

Annotations are made in the YOLO (You Only Look Once) Darknet format: one .txt file per image, with one line per object of the form <class_id> <x_center> <y_center> <width> <height>, where the box coordinates are normalized to [0, 1] by the image width and height.

 

To obtain images without the low-pass filter applied, take the following steps (a short C++ sketch is given after the list):

- Take the 16-bit images from the 16BitFrames folder and open them with the OpenCV imread function, e.g.: Mat input = imread(<image_full_path>, -1);

- Then use the convertTo function like: input.convertTo(output, input.depth(), sc, sh), where output is the transformed Mat, sc is the scale and sh is the shift from the carParams.csv file.

- Finally, scale the image to 640x480.
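A minimal C++ sketch of those three steps is given below. It assumes the scale (sc) and shift (sh) values for the frame have already been looked up in carParams.csv; the file names and the placeholder values are illustrative only.

    #include <opencv2/opencv.hpp>

    int main() {
        // Step 1: read the original 16-bit frame unchanged (-1 = cv::IMREAD_UNCHANGED).
        cv::Mat input = cv::imread("16BitFrames/frame_000001.png", -1);
        if (input.empty()) return -1;

        // Step 2: apply the per-frame scale and shift taken from carParams.csv
        // (the values below are placeholders, not real calibration numbers).
        double sc = 1.0, sh = 0.0;
        cv::Mat output;
        input.convertTo(output, input.depth(), sc, sh);

        // Step 3: resize to 640x480 to match the 16BitTransformed images.
        cv::Mat resized;
        cv::resize(output, resized, cv::Size(640, 480));

        cv::imwrite("frame_000001_unfiltered.png", resized);
        return 0;
    }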


Damage detection for road surfaces has been an active area of research, but most studies so far have focused on detecting the presence of damage. However, in real-world scenarios, road managers need to clearly understand the type of damage and its extent in order to take effective action in advance or to allocate the necessary resources. Moreover, there are currently few uniform and openly available road damage datasets, leading to a lack of a common benchmark for road damage detection.


The file 'GPS_P2.zip' contains the dataset collected from the GNSS sensor of the "Xinda" autonomous vehicle in the Connected Autonomous Vehicles Test Fields (the CAVs Test Fields) at the Weishui Campus, Chang'an University.

The file 'fault.zip' contains simulated faults injected into the healthy data, stored in '.mat' format; X_abrupt, X_noise and X_drift are the healthy data with abrupt faults, noise and long-run drift added, respectively.
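As a rough illustration of the three fault types, the sketch below injects an abrupt offset, additive Gaussian noise and a linear drift into a healthy signal. The magnitudes, onset index and seed are arbitrary placeholders and do not reflect the parameters used to generate 'fault.zip'.

    #include <random>
    #include <vector>

    // Abrupt fault: a constant bias added from the onset sample onward.
    std::vector<double> add_abrupt(std::vector<double> x, std::size_t onset, double bias) {
        for (std::size_t i = onset; i < x.size(); ++i) x[i] += bias;
        return x;
    }

    // Noise fault: additive zero-mean Gaussian noise on every sample.
    std::vector<double> add_noise(std::vector<double> x, double sigma) {
        std::mt19937 gen(42);
        std::normal_distribution<double> n(0.0, sigma);
        for (double& v : x) v += n(gen);
        return x;
    }

    // Drift fault: a bias that grows linearly over the long run.
    std::vector<double> add_drift(std::vector<double> x, double slope) {
        for (std::size_t i = 0; i < x.size(); ++i) x[i] += slope * static_cast<double>(i);
        return x;
    }

    int main() {
        std::vector<double> healthy(1000, 1.0);          // stand-in for a healthy GNSS channel
        auto x_abrupt = add_abrupt(healthy, 500, 0.5);   // offset after sample 500
        auto x_noise  = add_noise(healthy, 0.1);         // sigma = 0.1
        auto x_drift  = add_drift(healthy, 0.001);       // slope = 0.001 per sample
        return 0;
    }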


The dataset consists of various open GIS data from the Netherlands, such as Population Cores, Neighbourhoods, Land Use, the Energy Atlas, OpenStreetMap, openchargemap and charging stations. The data were transformed using 350 m buffers around each charging station. The response variable is the binary popularity of a charging pool.

Instructions: 

Use the first variable, n_RFID, as the response and the remaining variables as predictors.


As one of the research directions at OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under diverse challenging conditions in which trained models may be deployed. To achieve this goal, we introduced a large-scale (~1.72M frames) traffic sign detection video dataset (CURE-TSD), which is among the most comprehensive datasets with controlled synthetic challenging conditions. The video sequences in the dataset include both real and unreal (synthesized) data, as reflected in the sequenceType field described below.

Instructions: 

The name format of the video files is as follows: “sequenceType_sequenceNumber_challengeSourceType_challengeType_challengeLevel.mp4” (a parsing sketch is given after the field list below).

  • sequenceType: 01 – Real data, 02 – Unreal data

  • sequenceNumber: A number in the range [01 – 49]

  • challengeSourceType: 00 – No challenge source (which means no challenge), 01 – After effects

  • challengeType: 00 – No challenge, 01 – Decolorization, 02 – Lens blur, 03 – Codec error, 04 – Darkening, 05 – Dirty lens, 06 – Exposure, 07 – Gaussian blur, 08 – Noise, 09 – Rain, 10 – Shadow, 11 – Snow, 12 – Haze

  • challengeLevel: A number in the range [01 – 05], where 01 is the least severe and 05 is the most severe challenge.
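A minimal C++ sketch of how such a file name could be split into its five fields; the struct and function names are illustrative and not part of the dataset tools.

    #include <cstdio>
    #include <string>

    // Fields encoded in a video file name such as "01_23_01_09_04.mp4"
    // (real data, sequence 23, after-effects challenge, rain, level 4).
    struct VideoName {
        int sequenceType;         // 01 real, 02 unreal
        int sequenceNumber;       // 01-49
        int challengeSourceType;  // 00 none, 01 after effects
        int challengeType;        // 00-12
        int challengeLevel;       // 01-05
    };

    // Returns true when all five fields were read from the name.
    bool parseVideoName(const std::string& name, VideoName& out) {
        return std::sscanf(name.c_str(), "%d_%d_%d_%d_%d",
                           &out.sequenceType, &out.sequenceNumber,
                           &out.challengeSourceType, &out.challengeType,
                           &out.challengeLevel) == 5;
    }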

Test Sequences

We split the video sequences into a 70% training set and a 30% test set. The sequence numbers corresponding to the test set are given below:

[01_04_x_x_x, 01_05_x_x_x, 01_06_x_x_x, 01_07_x_x_x, 01_08_x_x_x, 01_18_x_x_x, 01_19_x_x_x, 01_21_x_x_x, 01_24_x_x_x, 01_26_x_x_x, 01_31_x_x_x, 01_38_x_x_x, 01_39_x_x_x, 01_41_x_x_x, 01_47_x_x_x, 02_02_x_x_x, 02_04_x_x_x, 02_06_x_x_x, 02_09_x_x_x, 02_12_x_x_x, 02_13_x_x_x, 02_16_x_x_x, 02_17_x_x_x, 02_18_x_x_x, 02_20_x_x_x, 02_22_x_x_x, 02_28_x_x_x, 02_31_x_x_x, 02_32_x_x_x, 02_36_x_x_x]

The videos with all other sequence numbers are in the training set. Note that “x” above refers to the variations listed earlier.

The name format of the annotation files is as follows: “sequenceType_sequenceNumber.txt”

Challenge source type, challenge type, and challenge level do not affect the annotations. Therefore, video sequences that start with the same sequence type and sequence number share the same annotations.

  • sequenceType: 01 – Real data, 02 – Unreal data

  • sequenceNumber: A number in the range [01 – 49]

The format of each line in the annotation file (txt) should be: “frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury” (a parsing sketch is given after the field list below). You can see a visual coordinate system example on our GitHub page.

  • frameNumber: A number in the range [001 – 300]

  • signType: 01 – speed_limit, 02 – goods_vehicles, 03 – no_overtaking, 04 – no_stopping, 05 – no_parking, 06 – stop, 07 – bicycle, 08 – hump, 09 – no_left, 10 – no_right, 11 – priority_to, 12 – no_entry, 13 – yield, 14 – parking
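A minimal C++ sketch of how an annotation line could be parsed and how the four annotated corner points can be reduced to an axis-aligned box; the names are illustrative only.

    #include <algorithm>
    #include <cstdio>
    #include <string>

    // One annotated sign, following "frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury".
    struct Annotation {
        int frame;                                    // 001-300
        int signType;                                 // 01-14 (speed_limit ... parking)
        int llx, lly, lrx, lry, ulx, uly, urx, ury;   // the four corner points
    };

    // Returns true when all ten fields were read from the line.
    bool parseAnnotationLine(const std::string& line, Annotation& a) {
        return std::sscanf(line.c_str(), "%d_%d_%d_%d_%d_%d_%d_%d_%d_%d",
                           &a.frame, &a.signType,
                           &a.llx, &a.lly, &a.lrx, &a.lry,
                           &a.ulx, &a.uly, &a.urx, &a.ury) == 10;
    }

    // Tight axis-aligned box around the four corners, useful for box-based detectors.
    void boundingBox(const Annotation& a, int& xmin, int& ymin, int& xmax, int& ymax) {
        xmin = std::min({a.llx, a.lrx, a.ulx, a.urx});
        xmax = std::max({a.llx, a.lrx, a.ulx, a.urx});
        ymin = std::min({a.lly, a.lry, a.uly, a.ury});
        ymax = std::max({a.lly, a.lry, a.uly, a.ury});
    }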


As one of the research directions at OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under diverse challenging conditions in which trained models may be deployed.

Instructions: 

The name format of the provided images is as follows: "sequenceType_signType_challengeType_challengeLevel_Index.bmp"

  • sequenceType: 01 - Real data, 02 - Unreal data

  • signType: 01 - speed_limit, 02 - goods_vehicles, 03 - no_overtaking, 04 - no_stopping, 05 - no_parking, 06 - stop, 07 - bicycle, 08 - hump, 09 - no_left, 10 - no_right, 11 - priority_to, 12 - no_entry, 13 - yield, 14 - parking

  • challengeType: 00 - No challenge, 01 - Decolorization, 02 - Lens blur, 03 - Codec error, 04 - Darkening, 05 - Dirty lens, 06 - Exposure, 07 - Gaussian blur, 08 - Noise, 09 - Rain, 10 - Shadow, 11 - Snow, 12 - Haze

  • challengeLevel: A number in the range [01-05], where 01 is the least severe and 05 is the most severe challenge.

  • Index: A number that distinguishes different instances of traffic signs under the same conditions.


Each file in this dataset contains the vectors Time, PEDAL, SPEED, ACCEL, VOLTAGE and CURRENT for an electric vehicle travelling on one of four different roads, mostly in urban areas. The data were obtained from the CAN bus of the vehicle (a Zhidou ZD model ZD2), resampled to obtain a single time coordinate, and stored in the dataset.
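Because VOLTAGE and CURRENT are sampled on the same time coordinate, derived quantities such as instantaneous power and consumed energy can be computed directly. A minimal C++ sketch, assuming the vectors have already been loaded from one of the files and that VOLTAGE is in volts, CURRENT in amperes and Time in seconds:

    #include <cstddef>
    #include <vector>

    // Instantaneous electrical power P[i] = VOLTAGE[i] * CURRENT[i] (watts).
    std::vector<double> power(const std::vector<double>& voltage,
                              const std::vector<double>& current) {
        std::vector<double> p(voltage.size());
        for (std::size_t i = 0; i < p.size(); ++i) p[i] = voltage[i] * current[i];
        return p;
    }

    // Energy over the trip by trapezoidal integration of power against Time (joules).
    double energy(const std::vector<double>& time, const std::vector<double>& p) {
        double e = 0.0;
        for (std::size_t i = 1; i < time.size(); ++i)
            e += 0.5 * (p[i] + p[i - 1]) * (time[i] - time[i - 1]);
        return e;
    }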


Vision and lidar are complementary sensors that are incorporated into many applications of intelligent transportation systems. These sensors have been used to great effect in research related to perception, navigation and deep-learning applications. Despite this success, the validation of algorithm robustness has recently been recognised as a major challenge for the massive deployment of these new technologies. It is well known that algorithms and models trained or tested with a particular dataset tend not to generalise well for other scenarios.

Instructions: 

For detailed information about this dataset and the tools, please go to our website: http://its.acfr.usyd.edu.au/datasets/usyd-campus-dataset/

 



Instructions: 

These tables present further details of the proposed methodologies, including the problem, methodology, type, data from the autonomous vehicle, base case, result, evaluation method, and evolutionary characteristics. The proposed methodologies are categorized according to their goal.

