Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). To develop and evaluate driving scene perception models at scale and, more importantly, to cover potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task.

Instructions: 

 

The MIT DriveSeg (Semi-auto) Dataset is a forward-facing, frame-by-frame, pixel-level semantically labeled video dataset (coarsely annotated through a novel semi-automatic annotation approach) captured from moving vehicles driving in a range of real-world scenarios drawn from MIT Advanced Vehicle Technology (AVT) Consortium data.

 

Technical Summary

Video data - Sixty-seven 10-second 720p (1280x720) 30 fps videos (20,100 frames)

Class definitions (12) - vehicle, pedestrian, road, sidewalk, bicycle, motorcycle, building, terrain (horizontal vegetation), vegetation (vertical vegetation), pole, traffic light, and traffic sign
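For orientation, here is a minimal Python sketch of how a per-frame annotation mask with these 12 classes might be inspected. The directory layout, mask file naming, and integer ID assignment below are assumptions for illustration only; the dataset's own documentation defines the actual conventions.

```python
# Minimal sketch: count labeled pixels per class in one annotation mask.
# The file layout and the integer-ID-to-class mapping are assumptions;
# consult the dataset documentation for the actual conventions.
import numpy as np
from PIL import Image

# Class names as listed in the technical summary; the ID order is hypothetical.
CLASSES = [
    "vehicle", "pedestrian", "road", "sidewalk", "bicycle", "motorcycle",
    "building", "terrain", "vegetation", "pole", "traffic light", "traffic sign",
]

def class_histogram(mask_path: str) -> dict:
    """Count labeled pixels per class in one annotation mask."""
    mask = np.array(Image.open(mask_path))  # H x W array of class IDs (assumed)
    ids, counts = np.unique(mask, return_counts=True)
    return {CLASSES[i]: int(c) for i, c in zip(ids, counts) if i < len(CLASSES)}

# Hypothetical path; actual naming follows the dataset's own convention.
print(class_histogram("driveseg_semiauto/masks/video01_frame0000.png"))
```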

 

Technical Specifications, Open Source Licensing and Citation Information

Ding, L., Glazer, M., Terwilliger, J., Reimer, B. & Fridman, L. (2020). MIT DriveSeg (Semi-auto) Dataset: Large-scale Semi-automated Annotation of Semantic Driving Scenes. Massachusetts Institute of Technology AgeLab Technical Report 2020-2, Cambridge, MA. (pdf)

Ding, L., Glazer, M., Terwilliger, J., Reimer, B. & Fridman, L. (2020). MIT DriveSeg (Semi-auto) Dataset. IEEE Dataport. DOI: 10.21227/nb3n-kk46.

 

Attribution and Contact Information

This work was done in collaboration with the Toyota Collaborative Safety Research Center (CSRC).

For any questions related to this dataset or requests to remove identifying information, please contact driveseg@mit.edu.

 


Semantic scene segmentation has primarily been addressed by forming representations of single images, with both supervised and unsupervised methods. The problem of semantic segmentation in dynamic scenes has recently begun to receive attention through video object segmentation approaches. What is not known is how much extra information the temporal dynamics of the visual scene carries that is complementary to the information available in the individual frames of the video.

Instructions: 

 

The MIT DriveSeg (Manual) Dataset is a forward-facing, frame-by-frame, pixel-level semantically labeled dataset captured from a moving vehicle during continuous daylight driving through a crowded city street.

The dataset can be downloaded from the IEEE DataPort or demoed as a video.

 

Technical Summary

Video data - 2 minutes 47 seconds (5,000 frames) of 1080p (1920x1080) 30 fps video

Class definitions (12) - vehicle, pedestrian, road, sidewalk, bicycle, motorcycle, building, terrain (horizontal vegetation), vegetation (vertical vegetation), pole, traffic light, and traffic sign
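As a hedged illustration of working with this dataset, the sketch below iterates over the video and pairs each decoded frame with a per-frame annotation mask by index. Only the resolution (1920x1080), frame rate (30 fps), and frame count (5,000) come from the summary above; the file names and mask format are hypothetical.

```python
# Minimal sketch: walk the video frame by frame and pair each frame with its
# annotation mask by index. Paths and mask naming are assumptions.
import cv2

cap = cv2.VideoCapture("driveseg_manual/video.mp4")  # hypothetical path
frame_idx = 0
while True:
    ok, frame = cap.read()  # frame is a 1080x1920x3 BGR array when ok is True
    if not ok:
        break
    # Hypothetical mask naming: one PNG of per-pixel class IDs per frame.
    mask_path = f"driveseg_manual/masks/frame_{frame_idx:05d}.png"
    # ... run or evaluate a segmentation model on (frame, mask_path) here ...
    frame_idx += 1
cap.release()
print(f"Read {frame_idx} frames")  # expected: 5000
```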

 

Technical Specifications, Open Source Licensing and Citation Information

Ding, L., Terwilliger, J., Sherony, R., Reimer, B. & Fridman, L. (2020). MIT DriveSeg (Manual) Dataset for Dynamic Driving Scene Segmentation. Massachusetts Institute of Technology AgeLab Technical Report 2020-1, Cambridge, MA. (pdf)

Ding, L., Terwilliger, J., Sherony, R., Reimer, B. & Fridman, L. (2020). MIT DriveSeg (Manual) Dataset. IEEE Dataport. DOI: 10.21227/mmke-dv03.

 

Related Research

Ding, L., Terwilliger, J., Sherony, R., Reimer, B. & Fridman, L. (2019). Value of Temporal Dynamics Information in Driving Scene Segmentation. arXiv preprint arXiv:1904.00758. (link)

 

Attribution and Contact Information

This work was done in collaboration with the Toyota Collaborative Safety Research Center (CSRC).

For any questions related to this dataset or requests to remove identifying information, please contact driveseg@mit.edu.

 


These datasets include the results of the comparison of different traffic-free path planning strategies presented in the work entitled "A primitive comparison for traffic-free path planning" by Antonio Artuñedo, Jorge Godoy, and Jorge Villagra. https://doi.org/10.1109/ACCESS.2018.2839884

Instructions: 

The dataset files are named p'x'.csv, where 'x' is the percentile used to filter the data included in the file. Each file contains data for both considered scenarios.

 

The content of each dataset is organized into a set of columns that describe the test case setup and results contained in each row (see the sketch after the list below).

The column order is:

  1. ID: Test case identifier.
  2. ID_num: Test case number.
  3. Scenario: Scenario number.
  4. RP select method: Reference points selection method.
  5. Primitive: Primitive used in the test case.
  6. RP opt. method: Reference points optimization method.
  7. RP opt. algorithm: Reference points optimization algorithm.
  8. RP cost fcn.: Cost function used in reference points optimization.
  9. SP opt. method: Seeding points optimization method.
  10. SP opt. algorithm: Seeding points optimization algorithm.
  11. SP cost fcn.: Cost function used in seeding points optimization.
  12. Init. heading: Initial heading setting.
  13. Final heading: Final heading setting.
  14. Init. curv.: Initial curvature setting.
  15. Final curv.: Final curvature setting.
  16. K_t (exc. time): Execution time KPI.
  17. K_kmax: Maximum curvature KPI.
  18. K_k0: KPI related to curvature along the path.
  19. K_k1: KPI related to the first derivative of curvature along the path.
  20. K_k2: KPI related to the second derivative of curvature along the path.
  21. K_cl: KPI related to centreline offset.
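As a hedged sketch of working with these files, the Python snippet below loads one percentile file and aggregates a KPI per primitive and scenario. The column header strings are assumed to match the names listed above; adjust them if the actual CSV headers differ.

```python
# Minimal sketch: load one percentile file and compare primitives by a KPI.
# Column header strings ("Scenario", "Primitive", "K_kmax") are assumed to
# match the list above; adjust to the actual CSV headers if they differ.
import pandas as pd

df = pd.read_csv("p50.csv")  # e.g., the 50th-percentile file

# Mean maximum-curvature KPI per primitive, broken down by scenario.
summary = (
    df.groupby(["Scenario", "Primitive"])["K_kmax"]
      .mean()
      .unstack("Primitive")
)
print(summary)
```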

For further details, please see the paper at https://doi.org/10.1109/ACCESS.2018.2839884, or contact the corresponding author at antonio.artunedo@car.upm-csic.es.
