MIT DriveSeg (Semi-auto) Dataset

Citation Author(s):
Li Ding (MIT), Michael Glazer (MIT), Jack Terwilliger (MIT), Bryan Reimer (MIT), Lex Fridman (MIT)
Submitted by: Meng Wang
Last updated: Thu, 06/04/2020 - 12:07
DOI: 10.21227/nb3n-kk46

Abstract 

Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). To develop and evaluate driving scene perception models at scale and, more importantly, to cover potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task. We hereby present the MIT DriveSeg (Semi-auto) Dataset: a large-scale video driving scene dataset containing 20,100 video frames with pixel-wise semantic annotation. We propose semi-automatic annotation approaches that leverage both manual and computational efforts to annotate the data more efficiently and at lower cost than fully manual annotation.
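For readers who want a feel for how frame/annotation pairs in a dataset like this are typically consumed, below is a minimal sketch of loading one video frame together with its pixel-wise semantic mask. The directory layout, file naming scheme, and single-channel PNG label encoding are assumptions made for illustration only; the actual file format is described in the accompanying documentation (MIT_DriveSeg_Semiauto.pdf).

```python
# Minimal sketch of pairing RGB frames with pixel-wise class-label masks.
# NOTE: paths, file names, and the PNG label encoding below are hypothetical;
# consult MIT_DriveSeg_Semiauto.pdf for the dataset's real layout.
from pathlib import Path

import numpy as np
from PIL import Image

FRAME_DIR = Path("driveseg_semiauto/frames")  # hypothetical frame directory
MASK_DIR = Path("driveseg_semiauto/masks")    # hypothetical annotation directory


def load_pair(frame_id: int) -> tuple[np.ndarray, np.ndarray]:
    """Load one RGB frame and its per-pixel class-ID mask as NumPy arrays."""
    frame = np.asarray(Image.open(FRAME_DIR / f"{frame_id:05d}.png").convert("RGB"))
    mask = np.asarray(Image.open(MASK_DIR / f"{frame_id:05d}.png"))  # H x W class IDs
    # Pixel-wise annotation means the mask must align with the frame exactly.
    assert frame.shape[:2] == mask.shape, "mask must align pixel-for-pixel with frame"
    return frame, mask


if __name__ == "__main__":
    frame, mask = load_pair(0)
    print(frame.shape, mask.shape, np.unique(mask))  # classes present in this frame
```

Under these assumptions, storing class IDs as single-channel PNGs keeps the masks lossless and lets them be indexed with NumPy alongside the RGB frames.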

 


Dataset Files

Open Access dataset files are accessible to all logged-in users; a free IEEE account is sufficient, and IEEE membership is not required.

Documentation

Attachment: MIT_DriveSeg_Semiauto.pdf (2.73 MB)