Open Access
MIT DriveSeg (Semi-auto) Dataset
- Submitted by: Meng Wang
- Last updated: Thu, 06/04/2020 - 12:07
- DOI: 10.21227/nb3n-kk46
Abstract
Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). To develop and evaluate driving scene perception models at scale, and more importantly, to cover potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task. We hereby present the MIT DriveSeg (Semi-auto) Dataset: a large-scale video driving scene dataset containing 20,100 video frames with pixel-wise semantic annotation. We propose semi-automatic annotation approaches that leverage both manual and computational efforts to annotate the data more efficiently and at a lower cost than fully manual annotation.
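For readers who want to work with the data programmatically, the sketch below shows one way to pair an extracted video frame with its pixel-wise annotation mask in Python. It is a minimal sketch only: the extraction directory, the `frames/` and `annotations/` subfolders, the file extensions, and the frame id are assumptions made for illustration, not the dataset's documented layout; consult MIT_DriveSeg_Semiauto.pdf for the actual structure of DriveSeg (Semi-auto).zip.

```python
# Minimal sketch: pair an extracted video frame with its per-pixel annotation mask.
# NOTE: the paths ("frames/", "annotations/"), file extensions, and frame id below
# are assumptions for illustration; see MIT_DriveSeg_Semiauto.pdf for the real layout.
from pathlib import Path

import numpy as np
from PIL import Image

DATA_ROOT = Path("DriveSeg_Semiauto")  # hypothetical directory the zip was extracted to


def load_frame_and_mask(frame_id: str):
    """Load one RGB frame and its class-index annotation mask as NumPy arrays."""
    frame = np.array(Image.open(DATA_ROOT / "frames" / f"{frame_id}.jpg"))
    mask = np.array(Image.open(DATA_ROOT / "annotations" / f"{frame_id}.png"))
    return frame, mask


if __name__ == "__main__":
    frame, mask = load_frame_and_mask("000001")  # hypothetical frame id
    print("frame shape:", frame.shape)           # e.g. (H, W, 3)
    print("classes present:", np.unique(mask))   # per-pixel semantic class ids
```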
Dataset Files
- DriveSeg (Semi-auto).zip (13.46 GB)
Open Access dataset files are accessible to all logged-in users; a free IEEE account is sufficient, and IEEE membership is not required.
Documentation
| Attachment | Size |
| --- | --- |
| MIT_DriveSeg_Semiauto.pdf | 2.73 MB |