DivNEDS: Diverse Naturalistic Edge Driving Scene Dataset for Autonomous Vehicle Scene Understanding

Citation Author(s):
John Owusu Duah
Submitted by:
John Owusu Duah
Last updated:
Tue, 02/20/2024 - 18:31
DOI:
10.21227/za6j-a529

Abstract 

The safe implementation and adoption of Autonomous Vehicle (AV) vision models on public roads requires not only an understanding of the natural environment comprising pedestrians and other vehicles, but also the ability to reason about edge situations such as unpredictable maneuvers by other drivers, impending accidents, erratic movement of pedestrians, cyclists, and motorcyclists, animal crossings, and cyclists using hand signals. Despite advances in complex tasks such as object tracking, human behavior modeling, activity recognition, and trajectory planning, the fundamental challenge of interpretable scene understanding, especially in out-of-distribution environments, remains evident. This is highlighted by the 84% of AV disengagements attributed to scene understanding errors in real-world AV tests. To address this limitation, we introduce the Diverse Naturalistic Edge Driving Scene Dataset (DivNEDS), a novel dataset comprising 11,084 edge scenes and 203,000 descriptive captions sourced from 12 distinct locations worldwide, captured under varying weather conditions and at different times of the day. Our approach includes a novel embedded hierarchical dense captioning strategy aimed at enabling few-shot learning and mitigating overfitting by excluding irrelevant scene elements. Additionally, we propose a Generative Region-to-Text Transformer with a baseline embedded hierarchical dense captioning performance of 60.3 mAP, a new benchmark for AV scene understanding models trained on dense-captioned datasets. This work represents a significant step toward improving AVs’ ability to comprehend diverse, real-world edge and complex driving scenarios, thereby enhancing their safety and adaptability in dynamic environments.

Instructions: 

## Data

### Download Data

From the root directory, change into the datasets directory:

~~~

 cd datasets

~~~

Download DivNEDS (prepared in COCO format) into the datasets directory with the command below:

~~~

 wget https://divneds.blob.core.windows.net/divneds/divneds.zip

~~~

Decompress DivNEDS with:

~~~

unzip divneds.zip

~~~

The dataset structure should look like this:

~~~

${DivNEDS_Root}
|-- datasets
`-- |-- divneds
    |-- |-- images/
    |-- |-- annotations/
    |-- |-- |-- train.json
    |-- |-- |-- valid.json
    |-- |-- |-- test.json

~~~
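
### Verify Annotations (Optional)

The snippet below is a minimal sketch for sanity-checking the downloaded annotation files using only the Python standard library. The paths assume the directory layout above and that you run it from ${DivNEDS_Root}; the exact field names inside each annotation record are not specified here, so inspect a record before relying on any particular key.

~~~

import json
from pathlib import Path

# Assumes the layout shown above, relative to ${DivNEDS_Root}.
ann_dir = Path("datasets/divneds/annotations")

for split in ("train", "valid", "test"):
    with open(ann_dir / f"{split}.json") as f:
        ann = json.load(f)
    # COCO-style files carry top-level "images" and "annotations" lists;
    # caption fields inside each annotation may be dataset-specific.
    print(f"{split}: {len(ann.get('images', []))} images, "
          f"{len(ann.get('annotations', []))} annotations")

~~~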
