AVDM Automated Vehicle Driver Monitoring Dataset

Citation Author(s):
Department Intelligent Transport Systems, Johannes Kepler University Linz, 4040 Linz, Austria
Submitted by:
Mohamed Sabry
Last updated:
Mon, 05/06/2024 - 04:29
Data Format:


The JKU-ITS AVDM contains data from 17 participants performing different tasks at various levels of distraction.
The data collection was carried out in accordance with the relevant guidelines and regulations, and informed consent was obtained from all participants.
The dataset was collected using the JKU-ITS research vehicle with automated capabilities under different illumination and weather conditions along a secure test route within the JKU campus. Participants were asked to perform 8 activities (listed below), comprising manual driving and 7 non-driving-related tasks, while the vehicle autonomously navigated the test route.

The activities in the dataset:
• Manual driving (baseline)
• Sitting still in the driver's seat
• Using a phone for browsing, etc.
• Initiating a call on a phone
• Reading a magazine
• Reading a newspaper
• Reading a book
• Drinking a beverage from a bottle



The recorded data totals 200 minutes in the form of RGB videos with their respective RGB image folders.
Collection sessions were conducted between 10 am and 5 pm, encompassing both rainy and sunny conditions.


The dataset provides two sets of labels, defined as follows:

1. Video-level labels following the Charades dataset format, providing detailed information about each action instance in the video, as exemplified below:

"s01v02": {
"subset": "training",
"duration": 44.0,
"actions": [  [5, 2.0, 44.0]  ] }

2. Per-Frame Labeling:

Frame, Timestamp,    Action
  0,   1683204895, sitting_still
  1,   1683204895, sitting_still
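The per-frame CSV can be parsed with the standard library; this sketch assumes the three-column layout shown above (Frame, Timestamp, Action) with optional whitespace after commas.

```python
import csv
import io

def read_frame_labels(csv_text):
    """Parse per-frame labels into (frame, timestamp, action) tuples.

    Column order follows the 'Frame, Timestamp, Action' header above.
    """
    rows = csv.reader(io.StringIO(csv_text), skipinitialspace=True)
    next(rows)  # skip the header row
    return [(int(frame), int(ts), action) for frame, ts, action in rows]

sample_csv = """Frame, Timestamp, Action
0, 1683204895, sitting_still
1, 1683204895, sitting_still
"""
labels = read_frame_labels(sample_csv)
print(labels[0])  # (0, 1683204895, 'sitting_still')
```

For a real file, replace the `io.StringIO` wrapper with `open(path)` on one of the per-video CSV files described in the directory structure below.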

The format of the dataset and its structure is shown below:

Root Data Directory
├── s{Participant No. 1}
│   ├── s{Participant No. 1}v{Video No. 01}.webm
│   ├── s{Participant No. 1}v{Video No. 01}.csv
│   ├── s{Participant No. 1}v{Video No. 01}
│   ├── ...
│   ├── s{Participant No. 1}v{Video No. N}.webm
│   ├── s{Participant No. 1}v{Video No. N}.csv
│   └── s{Participant No. 1}v{Video No. N}
├── ...
├── s{Participant No. 17}
│   ├── s{Participant No. 17}v{Video No. 01}.webm
│   ├── s{Participant No. 17}v{Video No. 01}.csv
│   ├── s{Participant No. 17}v{Video No. 01}
│   ├── ...
│   ├── s{Participant No. 17}v{Video No. N}.webm
│   ├── s{Participant No. 17}v{Video No. N}.csv
│   └── s{Participant No. 17}v{Video No. N}
├── actions.names
└── labels.json

The CSV files contain per-frame labels, and labels.json contains the video-level labels for each video in the dataset.
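Given the layout above, the dataset can be enumerated with a short directory walk. This is a sketch under the assumption that each participant folder starts with `s` and each video has a matching `.webm` file, `.csv` label file, and RGB image folder of the same base name; the root path is a placeholder.

```python
from pathlib import Path

def list_videos(root):
    """Collect (video, per-frame labels, image folder) triples per participant.

    Assumes the s{participant}v{video} naming scheme described above.
    """
    triples = []
    for participant_dir in sorted(Path(root).glob("s*")):
        if not participant_dir.is_dir():
            continue
        for webm in sorted(participant_dir.glob("s*v*.webm")):
            csv_file = webm.with_suffix(".csv")  # per-frame labels
            frames_dir = webm.with_suffix("")    # RGB image folder
            triples.append((webm, csv_file, frames_dir))
    return triples
```

Each returned triple pairs one recording with its per-frame CSV and image folder, so a training loop can iterate participants and videos without hard-coding counts.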

Funding Agency: 
This work was partially supported by the Austrian Science Fund (FWF), project number P 34485-N.
