Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-based Human Activity Recognition

Citation Author(s):
Mathias Ciliberto, University of Sussex
Vitor Fortes Rey, Deutsches Forschungszentrum für Künstliche Intelligenz
Alberto Calatroni, Lucerne University of Applied Sciences and Arts
Paul Lukowicz, Deutsches Forschungszentrum für Künstliche Intelligenz
Daniel Roggen, University of Sussex
Submitted by:
Daniel Roggen
Last updated:
Fri, 11/26/2021 - 08:45
DOI:
10.21227/vd6r-db31

Abstract 

Opportunity++ is a precisely annotated dataset designed to support AI and machine learning research focused on the multimodal perception and learning of human activities (e.g. short actions, gestures, modes of locomotion, higher-level behavior).

The Opportunity++ dataset is a significant multimodal extension of the original OPPORTUNITY Activity Recognition Dataset available at https://archive.ics.uci.edu/ml/datasets/OPPORTUNITY+Activity+Recognition. Opportunity++ includes the original video recordings as well as video-derived skeleton tracking data. This enables a wide range of novel multimodal activity recognition research based on video data, ambient- and object-integrated sensors, and wearable sensors (classification, automatic data segmentation, sensor fusion, feature extraction, etc.).

This release includes:

  • Body-worn sensors: 7 inertial measurement units, 12 3D acceleration sensors, and 4 sources of 3D localization information
  • Object sensors: 12 objects with 3D acceleration and 2D rate of turn
  • Ambient sensors: 13 switches and 8 3D acceleration sensors
  • Newly released anonymized side-view videos
  • Newly released OpenPose tracks for all the people in the videos. This includes the coordinates of the joints (nose, neck, …) of all the users in the video frames (a minimal parsing sketch is given after this list).
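
The OpenPose tracks can be read with a few lines of Python. The sketch below assumes the tracks follow the standard OpenPose per-frame JSON output (a "people" array whose entries carry a flat "pose_keypoints_2d" list of x, y, confidence triplets in BODY_25 joint order); the directory layout and file naming used here are placeholders, and the authoritative description is in the README.

```python
import json
from pathlib import Path

# Placeholder path; the actual directory layout is documented in the README.
TRACKS_DIR = Path("opportunity++/openpose/S1-ADL1")

def load_frame_keypoints(json_path):
    """Return, for each detected person, a list of (x, y, confidence) joints.

    Assumes the standard OpenPose JSON output, where each person carries a flat
    'pose_keypoints_2d' array of [x1, y1, c1, x2, y2, c2, ...].
    """
    with open(json_path) as f:
        frame = json.load(f)
    people = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]
        joints = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
        people.append(joints)
    return people

# Example: iterate over the per-frame JSON files of one run and read the nose joint.
for json_file in sorted(TRACKS_DIR.glob("*.json")):
    for person_joints in load_frame_keypoints(json_file):
        nose_x, nose_y, nose_conf = person_joints[0]  # joint 0 is the nose in BODY_25
```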

The dataset includes data from 4 users performing everyday living activities in a kitchen environment. For each user the dataset includes 6 runs. Five runs, termed Activity of Daily Living (ADL) runs, followed a given scenario as detailed in the documentation. The sixth, termed the Drill run, was designed to generate a large number of activity instances in a more constrained scenario. Each ADL run consists of temporally unfolding situations. In each situation (e.g. preparing a sandwich), a large number of action primitives occur (e.g. reach for bread, move to bread cutter, operate bread cutter).

The dataset includes a total of 19.75 hours of sensor data annotated with multiple tracks: 1.88 hours of actions performed with either hand, 6.01 hours of locomotion status, 3.02 hours of actions performed with a specific hand, and 4.89 hours of high-level activities. Moreover, the sensors placed on the objects produced a total of 3.92 hours of annotated data.

Overall, the dataset comprises more than 24000 unique annotations, divided into 2551 activity instances performed with either the left or the right hand, 3653 activity instances of locomotion, 12242 action instances performed with a single specific hand, 122 instances of high-level activities and 6103 instances of interaction with the objects in the kitchen.

Instructions: 

Complete documentation is provided in the readme.
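
As a quick-start complement to the readme, the sketch below shows one way to load a single recording's sensor data and annotation tracks with NumPy. It assumes the sensor files keep the column-text format of the original OPPORTUNITY dataset (whitespace-separated values, one sample per line, missing readings written as NaN, timestamp in the first column and the annotation columns at the end); the file name and column split used here are assumptions, and the definitive column map is in the readme.

```python
import numpy as np

# Placeholder file name; see the readme for the actual naming scheme and column map.
RUN_FILE = "opportunity++/dataset/S1-ADL1.dat"

# Assumed layout (based on the original OPPORTUNITY format): whitespace-separated
# columns, "NaN" for missing sensor readings, label columns at the end of each line.
data = np.loadtxt(RUN_FILE)               # shape: (num_samples, num_columns)
timestamps = data[:, 0]                    # first column: sample timestamp
NUM_LABEL_COLUMNS = 7                      # assumption; check the readme's column map
sensor_channels = data[:, 1:-NUM_LABEL_COLUMNS]
label_tracks = data[:, -NUM_LABEL_COLUMNS:].astype(int)

print(f"{data.shape[0]} samples, {sensor_channels.shape[1]} sensor channels")
```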

Dataset Files

The dataset files are open access: any logged-in user can download them. A free IEEE account is sufficient; IEEE membership is not required.

Documentation

  • README.md (22.16 KB)
  • README.pdf (262.61 KB)