MEx - Multi-modal Exercise Dataset


Citation Author(s):
Anjana Wijekoon, School of Computing and Digital Media, Robert Gordon University, Aberdeen, UK
Nirmalie Wiratunga, School of Computing and Digital Media, Robert Gordon University, Aberdeen, UK
Kay Cooper, School of Health Sciences, Robert Gordon University, Aberdeen, UK
Submitted by:
Anjana Wijekoon
Last updated:
Tue, 10/01/2019 - 10:57
DOI:
10.21227/h7g2-a333

The Multi-modal Exercise (MEx) dataset is a multi-sensor, multi-modal dataset created to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Collection of this dataset was inspired by the need to recognise and evaluate the quality of exercise performance in order to support patients with Musculoskeletal Disorders (MSD). The MEx dataset contains data from 30 people recorded with four sensors: two accelerometers, a pressure mat and a depth camera. Seven exercises highly recommended by physiotherapists for patients with low-back pain were selected for the data collection. The two accelerometers were placed on the wrist and the thigh, and each person performed the exercises on the pressure mat while being recorded by a depth camera from above. Each person performed each exercise for a maximum of 60 seconds. The dataset contains three data modalities, numerical time-series data, video data and pressure sensor data, posing interesting research challenges for HAR and exercise quality assessment. With recent advances in multi-modal fusion, we also believe MEx is instrumental in benchmarking not only HAR algorithms but also fusion algorithms for heterogeneous data types across multiple application domains.

 

Instructions: 

The MEx Multi-modal Exercise dataset contains data from 7 different physiotherapy exercises, performed by 30 subjects and recorded with two accelerometers, a pressure mat and a depth camera.

Application

The dataset can be used for exercise recognition, exercise quality assessment and exercise counting, by developing algorithms for pre-processing, feature extraction, multi-modal sensor fusion, segmentation and classification.

 

Data collection method

At the beginning of the session, each subject was given a sheet describing the 7 exercises with instructions. Before each exercise, the researcher demonstrated the exercise to the subject; the subject then performed it for a maximum of 60 seconds while being recorded with the four sensors. During the recording, the researcher did not give any advice, keep count, or keep time to enforce a rhythm.

 

Sensors

Orbbec Astra Depth Camera
- sampling frequency: 15 Hz
- frame size: 240x320

 

Sensing Tex Pressure Mat
- sampling frequency: 15 Hz
- frame size: 32x16

Axivity AX3 3-Axis Logging Accelerometer
- sampling frequency: 100 Hz
- range: ±8 g
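Because the accelerometers sample at 100 Hz while the depth camera and pressure mat run at 15 Hz, fusing the modalities typically requires aligning the streams first. A minimal sketch of one way to do this, averaging accelerometer samples into 1/15-second bins; the binning strategy and function name are illustrative, not part of the dataset:

```python
import numpy as np

def downsample_to_15hz(acc, src_hz=100, dst_hz=15):
    """Downsample an (n, 3) array of x/y/z samples at src_hz to dst_hz
    by averaging the source samples that fall into each destination bin."""
    n_out = int(len(acc) * dst_hz / src_hz)
    # Bin edges in source-sample indices, one bin per 1/dst_hz seconds
    edges = (np.arange(n_out + 1) * src_hz / dst_hz).astype(int)
    return np.stack([acc[a:b].mean(axis=0)
                     for a, b in zip(edges[:-1], edges[1:])])
```

For example, one second of 100 Hz data (100 rows) becomes 15 averaged rows, matching the camera and mat frame rate.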

 

Sensor Placement

All the exercises were performed lying down on the mat, with the subject wearing the two accelerometers on the wrist and the thigh. The depth camera was placed above the subject, facing downwards to record an aerial view. The top of the depth camera frame was aligned with the top of the pressure mat frame and the subject's shoulders, so that the face is not included in the depth camera video.

 

Data folder

The MEx folder has four sub-folders, one for each sensor. Inside each sensor folder are 30 folders, one for each subject. Each subject folder contains 8 files, one per exercise, with 2 files for exercise 4 because it is performed on two sides. (Subject 22 has only 7 files, as they performed exercise 4 on one side only.) Each line in a data file corresponds to one timestamped sensor reading.
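The sensor/subject/exercise hierarchy described above can be traversed with a few lines of standard-library Python. A minimal sketch, assuming the archive root is the MEx folder (the function name is ours):

```python
import os

def list_recordings(root):
    """Walk <root>/<sensor>/<subject>/ and collect (sensor, subject, filename)
    triples, one per exercise recording file."""
    recordings = []
    for sensor in sorted(os.listdir(root)):               # one folder per sensor
        sensor_dir = os.path.join(root, sensor)
        if not os.path.isdir(sensor_dir):
            continue
        for subject in sorted(os.listdir(sensor_dir)):    # one folder per subject
            subject_dir = os.path.join(sensor_dir, subject)
            for fname in sorted(os.listdir(subject_dir)): # up to 8 exercise files
                recordings.append((sensor, subject, fname))
    return recordings
```

With the full dataset this should yield close to 4 sensors x 30 subjects x 8 files, minus the one missing side of exercise 4 for subject 22.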

 

Attribute Information

 

The 4 columns in the act and acw files are organized as follows:

1 – timestamp

2 – x value

3 – y value

4 – z value

Min value = -8

Max value = +8
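A row of an act or acw file can be split into its four columns as sketched below. This assumes comma-separated values; the timestamp is kept as a string since its exact format is not specified above, and the function name is ours:

```python
import csv
from io import StringIO

def parse_acc_row(line):
    """Split one act/acw line into (timestamp, (x, y, z)).

    Axis values are accelerations in g, bounded to [-8, +8] per the
    attribute description.
    """
    timestamp, x, y, z = next(csv.reader(StringIO(line)))
    return timestamp, (float(x), float(y), float(z))
```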

 

The 513 columns in the pm files are organized as follows:

1 - timestamp

2-513 – pressure mat data frame (32x16)

Min value – 0

Max value – 1
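The 512 pressure values of one pm row (columns 2 to 513) can be turned back into a 2-D frame as sketched below. Row-major ordering of the flattened frame is an assumption, and the function name is ours:

```python
import numpy as np

def parse_pm_row(values):
    """Reshape the 512 pressure readings of one pm row (timestamp already
    stripped) into a 32x16 frame; readings are normalised to [0, 1]."""
    return np.asarray(values, dtype=float).reshape(32, 16)
```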

 

The 193 columns in the dc files are organized as follows:

1 - timestamp

2-193 – depth camera data frame (12x16)

 

The dc data frame is scaled down from 240x320 to 12x16 using the OpenCV resize algorithm.

Min value – 0

Max value – 1
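As with the pressure mat, the 192 depth values of one dc row (columns 2 to 193) can be reshaped into a frame. Row-major ordering is again an assumption, and the function name is ours:

```python
import numpy as np

def parse_dc_row(values):
    """Reshape the 192 depth values of one dc row (timestamp already
    stripped) into a 12x16 frame; values are normalised to [0, 1]."""
    return np.asarray(values, dtype=float).reshape(12, 16)
```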


Documentation

Exercises.pdf (47.07 KB)

[1] Anjana Wijekoon, Nirmalie Wiratunga, Kay Cooper, "MEx - Multi-modal Exercise Dataset", IEEE Dataport, 2019. [Online]. Available: http://dx.doi.org/10.21227/h7g2-a333. Accessed: Dec. 12, 2019.