FallAllD is a large open dataset of human falls and activities of daily living simulated by 15 participants. FallAllD consists of 26420 files collected using three data-loggers worn on the waist, wrist and neck of the subjects. Motion signals are captured using an accelerometer, gyroscope, magnetometer and barometer, with configurations chosen to suit the potential applications, e.g. fall detection, fall prevention and human activity recognition.

Instructions: 

Data files are stored in comma-separated values (CSV) format. We developed two tools for encapsulating the data. The first, FallAllD_Files_to_Matlab_Struct, is a MATLAB script that converts the dataset into a MATLAB structure stored as a ".mat" file. The structure contains 8 fields: {SubjectID, ActivityID, TrialNo, Device, Acc, Gyr, Mag, Bar}. The second, FallAllD_Files_to_Python_Struct, is a Python script that converts the dataset into a Pandas dataframe stored in HDF (".h5") or pickle (".pkl") format. The dataframe has the same fields as the MATLAB structure.
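For example, the Python version can be loaded as follows. This is a minimal sketch: the file name FallAllD.pkl and the device label "Waist" are illustrative assumptions; only the field names are fixed by the structure above.

    import pandas as pd

    # Load the dataframe produced by FallAllD_Files_to_Python_Struct.
    # File name is an assumption; use pd.read_hdf(...) for the ".h5" variant.
    df = pd.read_pickle("FallAllD.pkl")

    # Each row is one recording with the 8 fields listed above.
    # The device label "Waist" is an assumption for illustration.
    waist = df[(df["SubjectID"] == 1) & (df["Device"] == "Waist")]
    print(waist[["ActivityID", "TrialNo"]].head())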

To get familiar with FallAllD, use the MATLAB script Plot_FallAllD_Register to plot any register (record) of the dataset.

If you use this dataset, please cite the following publication:

M. Saleh, M. Abbas and R. L. B. Jeannès, "FallAllD: An Open Dataset of Human Falls and Activities of Daily Living for Classical and Deep Learning Applications," in IEEE Sensors Journal, doi: 10.1109/JSEN.2020.3018335.


PRECIS HAR is an RGB-D dataset for human activity recognition, captured with the Orbbec Astra Pro 3D camera. It consists of 16 different activities (stand up, sit down, sit still, read, write, cheer up, walk, throw paper, drink from a bottle, drink from a mug, move hands in front of the body, move hands close to the body, raise one hand up, raise one leg up, fall from bed, and faint), performed by 50 subjects.

Instructions: 

The dataset consists of RGB data (.mp4 files) and depth data (.oni files). We provide both cropped and raw versions. The cropped videos are shorter, containing only the seconds of interest, i.e. where the activity is performed. The raw videos are longer, containing all the footage captured while filming the dataset. We include both variants because each can be useful for different applications.

Video names follow the pattern <subject_id>_<activity_id>.<extension>, where:

  • <subject_id> is an integer between 1 and 50;

  • <activity_id> is an integer between 1 and 16, with the following mapping: 1 = stand up, 2 = sit down, 3 = sit still, 4 = read, 5 = write, 6 = cheer up, 7 = walk, 8 = throw paper, 9 = drink from a bottle, 10 = drink from a mug, 11 = move hands in front of the body, 12 = move hands close to the body, 13 = raise one hand up, 14 = raise one leg up, 15 = fall from bed, 16 = faint;

  • <extension> is .mp4 or .oni, depending on the type of data (RGB or depth).

To manipulate .oni files, we recommend using pyoni.
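As an illustration, the following Python sketch parses a video name according to this convention. The helper function and the example file name are hypothetical; the activity mapping simply restates the list above.

    from pathlib import Path

    # Activity mapping, restated from the list above.
    ACTIVITIES = {
        1: "stand up", 2: "sit down", 3: "sit still", 4: "read",
        5: "write", 6: "cheer up", 7: "walk", 8: "throw paper",
        9: "drink from a bottle", 10: "drink from a mug",
        11: "move hands in front of the body",
        12: "move hands close to the body",
        13: "raise one hand up", 14: "raise one leg up",
        15: "fall from bed", 16: "faint",
    }

    def parse_video_name(name):
        # Split "<subject_id>_<activity_id>.<extension>" into its parts.
        p = Path(name)
        subject_id, activity_id = (int(x) for x in p.stem.split("_"))
        return subject_id, activity_id, ACTIVITIES[activity_id], p.suffix

    print(parse_video_name("12_15.mp4"))  # (12, 15, 'fall from bed', '.mp4')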


The Multi-modal Exercises Dataset (MEx) is a multi-sensor, multi-modal dataset implemented to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Its collection was inspired by the need to recognise and evaluate the quality of exercise performance in order to support patients with Musculoskeletal Disorders (MSD). The MEx dataset contains data from 30 people recorded with four sensors: two accelerometers, a pressure mat and a depth camera.

Instructions: 

The MEx Multi-modal Exercise dataset contains data for 7 different physiotherapy exercises, performed by 30 subjects and recorded with 2 accelerometers, a pressure mat and a depth camera.

Application

The dataset can be used for exercise recognition, exercise quality assessment and exercise counting, by developing algorithms for pre-processing, feature extraction, multi-modal sensor fusion, segmentation and classification.

 

Data collection method

Each subject was given a sheet with instructions for the 7 exercises at the beginning of the session. At the beginning of each exercise, the researcher demonstrated the exercise to the subject; the subject then performed the exercise for a maximum of 60 seconds while being recorded with the four sensors. During the recording, the researcher did not give any advice, count repetitions, or keep time to enforce a rhythm.

 

Sensors

Orbbec Astra Depth Camera

- sampling frequency: 15 Hz

- frame size: 240x320

 

Sensing Tex Pressure Mat

- sampling frequency: 15 Hz

- frame size: 32x16

Axivity AX3 3-Axis Logging Accelerometer

- sampling frequency: 100 Hz

- range: ±8g

 

Sensor Placement

All the exercises were performed lying down on the mat, with the subject wearing two accelerometers on the wrist and the thigh. The depth camera was placed above the subject, facing downwards to record an aerial view. The top of the depth camera frame was aligned with the top of the pressure mat frame and the subject's shoulders, so that the face is not included in the depth video.

 

Data folder

The MEx folder has four folders, one for each sensor. Inside each sensor folder are 30 folders, one for each subject. Each subject folder contains 8 files: one per exercise, with 2 files for exercise 4, as it is performed on two sides. (Subject 22 has only 7 files, having performed exercise 4 on one side only.) Each line in a data file corresponds to one timestamped sensor reading.
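A minimal Python sketch for walking this layout; the sensor folder names (act, acw, dc, pm) are assumptions based on the file prefixes used in the attribute information below.

    import os

    # Sensor folder names (act, acw, dc, pm) are assumptions based on the
    # file prefixes described in the attribute information below.
    root = "MEx"
    for sensor in ["act", "acw", "dc", "pm"]:
        sensor_dir = os.path.join(root, sensor)
        for subject in sorted(os.listdir(sensor_dir)):
            files = os.listdir(os.path.join(sensor_dir, subject))
            # 8 files expected per subject (7 for subject 22).
            print(sensor, subject, len(files))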

 

Attribute Information

 

The 4 columns in the act and acw (accelerometer) files are organized as follows:

1 – timestamp

2 – x value

3 – y value

4 – z value

Min value = -8

Max value = +8

 

The 513 columns in the pm file are organized as follows:

1 - timestamp

2-513 – pressure mat data frame (32x16)

Min value = 0

Max value = 1

 

The 193 columns in the dc file are organized as follows:

1 - timestamp

2-193 – depth camera data frame (12x16)

 

The dc data frame is scaled down from 240x320 to 12x16 using the OpenCV resize algorithm.

Min value = 0

Max value = 1
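Putting the above together, a minimal Python loading sketch; the file paths are placeholders, and we assume comma-separated rows with no header, one timestamped sample per line, as described above.

    import pandas as pd

    # File paths are placeholders; rows are assumed to be comma-separated
    # with no header, one timestamped sample per line.
    acc = pd.read_csv("MEx/act/01/ex1.csv", header=None)  # 4 cols: t, x, y, z
    pm = pd.read_csv("MEx/pm/01/ex1.csv", header=None)    # 513 cols: t + 32x16
    dc = pd.read_csv("MEx/dc/01/ex1.csv", header=None)    # 193 cols: t + 12x16

    # Reshape each flattened row back into its 2-D frame.
    pm_frames = pm.iloc[:, 1:].to_numpy().reshape(-1, 32, 16)
    dc_frames = dc.iloc[:, 1:].to_numpy().reshape(-1, 12, 16)
    print(pm_frames.shape, dc_frames.shape)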


In an aging population, the demand for nurse workers increases to care for elders. Helping nurse workers make their work more efficient will help increase elders' quality of life, as the nurses can focus their efforts on care activities instead of other tasks such as documentation.
Activity recognition can be used toward this goal. If we can recognize what activity a nurse is engaged in, we can partially automate the documentation process to reduce the time spent on this task, and monitor care plan compliance to ensure that all care activities have been done for each elder, among other uses.


A new dataset named Sanitation is released to evaluate HAR algorithms' performance and to benefit researchers in this field; it collects seven types of daily work activity data from sanitation workers. We provide two .csv files: the raw dataset "sanitation.csv" and a pre-processed feature dataset suitable for machine-learning-based human activity recognition methods.
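As an illustration, a minimal Python sketch for using the feature dataset with a standard classifier; the feature file name and the "label" column are hypothetical, since only "sanitation.csv" is named above.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical file and column names; only "sanitation.csv" is named above.
    df = pd.read_csv("sanitation_features.csv")
    X, y = df.drop(columns=["label"]), df["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))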


This is a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users.

The dataset comprises 7 months of measurements collected from all sensors of 4 smartphones carried at typical body locations, together with images from a body-worn camera, while 3 participants used 8 different modes of transportation in the southeast of the United Kingdom, including in London.


Recognition of human activities is one of the most promising research areas in artificial intelligence. It has advanced along with technological progress in sensing as well as the high demand for applications that are mobile, context-aware, and real-time. We have used a smartwatch (Apple Watch) to collect sensory data for 14 ADLs (Activities of Daily Living).
