A First-Person Optical Flow Video Dataset for the 3-Meter Timed Up and Go (TUG) Test

Citation Author(s):
Hsu, Cheng-Huang
Chu, Edward T.-H.
Lee, Chia-Rong
Submitted by:
Hsu Cheng-Huang
Last updated:
Thu, 04/24/2025 - 20:51
DOI:
10.21227/5tj5-ad84

Abstract 

This dataset aims to support research on temporal segmentation of the Timed Up and Go (TUG) test using a first-person wearable camera. The data collection comprises a training set of 8 participants and a test set of 60 participants. Each of the 8 training participants completed the test twice: once at a normal walking pace and once at a simulated slower pace that mimics elderly movement patterns. The 60 test participants were randomly divided into two groups: one group completed the test at a normal walking pace, and the other simulated a slower walking speed to mimic elderly movement patterns. Video data were captured with a Realtek AMB82-mini AI Camera and preprocessed with the Farneback algorithm to extract optical flow features for subsequent analysis.
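As a rough illustration of this preprocessing step, the sketch below computes dense Farneback optical flow between consecutive grayscale frames with OpenCV and reduces each flow field to a single mean-magnitude motion value per frame pair. The video filename, the Farneback parameter values, and the mean-magnitude summary are illustrative assumptions, not the exact pipeline used to produce this dataset.

# Illustrative sketch only: dense Farneback optical flow per frame pair (OpenCV).
# The file name, parameter values, and mean-magnitude feature are assumptions,
# not the exact preprocessing used to build this dataset.
import cv2
import numpy as np

cap = cv2.VideoCapture("tug_recording.mp4")  # hypothetical first-person TUG video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

features = []  # one motion value per consecutive frame pair
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense flow with common defaults: pyr_scale=0.5, levels=3, winsize=15,
    # iterations=3, poly_n=5, poly_sigma=1.2, flags=0
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    features.append(float(np.mean(mag)))  # average motion magnitude for this frame pair
    prev_gray = gray

cap.release()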

Each participant's data includes motion feature points and segmentation of the TUG test into six distinct phases: standing up, walking (outbound), turning (outbound), walking (return), turning (return), and sitting down. The dataset can be used to develop and validate machine learning models for six-phase classification, feature extraction algorithms, and signal analysis techniques.

The dataset is fully anonymized and has been ethically approved by the Human Research Ethics Committee of National Chung Cheng University.

Instructions: 

This dataset consists of 8 training sets and 60 test sets. Each of the 8 training sets contains 20 data points, while each of the 60 test sets contains 10 data points. Each data point includes the initial feature vector extracted from first-person perspective images, the smoothed feature vector, the normalized feature vector, and time-related features.
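The smoothed and normalized feature vectors are derived from the initial one; the exact filters are not specified here, so the following is only a minimal sketch, assuming a moving-average smoother and min-max normalization applied to a placeholder feature sequence.

# Minimal sketch, assuming a moving-average smoother and min-max normalization;
# the actual smoothing/normalization used to produce this dataset may differ.
import numpy as np

def smooth(x, window=5):
    # Simple moving average over a 1-D per-frame feature sequence.
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def normalize(x):
    # Min-max scale a feature sequence to the [0, 1] range.
    lo, hi = np.min(x), np.max(x)
    return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)

initial = np.random.default_rng(0).random(300)  # placeholder for an initial feature vector
smoothed = smooth(initial)
normalized = normalize(smoothed)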

Each data point represents image data sampled at 1920×1080 resolution and 30 FPS. The training and test sets are stored in two separate folders:

The TRAIN folder contains three subfolders: TRAIN_DATASET, TRAIN_SEGMENTATION, and TRAIN_VIDEO.

  • TRAIN_DATASET: Contains the initial optical flow data, smoothed optical flow data, and normalized optical flow data.
  • TRAIN_SEGMENTATION: Contains the segmentation times for the six phases of the Timed Up and Go (TUG) test, expressed as frame indices (see the conversion sketch after this list).
  • TRAIN_VIDEO: Contains the experimental videos of the TUG test.
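Because the videos are recorded at 30 FPS, the frame-based segmentation times can be converted to seconds by dividing by the frame rate. A minimal sketch, with made-up boundary values that are not taken from the dataset:

# Convert frame-index phase boundaries to seconds at 30 FPS.
# The boundary values below are made-up examples, not taken from the dataset.
FPS = 30.0

phase_frames = {
    "standing up":        (0, 45),
    "walking (outbound)": (45, 150),
    "turning (outbound)": (150, 195),
    "walking (return)":   (195, 300),
    "turning (return)":   (300, 345),
    "sitting down":       (345, 400),
}

for phase, (start, end) in phase_frames.items():
    t0, t1 = start / FPS, end / FPS
    print(f"{phase}: {t0:.2f}s - {t1:.2f}s (duration {t1 - t0:.2f}s)")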

The training set filenames are numbered 1 to 16, with numbers 1 to 8 representing normal walking and numbers 9 to 16 representing simulated elderly walking.

The TEST folder contains three subfolders: TEST_DATASET, TEST_SEGMENTATION, and TEST_VIDEO.

  • TEST_DATASET: Contains the initial optical flow data, smoothed optical flow data, and normalized optical flow data.
  • TEST_SEGMENTATION: Contains the segmentation times for the six phases of the Timed Up and Go (TUG) test, expressed as frame indices.
  • TEST_VIDEO: Contains the experimental videos of the TUG test.

The data in the TEST folder are divided into two groups: participants walking at a normal pace and participants simulating elderly walking. The data points within each group are numbered 1 to 30.