
Action recognition

This self-harm action dataset was developed by ZIOVISION Co., Ltd. and consists of 1,120 videos. Actors were hired to simulate self-harm behaviors, and each scene was recorded with four cameras to ensure full coverage without blind spots. The self-harm behaviors in the dataset are limited to "cutting" actions targeting specific body parts; the designated areas are the wrists, forearms, and thighs.

The full dataset can be accessed through https://github.com/zv-ai/ZV_Self-harm-Dataset.git
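Once the repository is cloned, its video clips can be enumerated with a short script. This is a minimal sketch: the directory layout and file extensions below are assumptions, not documented specifics of the repository.

```python
from pathlib import Path

def list_videos(root, extensions=(".mp4", ".avi", ".mov")):
    """Recursively collect video files under the dataset root.

    The extension list is a guess; adjust it to match the files
    actually shipped in the repository.
    """
    root = Path(root)
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in extensions)

# Hypothetical usage after `git clone`:
# videos = list_videos("ZV_Self-harm-Dataset")
# print(len(videos))  # should report 1,120 if all clips are present
```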


Most existing human action datasets (e.g., the NTU RGB+D and Kinetics series) cover common actions in daily scenes. They were not created for Human-Robot Interaction (HRI), and most were not collected from the perspective of a service robot, so they cannot meet the needs of vision-based interactive action recognition.

This dataset, named the “Human-Robot Interactive Action Dataset From The Perspective Of Service Robot” (THU-HRIA dataset), was created for indoor service-robot action interaction and collected by Tsinghua University.


The dataset consists of 751 videos, each containing the performance of one of seven handball actions (passing, shooting, jump-shot, dribbling, running, crossing, defence). The videos were manually extracted from longer recordings of handball practice sessions.

The scenes were shot with stationary GoPro cameras mounted on the left or right side of the court, from different angles. The videos were recorded in at least full HD (1920 × 1080) resolution at 30 or more frames per second.
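For training a classifier on these clips, the seven category names need a stable integer encoding. A minimal sketch follows; the exact label strings used in the dataset's annotations are an assumption based on the category list above.

```python
# The seven handball action categories listed for the dataset.
CATEGORIES = ["passing", "shooting", "jump-shot", "dribbling",
              "running", "crossing", "defence"]

# Map each category name to an integer index (hypothetical encoding;
# the dataset's own annotation format may differ).
LABEL_TO_INDEX = {name: i for i, name in enumerate(CATEGORIES)}

def encode_label(name):
    """Return the integer index for a category name, case-insensitively."""
    return LABEL_TO_INDEX[name.strip().lower()]
```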
