We introduce HUMAN4D, a large multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap system, a volumetric capture system, and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered in single- and multi-person daily, physical, and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric, and audio data.

Instructions: 

* The paper describing this dataset is currently under review. The full dataset will be published along with the paper; in the meantime, more parts of the dataset will be uploaded.

The dataset includes multi-view RGBD, 3D/2D pose, volumetric (mesh/point-cloud/3D character) and audio data along with metadata for spatiotemporal alignment.

The full dataset is split per subject, per activity, and per modality.

There are also two benchmarking subsets: H4D1 for single-person sequences and H4D2 for two-person sequences.

The formats are:

  • mRGBD: *.png
  • 3D/2D poses: *.npy
  • volumetric (mesh/point-cloud): *.ply
  • 3D character: *.fbx
  • metadata: *.txt, *.json
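
As a quick orientation, the hedged Python sketch below loads one frame of each modality. Every path is a hypothetical placeholder for the per-subject/per-activity/per-modality layout of the release, and reading the .ply files assumes the open3d package.

    # Hedged sketch: load one frame of each HUMAN4D modality.
    # All paths are hypothetical placeholders; adapt them to the actual
    # per-subject / per-activity / per-modality layout of the release.
    import json

    import numpy as np
    from PIL import Image   # pip install pillow
    import open3d as o3d    # pip install open3d

    rgb   = np.asarray(Image.open("S1/dancing/mrgbd/color_000001.png"))
    depth = np.asarray(Image.open("S1/dancing/mrgbd/depth_000001.png"))
    poses = np.load("S1/dancing/pose/poses_3d.npy")   # 3D joint positions

    cloud = o3d.io.read_point_cloud("S1/dancing/volumetric/frame_000001.ply")

    with open("S1/dancing/metadata.json") as f:
        meta = json.load(f)   # spatiotemporal-alignment metadata

    print(rgb.shape, depth.shape, poses.shape, len(cloud.points))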

 


The dataset contains medical signs from sign language, including multiple modalities: color frames, depth frames, infrared frames, body-index frames, color body mapped to the depth scale, and 2D/3D skeleton information in color and depth image coordinates and in camera space. The signs are mostly word-level; 55 signs are each performed by 16 persons two times (55 × 16 × 2 = 1760 performances in total).

Instructions: 

 

The signs were collected at Shahid Beheshti University, Tehran, and show local gestures. The SignCol software (code: https://github.com/mohaEs/SignCol , paper: https://doi.org/10.1109/ICSESS.2018.8663952 ) was used to define the signs and to connect to a Microsoft Kinect v2 for collecting the multimodal data, including frames and skeletons. Two demonstration videos of the signs are available on YouTube: vomit: https://youtu.be/yl6Tq7J9CH4 , asthma spray: https://youtu.be/PQf8p_YNYfo . Demonstration videos of SignCol are also available at https://youtu.be/_dgcK-HPAak and https://youtu.be/yMjQ1VYWbII .

The dataset contains 13 zip files in total: one contains the readme, sample code, and data (Sample_AND_Codes.zip), another contains sample videos (Sample_Videos.zip), and the other 11 each contain 5 signs (e.g., Signs(11-15).zip). For a quick start, consider Sample_AND_Codes.zip.

Each performed gesture is located in a directory named in the Sign_X_Performer_Y_Z format, denoting the Xth sign performed by the Yth person in the Zth iteration (X = 1,...,55; Y = 1,...,16; Z = 1,2). The actual names of the signs are listed in the file table_signs.csv.
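
Since each directory name encodes the sign, performer, and iteration, a small parser is convenient; the following hedged Python sketch (the helper name is ours, not part of the dataset tooling) recovers the three indices.

    # Hypothetical helper: parse a MedSLset directory name of the form
    # Sign_X_Performer_Y_Z (X in 1..55, Y in 1..16, Z in 1..2).
    import re

    def parse_dir_name(name):
        m = re.fullmatch(r"Sign_(\d+)_Performer_(\d+)_(\d+)", name)
        if m is None:
            raise ValueError("unexpected directory name: " + name)
        sign, performer, iteration = map(int, m.groups())
        return sign, performer, iteration

    print(parse_dir_name("Sign_12_Performer_3_2"))  # -> (12, 3, 2)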

Each directory includes 7 subdirectories:

1. Times: time information for the frames, saved in a CSV file.

2. Color Frames: RGB frames saved in 8-bit *.jpg format with size 1920x1080.

3. Infrared Frames: infrared frames saved in 8-bit *.jpg format with size 512x424.

4. Depth Frames: depth frames saved in 8-bit *.jpg format with size 512x424.

5. Body Index Frames: body-index frames scaled to the depth frame, saved in 8-bit *.jpg format with size 512x424.

6. Body Skels data: for each frame there is a CSV file containing 25 rows, one per body joint, with columns specifying the joint type, its locations, and the corresponding coordinate spaces. Each joint location is saved in three spaces: 3D camera space, 2D depth (image) space, and 2D color (image) space. Only 21 of the 25 joints are visible in this dataset (see the reading sketch after this list).

7. Color Body Frames: RGB body frames scaled to the depth frame, saved in 8-bit *.jpg format with size 512x424.
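
For the per-frame skeleton CSVs described in item 6, a minimal Python reader could look like the sketch below. The assumption that each file has a header row, and the exact column names, should be checked against the samples in Sample_AND_Codes.zip.

    # Hedged sketch: load one per-frame skeleton CSV (25 rows, one per joint).
    # Assumes the first row is a header; verify against Sample_AND_Codes.zip.
    import csv

    def read_skeleton(csv_path):
        joints = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                # Each row holds the joint type plus its 3D camera-space and
                # 2D depth-/color-space coordinates.
                joints.append(row)
        assert len(joints) == 25, "Kinect v2 reports 25 joints per body"
        return joints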

 

Frames are saved as sets of numbered images, and the MATLAB script PrReadFrames_AND_CreateVideo.m shows how to read the frames and, if required, how to create videos.
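
As a hedged Python alternative (using OpenCV) to that MATLAB script, the snippet below reads the numbered color frames of one gesture and writes them to a video; the glob pattern, zero-padded frame numbering, and 30 fps rate are assumptions.

    # Hedged sketch: turn the numbered color frames of one gesture into a video.
    # sorted() assumes zero-padded frame numbers; adjust if the names differ.
    import glob
    import cv2

    frames = sorted(glob.glob("Sign_1_Performer_1_1/Color Frames/*.jpg"))
    h, w = cv2.imread(frames[0]).shape[:2]
    out = cv2.VideoWriter("gesture.avi", cv2.VideoWriter_fourcc(*"MJPG"), 30, (w, h))
    for path in frames:
        out.write(cv2.imread(path))
    out.release()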

The 21 visible joints are Spine Base, Spine Mid, Neck, Head, Shoulder Left, Elbow Left, Wrist Left, Hand Left, Shoulder Right, Elbow Right, Wrist Right, Hand Right, Hip Left, Knee Left, Hip Right, Knee Right, Spine Shoulder, Hand Tip Left, Thumb Left, Hand Tip Right, and Thumb Right. The MATLAB script PrReadSkels_AND_CreateVideo.m shows an example of reading the joint information, flipping it, and drawing the skeleton at the depth and color scales.

Updated information about the dataset and the corresponding paper is available at the GitHub repository MedSLset.

Terms and conditions for the use of dataset: 

1- This dataset is released for academic research purposes only.

2- Please cite both the paper and the dataset if you find this data useful for your research. You can find the references and BibTeX at MedSLset.

3- You must not distribute the dataset or any parts of it to others. 

4- The dataset only includes image, text, and video files and has been scanned with malware protection software. You accept full responsibility for your use of the dataset. This data comes with no warranty or guarantee of any kind, and you accept full liability.

5- You will treat people appearing in this data with respect and dignity.

6- You will not try to identify or recognize the persons in the dataset.


We introduce ONERA.ROOM, a new robotic RGB-D dataset with difficult luminosity conditions. It comprises RGB-D data (as pairs of images) and corresponding annotations in PASCAL VOC format (XML files).

It targets people detection in (mostly) indoor and outdoor environments. People in the field of view can be standing, but also lying on the ground, as after a fall.

Instructions: 

To facilitate the use of some deep learning software, a folder tree with relative symbolic links (thus avoiding extra space) gathers all the sequences in three folders:

|
|— image
|       |— sequenceName0_imageNumber_timestamp0.jpg
|       |— sequenceName0_imageNumber_timestamp1.jpg
|       |— sequenceName0_imageNumber_timestamp2.jpg
|       |— sequenceName0_imageNumber_timestamp3.jpg
|       |— …
|
|— depth_8bits
|       |— sequenceName0_imageNumber_timestamp0.png
|       |— sequenceName0_imageNumber_timestamp1.png
|       |— sequenceName0_imageNumber_timestamp2.png
|       |— sequenceName0_imageNumber_timestamp3.png
|       |— …
|
|— annotations
|       |— sequenceName0_imageNumber_timestamp0.xml
|       |— sequenceName0_imageNumber_timestamp1.xml
|       |— sequenceName0_imageNumber_timestamp2.xml
|       |— sequenceName0_imageNumber_timestamp3.xml
|       |— …
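
Since the annotations follow the PASCAL VOC format, they can be parsed with the Python standard library alone; the sketch below reads one XML file (the file name is a placeholder from the tree above, and the class name "person" is an assumption).

    # Hedged sketch: read one PASCAL VOC annotation file.
    import xml.etree.ElementTree as ET

    root = ET.parse("annotations/sequenceName0_imageNumber_timestamp0.xml").getroot()
    for obj in root.iter("object"):
        name = obj.findtext("name")              # object class, e.g. "person"
        box = obj.find("bndbox")
        xmin = int(float(box.findtext("xmin")))  # VOC coordinates may be floats
        ymin = int(float(box.findtext("ymin")))
        xmax = int(float(box.findtext("xmax")))
        ymax = int(float(box.findtext("ymax")))
        print(name, (xmin, ymin, xmax, ymax))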


PRECIS HAR is an RGB-D dataset for human activity recognition, captured with the Orbbec Astra Pro 3D camera. It consists of 16 different activities (stand up, sit down, sit still, read, write, cheer up, walk, throw paper, drink from a bottle, drink from a mug, move hands in front of the body, move hands close to the body, raise one hand up, raise one leg up, fall from bed, and faint), performed by 50 subjects.

Instructions: 

The dataset consists of RGB data (.mp4 files) and depth data (.oni files). We provide both cropped and raw versions. The cropped videos are shorter, containing only the seconds of interest, i.e., where the activity is performed. The raw videos are longer, containing all the footage we captured while filming the dataset. We include both variants because they can be useful for different applications.

Video names follow the pattern <subject_id>_<activity_id>.<extension>, where:

  • <subject_id> is an integer between 1 and 50;

  • <activity_id> is an integer between 1 and 16, with the following mapping: 1 = stand up, 2 = sit down, 3 = sit still, 4 = read, 5 = write, 6 = cheer up, 7 = walk, 8 = throw paper, 9 = drink from a bottle, 10 = drink from a mug, 11 = move hands in front of the body, 12 = move hands close to the body, 13 = raise one hand up, 14 = raise one leg up, 15 = fall from bed, 16 = faint;

  • <extension> is .mp4 or .oni, depending on the type of data (RGB or depth).
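
Given this naming pattern, subject and activity can be recovered from a file name alone; here is a hedged Python sketch (the helper and the mapping table are ours, transcribed from the list above).

    # Hedged sketch: decode a PRECIS HAR file name such as "7_16.oni" into
    # subject id and activity label, using the mapping listed above.
    from pathlib import Path

    ACTIVITIES = {
        1: "stand up", 2: "sit down", 3: "sit still", 4: "read", 5: "write",
        6: "cheer up", 7: "walk", 8: "throw paper", 9: "drink from a bottle",
        10: "drink from a mug", 11: "move hands in front of the body",
        12: "move hands close to the body", 13: "raise one hand up",
        14: "raise one leg up", 15: "fall from bed", 16: "faint",
    }

    def decode(path):
        subject_id, activity_id = map(int, Path(path).stem.split("_"))
        return subject_id, ACTIVITIES[activity_id]

    print(decode("7_16.oni"))  # -> (7, 'faint')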

 In order to manipulate .oni files, we recommend using pyoni.


We propose a new dataset, HazeRD, for benchmarking dehazing algorithms under realistic haze conditions. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic haze-simulation parameters, our outdoor dataset allows for more realistic simulation of haze, with parameters that are physically realistic and justified by scattering theory.

Instructions: 

I) Installation: Unzip the source code archive. This will create a sub-directory "HazeRD", which is intended to be the directory where you run the MATLAB scripts.

II(a) HazeRD dataset generation: Run the script demo_simu_haze.m to generate the HazeRD dataset.

II(b) Computing fidelity metrics for dehazed images with respect to the originals: Run the script demo_metrics.m to compute the fidelity metrics for dehazed images.

Please see README.txt for detailed instructions.
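
If you prefer to launch these steps from the command line, the hedged Python sketch below wraps the two scripts; it assumes MATLAB (R2019a or later, whose -batch flag runs a script non-interactively) is on your PATH and that you run it from inside the unzipped HazeRD directory.

    # Hedged sketch: run the HazeRD MATLAB scripts non-interactively.
    # Assumes MATLAB R2019a+ on PATH; run from inside the "HazeRD" directory.
    import subprocess

    subprocess.run(["matlab", "-batch", "demo_simu_haze"], check=True)  # step II(a)
    subprocess.run(["matlab", "-batch", "demo_metrics"], check=True)    # step II(b)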
