Computer Vision

The C3I Synthetic Human Dataset provides 48 female and 84 male synthetic 3D humans in FBX format, generated with the iClone 7 Character Creator "Realistic Human 100" toolkit, with variations in ethnicity, gender, race, age, and clothing. For each subject, it further provides the full-body model with five different facial expressions: Neutral, Angry, Sad, Happy, and Scared. Along with the body models, it also open-sources a data generation pipeline, written in Python, that brings those models into the 3D computer graphics tool Blender.


A dataset for segmentation of defects on the surfaces of military cartridge cases. It contains non-defective, defective, and mask image classes for the defective cartridge cases.


This open dataset is subject to the CC BY-NC-SA 4.0 license. It is intended for scientific research and cannot be used for commercial purposes. The authors encourage its use in public research and as a testbench for private research. Please note that any promotional or marketing material built upon this dataset should be backed by a publicly available description of the work leading to the promotional or marketing claims.


Measuring the appearance time slots of characters in videos is still an unsolved problem in computer vision, and related datasets are scarce. The Character Face In Video (CFIV) dataset provides labeled appearance time slots for characters of interest in ten YouTube video clips, two faces per character for training, and a script for downloading each video. Additionally, three videos include around 100 images per character for evaluating the accuracy of the face recognizer.
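The core labeling task, turning per-frame face recognition hits into appearance time slots, can be sketched as follows. This is a minimal illustration, not the CFIV tooling; the frame rate and gap-merging threshold are assumed values:

```python
def frames_to_slots(frames, fps=25.0, max_gap=1.0):
    """Merge frame indices where a character's face was detected
    into (start_sec, end_sec) appearance time slots.

    frames:  sorted frame indices where the recognizer fired
    fps:     assumed video frame rate (real videos may differ)
    max_gap: gaps shorter than this many seconds are bridged
    """
    slots = []
    for f in frames:
        t = f / fps
        if slots and t - slots[-1][1] <= max_gap:
            slots[-1][1] = t          # extend the current slot
        else:
            slots.append([t, t])      # open a new slot
    return [(round(a, 2), round(b, 2)) for a, b in slots]

# detections at frames 0-2 and 100-101 (25 fps) yield two slots
print(frames_to_slots([0, 1, 2, 100, 101]))  # [(0.0, 0.08), (4.0, 4.04)]
```

Bridging short gaps keeps brief detector dropouts (e.g., a head turn) from splitting one continuous appearance into many fragments.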


This dataset contains the results of a paper under submission.


This project investigates bias in automatic facial recognition (FR). Specifically, subjects are grouped into predefined subgroups based on gender, ethnicity, and age. We propose a novel image collection called Balanced Faces in the Wild (BFW), which is balanced across eight subgroups (i.e., 800 face images of 100 subjects, each with 25 face samples).
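Balance across demographic subgroups is the defining property of a collection like BFW, and it can be sanity-checked in a few lines. The snippet below is a generic sketch; the subgroup tags are hypothetical, not BFW's actual label names:

```python
from collections import Counter

def is_balanced(labels):
    """True if every subgroup appears equally often.

    labels: iterable of subgroup tags, one per face image
            (e.g., 'asian_female'; names here are made up)
    """
    counts = Counter(labels)
    return len(set(counts.values())) == 1

# toy check: two subgroups with two images each -> balanced
print(is_balanced(['asian_female', 'asian_female',
                   'asian_male', 'asian_male']))  # True
```

The same check applied per subject (25 samples each) would confirm balance at the identity level as well as the subgroup level.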


The experimental data in this paper come from bamboo sticks provided by farmers who sell bamboo in Anji. We randomly grabbed fewer than 100 bamboo sticks and bundled them together. Pictures were taken at heights of 5 cm, 10 cm, 15 cm, and 20 cm, from the front and from left and right inclinations; clear and effective images were then screened and labeled with the labelImg software. In total, 600 sparse bamboo stick samples were collected.
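labelImg writes annotations in Pascal VOC XML by default; a minimal reader for such a file might look like this. The class name and box coordinates in the sample are invented for illustration, not taken from the dataset:

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Return (class_name, (xmin, ymin, xmax, ymax)) pairs from a
    Pascal VOC annotation string, the format labelImg writes."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter('object'):
        name = obj.find('name').text
        bb = obj.find('bndbox')
        coords = tuple(int(bb.find(k).text)
                       for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        boxes.append((name, coords))
    return boxes

# hypothetical single-object annotation
sample = """<annotation>
  <object><name>stick</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin>
            <xmax>60</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""
print(read_voc_boxes(sample))  # [('stick', (10, 20, 60, 90))]
```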


This dataset accompanies the paper "Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode," published in IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022.


Vision is important for transitions between different locomotor controllers (e.g., level-ground walking to stair ascent) because it senses the environment prior to physical interaction. Here we developed StairNet to support the development and comparison of deep learning models for visual recognition of stairs. The dataset builds on ExoNet, the largest open-source dataset of egocentric images of real-world walking environments.

