This project investigates bias in automatic facial recognition (FR). Specifically, subjects are grouped into predefined subgroups based on gender, ethnicity, and age. We propose a novel image collection called Balanced Faces in the Wild (BFW), which is balanced across eight demographic subgroups (i.e., 800 subjects, 100 per subgroup, each with 25 face samples).
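Because the balance is multiplicative across subgroups, subjects, and samples, the corpus size follows directly. A minimal sketch of the counts, assuming the layout of 8 subgroups with 100 subjects each and 25 face samples per subject:

```python
# Balanced design: 8 demographic subgroups, 100 subjects per subgroup,
# 25 face samples per subject (assumed from the description above).
n_subgroups = 8
subjects_per_subgroup = 100
samples_per_subject = 25

total_subjects = n_subgroups * subjects_per_subgroup    # 800 subjects
total_images = total_subjects * samples_per_subject     # 20000 images

print(total_subjects, total_images)
```

Every demographic subgroup thus contributes exactly the same number of identities and images, which is what makes per-subgroup bias comparisons well-posed.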


The experimental data in this paper come from bamboo sticks provided by farmers who sell bamboo in Anji. We randomly grabbed fewer than 100 bamboo sticks at a time and bundled them together. Pictures were taken at heights of 5 cm, 10 cm, 15 cm, and 20 cm, from the front and from left and right inclinations; clear and effective images were screened out as experimental data and then labeled with the labelImg software. In total, 600 sparse bamboo-stick samples were collected.


This is the data for the paper "Fusion of Human Gaze and Machine Vision for Predicting Intended Locomotion Mode," published in IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022.


Computer vision can be used by robotic leg prostheses and exoskeletons to improve high-level transitions between different locomotion modes (e.g., level-ground walking to stair ascent) through the prediction of future environmental states. Here we developed the StairNet dataset to support research and development in vision-based automated stair recognition.


One of the weak points of most denoising algorithms (especially deep learning based ones) is the training data. Because little or no ground-truth data is available, these algorithms are often evaluated using synthetic noise models such as additive zero-mean Gaussian noise. The downside of this approach is that such simple models do not represent the noise present in natural imagery. To evaluate the performance of denoising algorithms in poor light conditions, we need either representative noise models or real noisy images paired with images we can consider ground truth.
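As a concrete illustration of the synthetic evaluation protocol criticized above, additive zero-mean Gaussian noise is typically applied as in the following NumPy sketch (the sigma value and test image are arbitrary choices for illustration):

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, rng=None):
    """Corrupt an 8-bit image with additive zero-mean Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(loc=0.0, scale=sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    # Clip back to the valid 8-bit range before converting.
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: corrupt a flat gray test image with sigma = 10.
clean = np.full((64, 64), 128, dtype=np.uint8)
noisy = add_gaussian_noise(clean, sigma=10.0, rng=np.random.default_rng(0))
```

Note that this noise is signal-independent and spatially white, whereas real low-light noise is dominated by signal-dependent photon shot noise, which is precisely why evaluation on this model alone is unrepresentative.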


Retail Gaze is a dataset for remote gaze estimation in real-world retail environments. It is composed of 3,922 images of individuals looking at products in a retail environment, captured from 12 camera angles. Each image captures a third-person view of the customer and the shelves. Annotations include the location of the gaze point, the bounding box of the person's head, and segmentation masks of the gazed-at product areas.


A dataset with more comprehensive category labels, richer data scenes, and more diverse image sizes was constructed. All images have been labeled, with 8,232 annotations in total. This dataset is openly accessible to all future researchers for rapid deployment of mask-detection subtasks during the COVID-19 outbreak and in all possible future scenarios.


In this paper, we propose a framework for 3D human pose estimation using a single 360° camera mounted on the user's wrist. Perceiving a 3D human pose with such a simple setup has remarkable potential for various applications (e.g., daily-living activity monitoring, motion analysis for sports training). However, no existing method has tackled this task due to the difficulty of estimating a human pose from a single camera image in which only a part of the human body is captured, and because of a lack of training data.


Document layout analysis (DLA) plays an important role in identifying and classifying the different regions of digital documents in the context of document-understanding tasks. In light of this, SciBank provides a considerable amount of labeled data covering text (abstract, text blocks, caption, keywords, reference, section, subsection, title), tables, figures, and equations (isolated and inline) from 74,435 pages of scientific articles. Human curators validated that these 12 region classes were properly labeled.

  1. Datasheet_for_SciBank_Dataset.pdf. The Datasheet for this Dataset includes all the relevant details of the composition, collection, preprocessing, cleaning and labeling process used to construct SciBank.
  2. METADATA_FINAL.csv. Each row represents the metadata for one region, according to the following fields:
    1. Folder: the name of the folder within the main folder PAPER_TAR
    2. Page: png filename of the image where the region is located
    3. Height_Page, Width_Page: dimensions in pixels of the png image page
    4. CoodX, CoodY, Width, Height: position and size of the region, in pixels
    5. Class: region label
    6. Page_in_pdf: page number of the region's page within the source PDF
  3. PAPER_TAR folder includes the PNG images from all paper pages and the PDF papers in hierarchical subdirectories, both referenced by METADATA_FINAL.csv.
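The metadata can be joined back to the page images along the layout described above. A sketch using pandas, assuming the documented column names and the PAPER_TAR/<Folder>/<Page> structure (in practice you would load the real file with `pd.read_csv("METADATA_FINAL.csv")`; the two stand-in rows below are hypothetical values for illustration):

```python
import os
import pandas as pd

def region_image_paths(meta, root="PAPER_TAR"):
    """Rebuild the page-image path for each labeled region,
    assuming the root/<Folder>/<Page> layout described above."""
    return meta.apply(lambda r: os.path.join(root, r["Folder"], r["Page"]), axis=1)

# Stand-in for pd.read_csv("METADATA_FINAL.csv"); values are hypothetical,
# but the columns match the field list documented above.
meta = pd.DataFrame({
    "Folder": ["paper_0001", "paper_0001"],
    "Page": ["page_1.png", "page_2.png"],
    "Height_Page": [3300, 3300], "Width_Page": [2550, 2550],
    "CoodX": [100, 250], "CoodY": [200, 400],
    "Width": [800, 600], "Height": [120, 90],
    "Class": ["title", "text blocks"],
    "Page_in_pdf": [1, 2],
})

meta["image_path"] = region_image_paths(meta)

# Region counts per class label across the corpus.
print(meta["Class"].value_counts())
```

From here, each row gives one pixel bounding box (CoodX, CoodY, Width, Height) on its page image, which is the form most detection frameworks expect for DLA training.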