Computer Vision

This dataset features a wide range of synthetic American Sign Language (ASL) digits, spanning numbers 0 through 9. These ASL sign representations were meticulously crafted using Unity software, resulting in dynamic 3-D scenes set against diverse backgrounds. To enhance the dataset's comprehensiveness, it includes contributions from three distinct subjects, adding a rich variety of ASL digit gestures. This diversity makes it a valuable resource for researchers interested in ASL digit recognition and gesture analysis.


Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, visible markers, or require annotators to label salient points in videos after collection. These are, respectively: not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology along with a dataset built using it, Surgical Tattoos in Infrared (STIR).


This dataset was used to support our work and is provided to the reviewers for reference.


Recognizing and categorizing banknotes is a crucial task, especially for individuals with visual impairments. It plays a vital role in assisting them with everyday financial transactions, such as making purchases or accessing their workplaces or educational institutions. The primary objectives for creating this dataset were as follows:


This dataset contains video clips of five volunteers performing daily life activities. Each clip is recorded with a Far InfraRed (FIR) camera and includes an associated file containing the three-dimensional and two-dimensional coordinates of the main body joints in each frame. This makes it possible to train human pose estimation networks using FIR imagery.


SYPHAXAR is a dataset for Arabic text detection in the wild. It was collected in Sfax, Tunisia, the second-largest Tunisian city after the capital. A total of 3078 images were gathered manually, one by one, each capturing real-world text detection challenges along 15 different routes, including ring roads, intersections, and roundabouts. These annotated images contain more than 31000 objects, each enclosed within a bounding box.


It is important to accurately classify defects in hot-rolled steel strip, since the detection of these defects is closely related to the quality of the final product. The lack of real hot-rolled strip defect datasets currently limits further research on the classification of hot-rolled strip defects to some extent. In real production, convolutional neural network (CNN)-based algorithms face some difficulties; for example, they are not particularly accurate in classifying some uncommon defects.


Blade damage inspection without stopping the normal operation of wind turbines has significant economic value. This study proposes an AI-based method, AQUADA-Seg, to segment images of blades from complex backgrounds by fusing optical and thermal videos taken of normally operating wind turbines. The method follows an encoder-decoder architecture and uses both optical and thermal videos to overcome the challenges associated with field application.


This dataset provides RGB and Depth images acquired by Kinect v2 of 10 cerebral palsy patients. For each subject (0001, 0002, etc.) there are 12 folders:

- 5 folders containing 5 left full gait cycles (L_01, L_02, etc.)

- 5 folders containing 5 right full gait cycles (R_01, R_02, etc.)

- 1 folder containing one static lateral view (left side) of the subject while standing upright (L_s)

- 1 folder containing one static lateral view (right side) of the subject while standing upright (R_s)

In each folder (dynamic and static) there are two subfolders:
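The per-subject layout above can be enumerated programmatically. The sketch below is a minimal example, assuming a hypothetical root directory name (`DATASET_ROOT`) and the subject/folder naming described above (subjects 0001, 0002, etc.; folders L_01–L_05, R_01–R_05, L_s, R_s); the names of the two subfolders inside each recording folder are not specified here, so the sketch stops at the folder level.

```python
from pathlib import Path

# Hypothetical dataset root; adjust to wherever the dataset is unpacked.
DATASET_ROOT = Path("cp_gait_dataset")

# The 12 folder names per subject described above: 5 left and 5 right
# full gait cycles, plus one static lateral view per side.
GAIT_FOLDERS = (
    [f"L_{i:02d}" for i in range(1, 6)]
    + [f"R_{i:02d}" for i in range(1, 6)]
    + ["L_s", "R_s"]
)

def subject_folders(subject_id: str) -> list[Path]:
    """Return the 12 expected recording folders for one subject."""
    return [DATASET_ROOT / subject_id / name for name in GAIT_FOLDERS]

# Example: list the expected folders for the first subject.
for folder in subject_folders("0001"):
    print(folder)
```

This only builds the expected paths; a real loader would additionally check that each folder exists and descend into its two subfolders.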


We present the RQMD dataset, a comprehensive collection of diverse material samples aimed at advancing computer vision and machine learning algorithms in terrain classification tasks. This dataset contains RGB images of 5 different terrains, namely Asphalt, Brick, Grass, Gravel, and Tiles, captured from a top-view perspective using an 8-megapixel Raspberry Pi camera. Notably, the dataset encompasses images taken at different times of the day, introducing variations in lighting conditions and environmental factors.