Computer Vision

This database contains Synthetic High-Voltage Power Line Insulator Images.

There are two sets of images: one for image segmentation and another for image classification.

The first set contains images with different materials and landscapes, covering six landscape types: Mountains, Forest, Desert, City, Stream, and Plantation. Each landscape type contains 2,627 images per insulator type (Ceramic, Polymeric, or Glass), for a total of 47,286 distinct images.
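The stated total follows directly from the dataset's structure; a minimal sketch verifying the arithmetic, using only the landscape and insulator names given above:

```python
# Verify the dataset size: 6 landscape types x 3 insulator types x 2,627 images each.
landscapes = ["Mountains", "Forest", "Desert", "City", "Stream", "Plantation"]
insulator_types = ["Ceramic", "Polymeric", "Glass"]
images_per_combination = 2627

total = len(landscapes) * len(insulator_types) * images_per_combination
print(total)  # 47286
```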


We present ViSnow: a large image dataset of snow-covered roads in an urban setting. The dataset includes an extensive collection of images from traffic surveillance cameras installed in Montreal, Quebec, Canada, during the winters of 2022 and 2023. The ViSnow dataset aims to enable computer vision applications in intelligent transportation and winter road maintenance. ViSnow comprises 294,000 images spanning day and night periods, different weather conditions (snow, rain, clear), and multiple urban areas (residential, commercial, industrial).


In this study, an equatorial telescope with an aperture of 310 mm, scheduled for installation in Antarctica in 2024, is chosen as the research subject. The hour-angle range of the telescope pointing is [0, 360], and the range for the declination axis is [-90, 30]. The dataset contains around 3,000 images. The overall workflow is to move the telescope through various poses and collect two images of each pose from the TCS side of the telescope.


SeaIceWeather Dataset 

This is the SeaIceWeather dataset, collected for training and evaluating deep learning-based de-weathering models. To the best of our knowledge, this is the first such publicly available dataset for the sea-ice domain. This dataset is linked to our paper titled "Deep Learning Strategies for Analysis of Weather-Degraded Optical Sea Ice Images". The paper can be accessed at:


DIRS24.v1 presents a dataset captured in a campus environment. The images are curated for use in developing perception modules, which can readily be employed in Advanced Driver Assistance Systems (ADAS). The images in the dataset are annotated in several formats, including COCO-MMDetection, Pascal-VOC, TensorFlow, YOLOv7-PyTorch, YOLOv8-Oriented Bounding Box, and YOLOv9.
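To illustrate one of the listed formats, here is a minimal sketch of parsing a YOLO-style label line (a class index followed by normalized center/size values) into a pixel bounding box. The example line and image size are hypothetical, not taken from the dataset:

```python
def yolo_to_pixel_bbox(line: str, img_w: int, img_h: int):
    """Convert a YOLO label line 'cls xc yc w h' (normalized to [0, 1])
    into (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

# Hypothetical example: one object of class 0 centered in a 640x480 image.
print(yolo_to_pixel_bbox("0 0.5 0.5 0.25 0.5", 640, 480))
# (0, 240.0, 120.0, 400.0, 360.0)
```

The oriented-bounding-box and COCO variants store extra fields (rotation, absolute coordinates, JSON structure), so each format needs its own loader.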



Detection is founded on surface micro-defect images of KDP, and the effectiveness of the detection model depends on the quality of these images. Higher-quality images better capture the shape details and boundary features of defects, thereby enhancing the overall detection capability.


This is the first digitized mammogram dataset for breast cancer in Saudi Arabia. It is based on the BI-RADS categories and addresses the lack of local public datasets by collecting, categorizing, and annotating mammogram images, supporting the medical field by providing physicians with a range of diagnosed cases, especially in Saudi Arabia.


To enable effective semantic segmentation, a labelled flood-scene image dataset was created. Official permission was obtained from the BBC News website and YouTube channel to use flood-related videos for research purposes, ensuring ethical and legal considerations were met. Specifically, videos were sourced from the BBC News YouTube channel. The obtained videos were then processed to extract image frames, resulting in a dataset comprising 10,854 images.
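The frame-extraction step described above can be sketched as follows. The video path, output directory, and sampling interval are hypothetical; the helper that decides which frames to keep is kept separate so the sampling logic is explicit, and OpenCV is assumed only for decoding:

```python
import os

def sampled_frame_indices(total_frames: int, fps: float, every_sec: float):
    """Indices of frames to keep when sampling one frame every `every_sec` seconds."""
    step = max(1, round(fps * every_sec))
    return list(range(0, total_frames, step))

def extract_frames(video_path: str, out_dir: str, every_sec: float = 1.0):
    """Decode a video with OpenCV and save sampled frames as JPEGs.
    (Requires opencv-python; the paths are illustrative.)"""
    import cv2  # imported here so the sampling helper above stays dependency-free
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    keep = set(sampled_frame_indices(total, fps, every_sec))
    os.makedirs(out_dir, exist_ok=True)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in keep:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Sampling logic alone: a 25 fps clip of 100 frames, one frame per second.
print(sampled_frame_indices(100, 25.0, 1.0))  # [0, 25, 50, 75]
```

Sampling at a fixed interval rather than keeping every frame avoids near-duplicate images from consecutive frames of the same scene.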


The Colour-Rendered Bosphorus Projections (CRBP) Face Dataset represents an innovative advancement in facial recognition and computer vision technologies. This dataset uniquely combines the precision of 3D face modelling with the detailed visual cues of 2D imagery, creating a multifaceted resource for various research activities. Derived from the acclaimed Bosphorus 3D Face Database, the CRBP dataset introduces colour-rendered projections to enrich the original dataset.


This work presents a new labeled dataset of videos with native and professional interpreters articulating words and expressions in Libras (Brazilian Sign Language). We used a methodology based on related studies, the support of the team of articulators, and the existing datasets in the literature.