Computer Vision

Animal habitat surveys play a critical role in preserving biodiversity. One effective way to gain insight into animal habitats is to identify animal footprints, which offer valuable information about species distribution, abundance, and behavior.

To train the joint contrastive representation learning module, we constructed a large Text Annotated Distortion, Appearance and Content (TADAC) image database.
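
As a rough illustration of the kind of image-text contrastive objective such a module might use, here is a generic, InfoNCE-style sketch in PyTorch; the encoders are omitted, and the batch size, embedding dimension, and temperature are placeholders rather than the authors' actual implementation.

```python
# Generic sketch of a symmetric InfoNCE (CLIP-style) image-text contrastive loss.
# Embedding sizes and the temperature below are illustrative placeholders.
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) embeddings of paired images and text annotations."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Matching image/text pairs sit on the diagonal and act as the positives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random tensors standing in for encoder outputs:
loss = contrastive_loss(torch.randn(32, 512), torch.randn(32, 512))
```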

The LuFI-RiverSnap dataset includes close-range river scene images obtained from various devices, such as UAVs, surveillance cameras, smartphones, and handheld cameras, with sizes up to 4624 × 3468 pixels. Several social media images, which are typically volunteered geographic information (VGI), have also been incorporated into the dataset to create more diverse river landscapes from various locations and sources.

The dataset consists of around 335K real images equally distributed among 7 classes representing different levels of rain intensity: "Clear", "Slanting Heavy Rain", "Vertical Heavy Rain", "Slanting Medium Rain", "Vertical Medium Rain", "Slanting Low Rain", and "Vertical Low Rain". The images were acquired during laboratory experiments simulating a low-altitude flight, using a visual odometry system that comprises a processing unit and a depth camera, namely an Intel RealSense D435i.
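
If the images are organized into one folder per class, a minimal loading sketch might look like the following; the torchvision ImageFolder layout, dataset root path, and image size are assumptions, not part of the dataset's official tooling.

```python
# Minimal sketch: load the 7 rain-intensity classes with torchvision,
# assuming a hypothetical layout of <root>/<class name>/<image>.jpg.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("rain_intensity_dataset/", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

print(dataset.classes)  # e.g. ['Clear', 'Slanting Heavy Rain', ...] (folder names, sorted)
for images, labels in loader:
    # images: (B, 3, 224, 224) float tensors; labels: indices into dataset.classes
    break
```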

Multi-gait recognition aims to identify people by their walking styles when they walk together with others. A person's gait can change considerably when walking with someone else, and these changes differ depending on the companion, which makes high-accuracy multi-gait recognition very challenging. Existing multi-gait recognition methods extract hand-crafted multi-gait features; owing to the limited size and quality of available multi-gait samples, no deep-learning-based multi-gait recognition methods have emerged so far.

Bengaluru has been ranked the most congested city in India in terms of traffic for several years now. This hackathon is aimed at creating innovative solutions to the traffic management problem in Bengaluru, and is being co-organised by the Bengaluru Traffic Police, the Centre for Data for Public Good, and the Indian Institute of Science (IISc). The prizes are being sponsored by the IEEE Foundation.

Citation Author(s): Raghu Krishnapuram, Rakshit Ramesh, and Arun Josephraj

The dataset contains the focus metric values of a comprehensive synthetic underwater image dataset (https://data.mendeley.com/datasets/2mcwfc5dvs/1). That image dataset comprises 100 ground-truth images and 15,000 synthetic underwater images generated by considering a comprehensive set of underwater-environment effects. The current dataset focuses on the focus metrics of these 15,100 images.
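
As a concrete example of what a focus metric is, the variance of the Laplacian is a common sharpness measure; the sketch below is a generic illustration and not necessarily one of the metrics included in this dataset.

```python
# Generic example of a simple focus/sharpness metric: variance of the Laplacian.
# Illustrative only; the specific focus metrics in the dataset may differ.
import cv2

def laplacian_focus_measure(image_path: str) -> float:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # A higher variance of the Laplacian generally indicates a sharper, more in-focus image.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Hypothetical usage on one of the synthetic underwater images:
# print(laplacian_focus_measure("underwater_0001.png"))
```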

This is the Ecuadorian Traffic Officer Detection Dataset, intended mainly for traffic-officer detection projects using YOLO. The dataset is provided in YOLO format and contains 1,862 images, fully annotated with the Roboflow labeling tool. It is split into 1,734 images for training, 81 for validation, and 47 for testing, and is annotated with a single class, Traffic Officer (EMOV). The dataset produced a mean average precision (mAP) of 96.4% using YOLOv3m, 99.0% using YOLOv5x, and 98.1% using YOLOv8x.
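
A minimal training sketch with the Ultralytics package is shown below; the dataset YAML path, epoch count, and image size are placeholders rather than files or settings shipped with the dataset.

```python
# Minimal sketch: fine-tune a YOLOv8 model on a YOLO-format dataset.
# "traffic_officer.yaml" is a hypothetical dataset config that points to the
# train/val/test image folders and lists the single class name.
from ultralytics import YOLO

model = YOLO("yolov8x.pt")  # pretrained weights, downloaded by the library
model.train(data="traffic_officer.yaml", epochs=100, imgsz=640)

metrics = model.val()              # reports mAP on the validation split
results = model("test_image.jpg")  # run inference on a single image
```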

We present the SinOCR and SinFUND datasets, two comprehensive resources designed to advance Optical Character Recognition (OCR) and form understanding for the Sinhala language. SinOCR, the first publicly available and the most extensive dataset for Sinhala OCR to date, includes 100,000 images featuring printed text in 200 different Sinhala fonts and 1,135 images of handwritten text, capturing a wide spectrum of writing styles.

This dataset consists of 462 fields of view of thin blood smear images stained with Giemsa and Field stains (dyes), acquired using an iPhone 10 with a 12 MP camera. The phone was attached to an Olympus microscope with a 1000× objective lens. Half of the acquired images show red blood cells with normal morphology, and the other half show a Rouleaux formation morphology.
