Dataset of soccer ball images acquired by a humanoid robot competing in the RoboCup Humanoid KidSize League.

Instructions: 

Files are in JPEG format.

GitHub source code is also available at:

https://github.com/douglasrizzo/JINT2020-ball-detection

 


This is a collection of paired thermal and visible ear images. Images in this dataset were acquired under illumination conditions ranging from 2 to 10,700 lux. There are 2,200 images in total, of which 1,100 are thermal images and the other 1,100 are their corresponding visible images. The images consist of left and right ear images of 55 subjects, captured under 5 illumination conditions for each subject. This dataset was developed for illumination-invariant ear recognition research. In addition, it can also be useful for thermal and visible image fusion research.
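Since the thermal/visible pairing is the dataset's main feature, the minimal sketch below illustrates one elementary fusion baseline: a per-pixel weighted average of a registered thermal/visible pair. The directory layout and file names here are assumptions for illustration, not part of the dataset documentation.

```python
# Minimal fusion sketch, assuming grayscale thermal/visible pairs.
# Paths and file names below are hypothetical placeholders.
import cv2
import numpy as np

def fuse_pair(thermal_path: str, visible_path: str, alpha: float = 0.5) -> np.ndarray:
    """Fuse a thermal/visible pair by per-pixel weighted averaging."""
    thermal = cv2.imread(thermal_path, cv2.IMREAD_GRAYSCALE)
    visible = cv2.imread(visible_path, cv2.IMREAD_GRAYSCALE)
    # Resize the thermal image to match the visible one if the shapes differ.
    if thermal.shape != visible.shape:
        thermal = cv2.resize(thermal, (visible.shape[1], visible.shape[0]))
    return cv2.addWeighted(visible, alpha, thermal, 1.0 - alpha, 0.0)

# Example usage (hypothetical file names):
# fused = fuse_pair("thermal/subject01_left.jpg", "visible/subject01_left.jpg")
# cv2.imwrite("fused/subject01_left.png", fused)
```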

 

Instructions: 

Any work made public, in whatever form, based directly or indirectly on any part of the DATABASE will include the following reference: 

Syed Zainal Ariffin, S. M. Z., Jamil, N., & Megat Abdul Rahman, P. N. (2016). DIAST Variability Illuminated Thermal and Visible Ear Images Datasets. In Proceedings of the 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA). DOI: 10.1109/SPA.2016.7763611

 


This is a dataset for indoor depth estimation that contains 1,803 synchronized image triples (left color image, right color image, and depth map) from 6 different scenes: a library, some bookshelves, a conference room, a cafe, a study area, and a hallway. Among these images, 1,740 are marked as high-quality imagery. The left view and the depth map are aligned and synchronized and can be used to evaluate monocular depth estimation models. Standard training/testing splits are provided.
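As a small illustration of how the aligned depth maps can be used for evaluation, the sketch below computes three standard monocular depth metrics (absolute relative error, RMSE, and delta < 1.25 accuracy). The depth units and valid-pixel convention are assumptions; consult the dataset's README for the actual format.

```python
# Standard monocular depth metrics, assuming pred and gt are aligned
# float arrays in the same units; pixels with gt <= min_depth are treated
# as invalid. These conventions are assumptions, not dataset-specified.
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, min_depth: float = 1e-3):
    """Return abs-rel, RMSE, and delta<1.25 accuracy over valid pixels."""
    valid = gt > min_depth                 # mask out pixels with no ground truth
    pred, gt = pred[valid], gt[valid]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return abs_rel, rmse, delta1
```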

Instructions: 

Please refer to the README file for detailed instructions.

Dataset usage must comply with the LICENSE provided.


A benchmark dataset is a basic requirement for any character recognition framework. To the best of our knowledge, no benchmark dataset of handwritten characters of the Telugu script (Telugu Aksharaalu) exists in the public domain until now. The Telugu script (Telugu: తెలుగు లిపి, romanized: Telugu lipi), an abugida from the Brahmic family of scripts, is used to write the Telugu language, a Dravidian language spoken in the Indian states of Andhra Pradesh and Telangana, as well as in a few other neighboring states. The Telugu script is also widely used for writing Sanskrit texts.


The Simulated Disaster Victim dataset consists of images and video frames containing simulated human victims in cluttered scenes, along with pixel-level annotated skin maps. The simulation was carried out in a controlled environment with due consideration for the health of all the volunteers. To create a realistic disaster effect, Fuller’s earth, which is skin-friendly and harmless to humans, was used to produce an effect of disaster dust over the victims in different situations. The victims included one female and four male volunteers.

Instructions: 

CSIR-CSIO Simulated Disaster Victim Dataset

This dataset was collected as part of research work on locating victims in catastrophic situations, covering different poses, occlusions, and varied illumination conditions of simulated victims in images and video. The work and dataset are explained in the paper “Data-driven Skin Detection in Cluttered Search & Rescue Environments” and the Ph.D. thesis titled “Automated Detection of Disaster Victims in Cluttered Environments”. The dataset is divided into two parts: (a) SDV1, containing simulated disaster victim images with corresponding ground truth files, and (b) SDV2, consisting of 15 video clips of simulated disaster victims.

SDV1 dataset:

· 128 images (768x509) with 128 ground truth binary maps.

· Five volunteers as victims (one female and four male).

SDV2 dataset:

· 15 video clips consisting of 6,315 frames, with 557 pixel-level annotated skin maps.

· Each frame has a resolution of 960x540.

· Ground truth binary maps are available for randomly selected frames in each sequence.

· Five volunteers as victims (one female and four male).

Note: the folder named ‘GT’ contains the ground truth binary maps (a minimal scoring sketch against these maps follows below).
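As a brief illustration, the sketch below scores a predicted binary skin mask against one of the GT binary maps using pixel-level IoU and F1. The file paths and the my_skin_detector function are hypothetical placeholders, not part of the dataset.

```python
# Pixel-level scoring of a predicted skin mask against a GT binary map.
# Any non-zero pixel is treated as skin; this convention is an assumption.
import cv2
import numpy as np

def skin_scores(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Return IoU and F1 between two binary masks (non-zero = skin)."""
    pred = pred_mask > 0
    gt = gt_mask > 0
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn + 1e-9)
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
    return iou, f1

# Example usage (hypothetical paths and detector):
# gt = cv2.imread("SDV1/GT/img_001.png", cv2.IMREAD_GRAYSCALE)
# pred = my_skin_detector("SDV1/images/img_001.jpg")  # user-supplied model
# print(skin_scores(pred, gt))
```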

Disclaimer

THIS DATA SET IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

The research work was funded by the Council of Scientific & Industrial Research (CSIR) and was carried out at the CSIR-Central Scientific Instruments Organisation (CSIR-CSIO), Chandigarh. The research work was also supported by the Academy of Scientific & Innovative Research (AcSIR), India.

This dataset is the property of CSIR-CSIO, Chandigarh, India.

It is to be used only for research purposes, giving due credit by citation.


The benchmark dataset consists of 2,413 three-channel RGB images obtained from Google Earth satellite imagery and the AID dataset.


After a hurricane, damage assessment is critical for emergency managers and first responders so that resources can be planned and allocated appropriately. One way to gauge the extent of damage is to detect and quantify the number of damaged buildings, which is traditionally done by driving around the affected area. This process can be labor-intensive and time-consuming. In this work, utilizing the availability and readiness of satellite imagery, we propose to improve the efficiency and accuracy of damage detection via image classification algorithms.

Instructions: 

To extract the dataset, please unzip the main file 'Post-hurricane.zip'. There will be 4 folders inside:

  1. train_another: the training data; 5,000 images of each class
  2. validation_another: the validation data; 1,000 images of each class
  3. test_another: the unbalanced test data; 8,000/1,000 images of the damaged/undamaged classes
  4. test: the balanced test data; 1,000 images of each class

All images are in JPEG format; the class label is the name of the parent folder containing the images, as shown in the loading sketch below.
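Because the class label is encoded in the parent folder name, the extracted folders can be loaded directly with torchvision's ImageFolder, which assigns labels from subfolder names. The sketch below assumes PyTorch/torchvision are available; the transform choices and the printed class names are illustrative, not prescribed by the dataset.

```python
# Minimal loading sketch using torchvision's ImageFolder, which derives
# class labels from subfolder names exactly as described above.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),   # illustrative size, not dataset-specified
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("Post-hurricane/train_another", transform=transform)
val_set = datasets.ImageFolder("Post-hurricane/validation_another", transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
print(train_set.classes)  # class names come from the subfolders; actual names may differ
```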


A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps reveal the brain's underlying computations. Here we present over 8,000 animal-hours of behavior recordings to investigate the nematode C. elegans' response to time-varying mechanosensory signals. We use a high-throughput optogenetic assay, video microscopy, and automated behavior quantification.

Instructions: 

Approximately 2 TB of raw imaging data will be posted here shortly.
