Computer Vision

The Dataset

We introduce a novel large-scale dataset for semi-supervised semantic segmentation in Earth Observation: the MiniFrance suite.

This dataset was created from all Landsat-8 images of South America acquired in 2018. More than 31 thousand images (15 TB of data) were processed, and active fire pixels were found in approximately half of them. The Landsat-8 sensor has a spatial resolution of 30 meters (plus one 15 m panchromatic band), a radiometric resolution of 16 bits, and a temporal resolution (revisit time) of 16 days. The images in our dataset are in TIFF (GeoTIFF) format with 10 bands (excluding the 15 m panchromatic band).
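
As a rough illustration of how the 10-band scenes might be screened for fire candidates, the sketch below thresholds a shortwave-infrared (SWIR) band, whose reflectance rises sharply over fires. The band index and threshold are purely illustrative assumptions, not the detection algorithm used to build this dataset.

```python
import numpy as np

def candidate_fire_mask(bands, swir_index=6, threshold=0.45):
    """Flag candidate active-fire pixels in a (n_bands, H, W) reflectance cube.

    swir_index and threshold are hypothetical illustrative values.
    """
    return bands[swir_index] > threshold

# Toy 10-band scene, 4x4 pixels, with one bright SWIR pixel.
scene = np.full((10, 4, 4), 0.1)
scene[6, 2, 3] = 0.9
mask = candidate_fire_mask(scene)
print(int(mask.sum()))  # → 1
```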

This file groups seven different datasets for the task of salient object detection (SOD). Each folder is an open SOD dataset composed of multiple JPG images, and each JPG image corresponds to an annotation image in PNG format.
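
Assuming each annotation PNG shares its filename stem with the corresponding JPG (the layout described above), image/mask pairs for one folder could be collected like this; the folder contents below are synthetic stand-ins:

```python
import tempfile
from pathlib import Path

def pair_images_and_masks(folder):
    """Return (image, mask) filename pairs, matching JPGs to same-stem PNGs."""
    pairs = []
    for jpg in sorted(folder.glob("*.jpg")):
        png = jpg.with_suffix(".png")
        if png.exists():
            pairs.append((jpg.name, png.name))
    return pairs

# Demonstrate on a throwaway directory with dummy files.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for stem in ("0001", "0002"):
        (root / f"{stem}.jpg").touch()
        (root / f"{stem}.png").touch()
    result = pair_images_and_masks(root)

print(result)  # → [('0001.jpg', '0001.png'), ('0002.jpg', '0002.png')]
```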

This dataset extends the Urban Semantic 3D (US3D) dataset developed and first released for the 2019 IEEE GRSS Data Fusion Contest (DFC19). We provide additional geographic tiles to supplement the DFC19 training data, as well as new data for each tile to enable training and validation of models that predict geocentric pose, defined as an object's height above ground and its orientation with respect to gravity. We also supplement the DFC19 data from Jacksonville, Florida, and Omaha, Nebraska, with new geographic tiles from Atlanta, Georgia.

Cautionary traffic signs are of immense significance to traffic safety. In this study, a robust, real-time approach to recognizing Indian Cautionary Traffic Signs (ICTS) is proposed. ICTS are all triangular, with a white backdrop, a red border, and a black pattern. A dataset of 34,000 real-world images was acquired under various environmental conditions and categorized into 40 distinct classes. Pre-processing techniques transform the RGB images to grayscale and enhance their contrast for superior recognition performance.
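
A minimal sketch of that pre-processing step, assuming standard ITU-R BT.601 luminance weights for the grayscale conversion and plain histogram equalization for contrast (the paper's exact pipeline may differ):

```python
import numpy as np

def to_grayscale(rgb):
    """(H, W, 3) uint8 RGB -> (H, W) uint8 grayscale via BT.601 weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(float) @ weights).astype(np.uint8)

def equalize_histogram(gray):
    """Spread intensities over the full 0-255 range via the cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]  # first occupied intensity bin
    lut = np.round(np.clip(cdf - cdf_min, 0, None)
                   / max(gray.size - cdf_min, 1) * 255).astype(np.uint8)
    return lut[gray]

# Synthetic low-contrast image: values squeezed into [100, 130).
rng = np.random.default_rng(0)
img = rng.integers(100, 130, (32, 32, 3), dtype=np.uint8)
gray = to_grayscale(img)
eq = equalize_histogram(gray)
print(gray.min(), gray.max(), "->", eq.min(), eq.max())
```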

Solving the external perception problem for autonomous vehicles and driver-assistance systems requires accurate and robust driving scene perception in both regularly-occurring driving scenarios (termed “common cases”) and rare outlier driving scenarios (termed “edge cases”). In order to develop and evaluate driving scene perception models at scale, and more importantly, covering potential edge cases from the real world, we take advantage of the MIT-AVT Clustered Driving Scene Dataset and build a subset for the semantic scene segmentation task.

Semantic scene segmentation has primarily been addressed by forming representations of single images, with both supervised and unsupervised methods. The problem of semantic segmentation in dynamic scenes has recently begun to receive attention through video object segmentation approaches. What is not known is how much extra information the temporal dynamics of the visual scene carry that is complementary to the information available in the individual frames of the video.

Synthetic Aperture Radar (SAR) images can be extensively informative owing to their resolution and availability. However, removing speckle noise from these images requires several pre-processing steps. In recent years, deep learning-based techniques have brought significant improvements to denoising and image restoration, but further progress has been hampered by the lack of data suitable for training deep neural network-based systems. With this paper, we propose a standard synthetic dataset for training speckle reduction algorithms.
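
To illustrate how such synthetic training pairs are commonly built: SAR speckle is typically modeled as multiplicative noise, e.g. gamma-distributed with unit mean for an L-look intensity image. The L=4 choice below is an assumption for illustration, not a parameter taken from this dataset.

```python
import numpy as np

def add_speckle(clean, looks=4, rng=None):
    """Corrupt a clean intensity image with unit-mean multiplicative gamma speckle."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * noise

# Flat 64x64 intensity image; the speckled version keeps the same mean level.
clean = np.full((64, 64), 100.0)
noisy = add_speckle(clean)
print(round(noisy.mean(), 1))  # stays near 100, since the noise has mean 1
```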

This is the data for the paper "Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification," published in IEEE Transactions on Automation Science and Engineering, 2020. DOI: 10.1109/TASE.2020.2993399. For more details, please refer to https://research.ece.ncsu.edu/aros/paper-tase2020-lowerlimb.

As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on recognizing textures and materials in real-world images, which plays an important role in object recognition and scene understanding. Aiming to describe objects and scenes with more detailed information, we explore how to computationally characterize apparent or latent properties (e.g., surface smoothness) of materials, i.e., computational material characterization, which goes a step beyond material recognition.
