Computer Vision

Synthetic Aperture Radar (SAR) images can be highly informative owing to their resolution and availability. However, removing speckle noise from them requires several pre-processing steps. In recent years, deep learning-based techniques have brought significant improvements to denoising and image restoration, but further research has been hampered by the lack of data suitable for training deep neural network-based systems. With this paper, we propose a standard synthetic dataset for training speckle reduction algorithms.
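Speckle in SAR intensity imagery is commonly modeled as multiplicative noise, so a synthetic training pair can be built by corrupting a clean reference image. Below is a minimal sketch of that idea, assuming gamma-distributed multi-look speckle; it illustrates the general model, not the paper's exact generation pipeline.

```python
import numpy as np

def add_speckle(clean, looks=4, seed=0):
    """Multiply a clean intensity image by gamma-distributed speckle with unit mean.

    `looks` controls the noise strength: single-look (looks=1) is the harshest case.
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean * speckle

# Example: corrupt a synthetic 256 x 256 gradient image with single-look speckle
clean = np.tile(np.linspace(0.1, 1.0, 256), (256, 1))
noisy = add_speckle(clean, looks=1)
```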

This is the data for the paper "Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification," published in IEEE Transactions on Automation Science and Engineering, 2020. DOI: 10.1109/TASE.2020.2993399. For more details, please refer to https://research.ece.ncsu.edu/aros/paper-tase2020-lowerlimb.

As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on recognizing textures and materials in real-world images, which plays an important role in object recognition and scene understanding. Aiming to describe objects and scenes in more detail, we explore how to computationally characterize apparent or latent material properties (e.g., surface smoothness), i.e., computational material characterization, which goes a step beyond material recognition.


This aerial image dataset consists of more than 22,000 independent buildings extracted from aerial images with 0.0075 m spatial resolution, covering 450 km² in Christchurch, New Zealand. Most of the aerial images are down-sampled to 0.3 m ground resolution and cropped into 8,189 non-overlapping 512 × 512 tiles, which make up the whole dataset. The tiles are split into three parts: 4,736 for training, 1,036 for validation, and 2,416 for testing.
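For illustration, here is a minimal sketch of iterating over the three splits and checking the stated tile counts; the folder names (`building_dataset`, `image`, `label`) and the file extension are assumptions, not the dataset's documented layout.

```python
from pathlib import Path

# Hypothetical layout: <root>/<split>/image/*.tif and <root>/<split>/label/*.tif
ROOT = Path("building_dataset")                       # assumed root folder name
SPLITS = {"train": 4736, "val": 1036, "test": 2416}   # tile counts from the description

def list_pairs(split):
    """Return sorted (image, label) path pairs for one split."""
    images = sorted((ROOT / split / "image").glob("*.tif"))
    labels = sorted((ROOT / split / "label").glob("*.tif"))
    return list(zip(images, labels))

for split, expected in SPLITS.items():
    pairs = list_pairs(split)
    print(f"{split}: found {len(pairs)} tiles, expected {expected}")
```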

This dataset contains "Pristine" and "Distorted" videos recorded in different places. The distortions with which the videos were recorded are "Focus", "Exposure", and "Focus + Exposure", each at low (1), medium (2), and high (3) levels, forming a total of 10 conditions (including the Pristine videos). In addition, the distorted videos were exported in three different qualities according to the H.264 compression settings used in the DIGIFORT software: High Quality (HQ, H.264 at 100%), Medium Quality (MQ, H.264 at 75%), and Low Quality
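To make the condition matrix explicit, the sketch below enumerates the 10 recording conditions and the three export qualities. The condition names are illustrative only and do not follow the dataset's file naming, and the LQ compression setting is left unspecified because it is truncated in the description above.

```python
from itertools import product

distortions = ["Focus", "Exposure", "Focus + Exposure"]
levels = {1: "low", 2: "medium", 3: "high"}
qualities = ["HQ (H.264 at 100%)", "MQ (H.264 at 75%)", "LQ"]  # LQ setting not given above

# 3 distortions x 3 levels + 1 pristine = 10 recording conditions
conditions = ["Pristine"] + [f"{d}, {levels[l]} level" for d, l in product(distortions, levels)]
assert len(conditions) == 10

# Each distorted video is additionally exported at the three compression qualities
for condition, quality in product(conditions[1:], qualities):
    print(condition, "|", quality)
```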

The PRIME-FP20 dataset is established for the development and evaluation of retinal vessel segmentation algorithms in ultra-widefield (UWF) fundus photography (FP). PRIME-FP20 provides 15 high-resolution UWF FP images acquired with the Optos 200Tx camera (Optos plc, Dunfermline, United Kingdom), the corresponding labeled binary vessel maps, and binary masks delineating the valid data region of each image. For each UWF FP image, a concurrently captured UWF fluorescein angiography (FA) image is also included.
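As an example of how the vessel maps and valid-region masks might be used together, the sketch below scores a segmentation with a Dice coefficient restricted to the valid region; the file names and the metric choice are assumptions, not part of the dataset's official tooling.

```python
import numpy as np
from PIL import Image

def dice_in_valid_region(pred, label, mask):
    """Dice coefficient between binary prediction and vessel label, restricted to the valid region."""
    pred = np.asarray(pred) > 0
    label = np.asarray(label) > 0
    mask = np.asarray(mask) > 0
    pred, label = pred & mask, label & mask
    intersection = np.logical_and(pred, label).sum()
    return 2.0 * intersection / (pred.sum() + label.sum() + 1e-8)

# Hypothetical file names; the dataset's actual naming may differ
label = Image.open("vessel_map_01.png").convert("L")
mask = Image.open("valid_region_mask_01.png").convert("L")
pred = Image.open("my_segmentation_01.png").convert("L")
print("Dice within valid region:", dice_in_valid_region(pred, label, mask))
```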

Dataset associated with a paper in IEEE Transactions on Pattern Analysis and Machine Intelligence:

"The perils and pitfalls of block design for EEG classification experiments"

DOI: 10.1109/TPAMI.2020.2973153

If you use this code or data, please cite the above paper.

Cityscapes is a large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames, in addition to a larger set of 20,000 weakly annotated frames. The dataset is thus an order of magnitude larger than similar previous attempts. Details on annotated classes and examples of our annotations are available at https://www.cityscapes-dataset.com/dataset-overview/#features.
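For readers who want to experiment, one possible way to load the fine annotations is torchvision's Cityscapes wrapper, sketched below under the assumption that the official image and annotation packages have already been downloaded and extracted under ./cityscapes.

```python
from torchvision.datasets import Cityscapes

# Assumes the official leftImg8bit and gtFine packages are extracted under ./cityscapes
dataset = Cityscapes(
    root="./cityscapes",
    split="train",           # 'train', 'val', or 'test'
    mode="fine",             # pixel-level annotations; 'coarse' for the weakly annotated frames
    target_type="semantic",  # semantic label maps
)
image, target = dataset[0]   # PIL image and its annotation
print(len(dataset), image.size)
```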

 
