Computer Vision

Semantic scene segmentation has primarily been addressed by forming representations of single images, with both supervised and unsupervised methods. The problem of semantic segmentation in dynamic scenes has recently begun to receive attention through video object segmentation approaches. What is not known is how much extra information the temporal dynamics of the visual scene carries that is complementary to the information available in the individual frames of the video.

  • Computer Vision
  • Last Updated On: 
    Tue, 06/02/2020 - 11:33

    Synthetic Aperture Radar (SAR) images can be extensively informative owing to their resolution and availability. However, removing speckle noise from these images requires several pre-processing steps. In recent years, deep learning-based techniques have brought significant improvements in denoising and image restoration. However, further research has been hampered by the lack of data suitable for training deep neural network-based systems. With this paper, we propose a standard synthetic dataset for training speckle reduction algorithms.
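    For illustration, synthetic speckle is commonly generated with the multiplicative, gamma-distributed (fully developed speckle) model. The sketch below shows that general technique only; it is an assumption for illustration, not the data-generation pipeline of the paper above, and `add_speckle` is a hypothetical helper:

    ```python
    import numpy as np

    def add_speckle(image, looks=4, rng=None):
        """Multiply a clean intensity image by gamma-distributed speckle.

        Under the fully developed speckle model, L-look SAR intensity
        noise follows a Gamma(L, 1/L) distribution with unit mean, so
        the noisy image keeps the clean image's expected intensity.
        """
        rng = np.random.default_rng(rng)
        noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
        return image * noise

    clean = np.full((256, 256), 100.0)   # flat synthetic scene
    noisy = add_speckle(clean, looks=4, rng=0)
    ```

    A higher `looks` value gives weaker speckle (variance 1/L), which is why multi-look averaging is itself a classical despeckling step.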

  • Computer Vision
  • Last Updated On: 
    Mon, 06/01/2020 - 08:46

    This is the data for the paper "Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification," published in IEEE Transactions on Automation Science and Engineering, 2020. DOI: 10.1109/TASE.2020.2993399. For more details, please refer to https://research.ece.ncsu.edu/aros/paper-tase2020-lowerlimb.

  • Artificial Intelligence
  • Last Updated On: 
    Sun, 05/24/2020 - 08:57

    As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on recognizing textures and materials in real-world images, which plays an important role in object recognition and scene understanding. Aiming to describe objects or scenes with more detailed information, we explore how to computationally characterize apparent or latent properties (e.g., surface smoothness) of materials, i.e., computational material characterization, which moves a step beyond material recognition.

  • Artificial Intelligence
  • Last Updated On: 
    Wed, 05/20/2020 - 01:38

    As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on recognizing textures and materials in real-world images, which plays an important role in object recognition and scene understanding. Aiming to describe objects or scenes with more detailed information, we explore how to computationally characterize apparent or latent properties (e.g., surface smoothness) of materials, i.e., computational material characterization, which moves a step beyond material recognition.

  • Artificial Intelligence
  • Last Updated On: 
    Wed, 05/20/2020 - 00:25

    This aerial image dataset consists of more than 22,000 independent buildings extracted from aerial images with 0.0075 m spatial resolution, covering 450 km² in Christchurch, New Zealand. Most of the aerial images are down-sampled to 0.3 m ground resolution and cropped into 8,189 non-overlapping 512 × 512 tiles. These tiles make up the whole dataset and are split into three parts: 4,736 tiles for training, 1,036 tiles for validation, and 2,416 tiles for testing.
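    The non-overlapping 512 × 512 tiling described above can be sketched as follows; `tile_image` is a hypothetical helper for illustration, not code distributed with the dataset:

    ```python
    import numpy as np

    def tile_image(image, tile=512):
        """Crop an image into non-overlapping tile x tile patches,
        discarding any remainder at the right/bottom edges."""
        h, w = image.shape[:2]
        patches = []
        for y in range(0, h - tile + 1, tile):
            for x in range(0, w - tile + 1, tile):
                patches.append(image[y:y + tile, x:x + tile])
        return patches

    # A 1536 x 2048 scene yields 3 x 4 = 12 full tiles.
    scene = np.zeros((1536, 2048), dtype=np.uint8)
    patches = tile_image(scene)
    ```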

  • Computer Vision
  • Last Updated On: 
    Fri, 05/08/2020 - 19:56

    This dataset contains "Pristine" and "Distorted" videos recorded in different places. The distortions with which the videos were recorded are "Focus", "Exposure", and "Focus + Exposure", each at low (1), medium (2), and high (3) levels, forming a total of 10 conditions (including the pristine videos). In addition, distorted videos were exported in three different qualities according to the H.264 compression format used in the DIGIFORT software: High Quality (HQ, H.264 at 100%), Medium Quality (MQ, H.264 at 75%) and Low Quality
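    The count of 10 conditions follows from 3 distortions × 3 levels plus the pristine case. That enumeration can be sketched as follows; the label strings are illustrative, not the dataset's official naming:

    ```python
    # Enumerate the recording conditions: pristine, plus each
    # distortion at each severity level (3 x 3 + 1 = 10 total).
    distortions = ["Focus", "Exposure", "Focus + Exposure"]
    levels = [1, 2, 3]  # 1 = low, 2 = medium, 3 = high

    conditions = ["Pristine"] + [
        f"{d} (level {l})" for d in distortions for l in levels
    ]
    ```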

  • Computer Vision
  • Last Updated On: 
    Thu, 05/07/2020 - 19:27

    The PRIME-FP20 dataset is established for the development and evaluation of retinal vessel segmentation algorithms in ultra-widefield fundus photography. PRIME-FP20 provides 15 high-resolution ultra-widefield fundus photography images acquired using the Optos 200Tx camera (Optos plc, Dunfermline, United Kingdom), the corresponding labeled binary vessel maps, and the corresponding binary masks for the field of view (FOV) of the images.
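    A common use of such FOV masks is to restrict evaluation metrics to pixels inside the imaged area. The sketch below shows that general idea with a Dice score; `masked_dice` is a hypothetical helper, not an evaluation script shipped with PRIME-FP20:

    ```python
    import numpy as np

    def masked_dice(pred, label, fov):
        """Dice overlap between binary vessel maps, counted only
        inside the field-of-view (FOV) mask."""
        pred = pred.astype(bool) & fov.astype(bool)
        label = label.astype(bool) & fov.astype(bool)
        inter = np.logical_and(pred, label).sum()
        total = pred.sum() + label.sum()
        return 2.0 * inter / total if total else 1.0

    # Toy example: perfect agreement inside a full FOV.
    fov = np.ones((8, 8), dtype=bool)
    vessels = np.zeros((8, 8), dtype=bool)
    vessels[4, :] = True              # a horizontal "vessel"
    score = masked_dice(vessels, vessels, fov)
    ```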

  • Computer Vision
  • Last Updated On: 
    Sun, 05/03/2020 - 13:13

    Dataset associated with a paper to appear in IEEE Transactions on Pattern Analysis and Machine Intelligence:

    "The perils and pitfalls of block design for EEG classification experiments"

    The paper has been accepted and is in production.

    We will upload the dataset when the paper is published.

    This is a placeholder so we can obtain a DOI to include in the paper.

  • Artificial Intelligence
  • Last Updated On: 
    Fri, 04/24/2020 - 16:39

  • Computer Vision
  • Last Updated On: 
    Mon, 05/18/2020 - 19:37
