Semantic Segmentation

When fuel materials for high-temperature gas-cooled nuclear reactors are qualification tested, significant analysis is required to establish their stability under various proposed accident scenarios, as well as to assess degradation over time. Typically, samples are examined by lab assistants trained to capture micrograph images used to analyze the degradation of a material. Analysis of these micrographs still requires manual intervention, which is time-consuming and can introduce human error.


The CAD-EdgeTune dataset was acquired using a Husarion ROSbot 2.0 and a ROSbot 2.0 Pro, with the collection speed set to 5 frames per second, in a suburban university environment. The data can be split into noon, dusk, and dawn subgroups to depict the surroundings under various lighting conditions. We assembled 17 sequences totaling 8,080 frames, of which 1,619 were manually annotated using an open-source pixel annotation program. Since neighbouring frames are highly similar to one another, we annotate every fifth image.
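The every-fifth-frame annotation strategy described above can be sketched as a simple subsampling step. This is a minimal illustration, not the dataset's release code; the frame names are invented for the example.

```python
# Sketch of the every-fifth-frame annotation subsampling described above.
# Frame naming and sequence length are illustrative assumptions.

def select_frames_for_annotation(frames, step=5):
    """Pick every `step`-th frame of a sequence for manual annotation."""
    return frames[::step]

sequence = [f"frame_{i:04d}.png" for i in range(25)]
to_annotate = select_frames_for_annotation(sequence)
print(len(to_annotate))  # 5 of 25 frames selected
```

Subsampling at a fixed stride keeps the annotation set diverse while avoiding near-duplicate labels from consecutive frames.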


We release MarsData-V2, a rock segmentation dataset of real Martian scenes for training deep networks, extending our previously published MarsData. The raw unlabeled RGB images in MarsData-V2 are from here; they were collected by the Mastcam camera of the Curiosity rover on Mars between August 2012 and November 2018.


Research data associated with the paper "A Semantic Segmentation Model for Lumbar MRI Images using Divergence Loss", comprising the Python code, a trained model, and empirical results.


The Contest: Goals and Organization

The 2022 IEEE GRSS Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee, aims to promote research on semi-supervised learning. The overall objective is to build models that are able to leverage a large amount of unlabelled data while only requiring a small number of annotated training samples. The 2022 Data Fusion Contest will consist of two challenge tracks:

Track SLM: Semi-supervised Land Cover Mapping


The dataset contains UAV imagery and fracture interpretations of rock outcrops acquired in Praia das Conchas, Cabo Frio, Rio de Janeiro, Brazil. Along with georeferenced .geotiff images, the dataset contains filtered 500 x 500 .png tiles containing only scenes with fracture data, .png binary masks for semantic segmentation, and the original georeferenced shapefile annotations. This data can be useful for segmentation and extraction of geological structures from UAV imagery, and for evaluating computer vision methodologies or machine learning techniques.
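For segmentation training, each .png tile needs to be matched with its binary mask. The sketch below shows one way to pair them; the assumption that tiles and masks share a filename stem and live in separate directories is hypothetical, not the dataset's documented layout.

```python
# Hypothetical sketch: pairing image tiles with binary fracture masks.
# Directory layout and naming convention are assumptions.
from pathlib import Path

def pair_tiles_and_masks(tile_dir, mask_dir, suffix=".png"):
    """Match each tile to a mask with the same filename; skip unmatched tiles."""
    pairs = []
    for tile in sorted(Path(tile_dir).glob(f"*{suffix}")):
        mask = Path(mask_dir) / tile.name
        if mask.exists():
            pairs.append((tile, mask))
    return pairs
```

The resulting (tile, mask) path pairs can then be fed to any segmentation data loader.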


This dataset extends the Urban Semantic 3D (US3D) dataset developed and first released for the 2019 IEEE GRSS Data Fusion Contest (DFC19). We provide additional geographic tiles to supplement the DFC19 training data, as well as new data for each tile to enable training and validation of models that predict geocentric pose, defined as an object's height above ground and its orientation with respect to gravity. We also supplement the DFC19 data from Jacksonville, Florida, and Omaha, Nebraska, with new geographic tiles from Atlanta, Georgia.
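A geocentric-pose record as defined above couples a height above ground with an orientation relative to gravity. The sketch below shows one minimal way to represent such a record and derive the 2D image-plane displacement an object's height induces; the field names and the scale parameter are illustrative assumptions, not the dataset's schema.

```python
# Minimal sketch of a geocentric-pose record: height above ground plus
# orientation with respect to gravity. Field names and the per-metre
# pixel scale are assumptions for illustration.
import math
from dataclasses import dataclass

@dataclass
class GeocentricPose:
    height_m: float    # height above ground, in metres
    angle_rad: float   # image-plane direction of the gravity axis
    scale: float       # pixels of apparent displacement per metre of height

    def flow_vector(self):
        """2D displacement (dx, dy) induced by the object's height."""
        magnitude = self.scale * self.height_m
        return (magnitude * math.cos(self.angle_rad),
                magnitude * math.sin(self.angle_rad))
```

Representing pose this way makes the link between per-pixel height predictions and image-space flow explicit, which is how such annotations are typically consumed during training.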


Endoscopy is a widely used clinical procedure for the early detection of cancers in hollow organs such as the oesophagus, stomach, and colon. Computer-assisted methods for accurate and temporally consistent localisation and segmentation of diseased regions of interest enable precise quantification and mapping of lesions from clinical endoscopy videos, which is critical for monitoring and surgical planning. Such innovations have the potential to improve current medical practices and refine healthcare systems worldwide.
