The Contest: Goals and Organisation

 The 2019 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Johns Hopkins University (JHU), and the Intelligence Advanced Research Projects Activity (IARPA), aimed to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images.


The Contest: Goals and Organization

 

The 2017 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.

 

Instructions: 

 

Overview

The 2017 Data Fusion Contest consists of a classification benchmark. The task is to classify land use (more precisely, Local Climate Zones, or LCZ) in various urban environments. Several cities around the world have been selected to test both LCZ prediction and domain adaptation. Input data are multi-temporal, multi-source, and multi-modal (images and semantic layers). Five cities are considered for training: Berlin, Hong Kong, Paris, Rome, and Sao Paulo.

Content

Each city folder contains:

  • grid/: sampling grid
  • landsat_8/: Landsat 8 images at various dates (resampled at 100 m resolution, split into selected bands)
  • lcz/: Local Climate Zones as rasters (see below)
  • osm_raster/: rasters with areas (buildings, land use, water) derived from OpenStreetMap layers
  • osm_vector/: vector data with OpenStreetMap zones and lines
  • sentinel_2/: Sentinel-2 image (resampled at 100 m resolution, split into selected bands)

 

Local Climate Zones

The lcz/ folder contains:

  • `<city>_lcz_GT.tif`: the ground truth for Local Climate Zones, as a raster. It is single-band, in byte format. Pixel values range from 1 to 17 (the maximum number of classes); unclassified pixels have value 0.
  • `<city>_lcz_col.tif`: a color, georeferenced LCZ map, for visualization convenience only.

Class numbers are the following: 10 urban LCZs corresponding to various built types:

  • 1. Compact high-rise;
  • 2. Compact midrise;
  • 3. Compact low-rise;
  • 4. Open high-rise;
  • 5. Open midrise;
  • 6. Open low-rise;
  • 7. Lightweight low-rise;
  • 8. Large low-rise;
  • 9. Sparsely built;
  • 10. Heavy industry.

7 rural LCZs corresponding to various land cover types:

  • 11. Dense trees;
  • 12. Scattered trees;
  • 13. Bush and scrub;
  • 14. Low plants;
  • 15. Bare rock or paved;
  • 16. Bare soil or sand;
  • 17. Water
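As a rough illustration of the label encoding described above, the sketch below maps pixel values to class names and counts labelled pixels. A small synthetic array stands in for a real `<city>_lcz_GT.tif` raster; reading the GeoTIFF itself (e.g. with rasterio or GDAL) is omitted, and the helper name is ours:

```python
import numpy as np

# LCZ class names indexed by pixel value (0 = unclassified), as listed above.
LCZ_NAMES = {
    0: "Unclassified",
    1: "Compact high-rise", 2: "Compact midrise", 3: "Compact low-rise",
    4: "Open high-rise", 5: "Open midrise", 6: "Open low-rise",
    7: "Lightweight low-rise", 8: "Large low-rise", 9: "Sparsely built",
    10: "Heavy industry",
    11: "Dense trees", 12: "Scattered trees", 13: "Bush and scrub",
    14: "Low plants", 15: "Bare rock or paved", 16: "Bare soil or sand",
    17: "Water",
}

def class_histogram(lcz_raster):
    """Count labelled pixels per LCZ class, ignoring unclassified (0) pixels."""
    values, counts = np.unique(lcz_raster, return_counts=True)
    return {LCZ_NAMES[int(v)]: int(c) for v, c in zip(values, counts) if v != 0}

# Synthetic 3x3 patch standing in for a ground-truth raster:
patch = np.array([[0, 1, 1], [17, 17, 6], [6, 6, 0]], dtype=np.uint8)
print(class_histogram(patch))
```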

 

More...

More info: http://www.grss-ieee.org/community/technical-committees/data-fusion/data-fusion-contest/

Discuss: https://www.linkedin.com/groups/IEEE-Geoscience-Remote-Sensing-Society-3678437

 

Acknowledgments

The 2017 IEEE GRSS Data Fusion Contest is organized by the Image Analysis and Data Fusion Technical Committee of the IEEE GRSS.

  • Landsat 8 data available from the U.S. Geological Survey (https://www.usgs.gov/).
  • OpenStreetMap data © OpenStreetMap contributors, available under the Open Database Licence (http://www.openstreetmap.org/copyright).
  • Original Copernicus Sentinel data 2016 available from the European Space Agency (https://sentinel.esa.int).

The Contest is organized in collaboration with the WUDAPT (http://www.wudapt.org/) and GeoWIKI (http://geo-wiki.org/) initiatives. The IADF TC chairs would like to thank the organizers and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.


The Data Fusion Contest 2016: Goals and Organization

The 2016 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.

New multi-source, multi-temporal data were released, including Very High Resolution (VHR) multi-temporal imagery and video from space. First, VHR images (DEIMOS-2 standard products) acquired at two different dates, before and after orthorectification:

Instructions: 

 

After unzipping, each directory contains:

  • the original GeoTIFF for the panchromatic (VHR) and multispectral (4-band) images,

  • quick-view images for both, in PNG format,

  • the capture parameters (RPC file).

 


The recent interest in using deep learning for seismic interpretation tasks, such as facies classification, has been facing a significant obstacle, namely the absence of large publicly available annotated datasets for training and testing models. As a result, researchers have often resorted to annotating their own training and testing data. However, different researchers may annotate different classes, or use different train and test splits.

Instructions: 

# Basic Instructions for Usage

Make sure you have the following folder structure in the data directory after you unzip the file:

data
├── splits
├── test_once
│   ├── test1_labels.npy
│   ├── test1_seismic.npy
│   ├── test2_labels.npy
│   └── test2_seismic.npy
└── train
    ├── train_labels.npy
    └── train_seismic.npy

The train and test data are in NumPy .npy format, ideally suited for Python. You can open these files in Python as follows:

import numpy as np

train_seismic = np.load('data/train/train_seismic.npy')

Make sure the testing data is used only once, after all models are trained. Using the test set multiple times turns it into a validation set.
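One way to honor this rule is to carve a validation subset out of the training volume and leave the test_once data untouched until the final evaluation. The sketch below uses synthetic arrays in place of the real train volumes; the shapes and the split size are illustrative assumptions, not properties of the dataset:

```python
import numpy as np

# Synthetic stand-ins for data/train/train_seismic.npy and train_labels.npy.
rng = np.random.default_rng(0)
train_seismic = rng.standard_normal((100, 50, 50))          # (inline, xline, depth)
train_labels = rng.integers(0, 6, size=(100, 50, 50))       # 6 facies classes

# Hold out the first n_val inlines for validation; fit on the rest.
n_val = 10  # hypothetical split size
val_seismic, val_labels = train_seismic[:n_val], train_labels[:n_val]
fit_seismic, fit_labels = train_seismic[n_val:], train_labels[n_val:]

print(fit_seismic.shape, val_seismic.shape)
```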

In addition to the processed data volumes, we also provide the fault planes and the raw horizons that were used to generate the volumes before the split into training and testing.

# References:

1. Netherlands Offshore F3 block. [Online]. Available: https://opendtect.org/osr/pmwiki.php/Main/NetherlandsOffshoreF3BlockComplete4GB

2. Alaudah, Yazeed, et al. "A machine learning benchmark for facies classification." Interpretation 7.3 (2019): 1-51.

 


This dataset was developed at the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology as part of the ongoing activities at the Center for Energy and Geo-Processing (CeGP) at Georgia Tech and KFUPM. LANDMASS stands for “LArge North-Sea Dataset of Migrated Aggregated Seismic Structures”. This dataset was extracted from the North Sea F3 block under the Creative Commons license (CC BY-SA 3.0).

Instructions: 

The LANDMASS database includes two different datasets. The first, denoted LANDMASS-1, contains 17,667 small patches of size 99x99 pixels: 9,385 horizon patches, 5,140 chaotic patches, 1,251 fault patches, and 1,891 salt-dome patches. The images in this dataset have values in the range [-1, 1]. The second dataset, denoted LANDMASS-2, contains 4,000 images, each of size 150x300 pixels and normalized to values in the range [0, 1]. Each of the four classes has 1,000 images. Sample images from each dataset for each class can be found under /samples.
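Since the two datasets use different value ranges, a small sketch of putting LANDMASS-1 patches on the LANDMASS-2 convention may be useful. The function name is ours and the patch is synthetic:

```python
import numpy as np

def to_unit_range(patch):
    """Map a LANDMASS-1 patch from [-1, 1] to [0, 1] (the LANDMASS-2 convention)."""
    return (patch + 1.0) / 2.0

# Synthetic 2x2 patch with values in [-1, 1]:
patch = np.array([[-1.0, 0.0], [0.5, 1.0]])
print(to_unit_range(patch))
```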


The Dataset

The Onera Satellite Change Detection dataset addresses the issue of detecting changes between satellite images from different dates.

Instructions: 

Onera Satellite Change Detection dataset

 

##################################################

Authors: Rodrigo Caye Daudt, rodrigo.daudt@onera.fr

Bertrand Le Saux, bls@ieee.org

Alexandre Boulch, alexandre.boulch@valeo.ai

Yann Gousseau, yann.gousseau@telecom-paristech.fr

 

##################################################

About: This dataset contains registered pairs of 13-band multispectral satellite images obtained by the Sentinel-2 satellites of the Copernicus program. Pixel-level urban change ground truth is provided. Images vary in spatial resolution between 10 m, 20 m, and 60 m; in case of discrepancies in image size, the older image, at 10 m per pixel, is used. For more information, please refer to the Sentinel-2 documentation.

 

For each location, folders imgs_1_rect and imgs_2_rect contain the same images as imgs_1 and imgs_2 resampled at 10m resolution and cropped accordingly for ease of use. The proposed split into train and test images is contained in the train.txt and test.txt files.

For downloading and cropping the images, the Medusa toolbox was used: https://github.com/aboulch/medusa_tb

For precise registration of the images, the GeFolki toolbox was used: https://github.com/aplyer/gefolki

 

##################################################

Labels: The train labels are available in two formats, a .png visualization image and a .tif label image. In the png image, 0 means no change and 255 means change. In the tif image, 0 means no change and 1 means change.

`<ROOT_DIR>//cm/` contains:

  • cm.png
  • -cm.tif

Please note that prediction images should be formatted like the -cm.tif rasters for upload and evaluation on DASE (http://dase.grss-ieee.org/).
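A minimal sketch of the two label conventions, with small arrays standing in for the rasters (file I/O with an image library is omitted, and the function name is ours):

```python
import numpy as np

def png_to_tif_labels(png_array):
    """Convert a {0, 255} .png change visualization to {0, 1} .tif-style labels."""
    return (png_array > 0).astype(np.uint8)

# Synthetic 2x2 visualization: 255 = change, 0 = no change.
viz = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(png_to_tif_labels(viz))
```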

(Update June 2020) Alternatively, you can use the test labels, which are now provided in a separate archive, and compute standard metrics using the Python notebook provided in this repository, along with a full script to train and classify fully convolutional networks for change detection: https://github.com/rcdaudt/fully_convolutional_change_detection

 

##################################################

Citation: If you use this dataset for your work, please use the following citation:

@inproceedings{daudt-igarss18,
  author = {{Caye Daudt}, R. and {Le Saux}, B. and Boulch, A. and Gousseau, Y.},
  title = {Urban Change Detection for Multispectral Earth Observation Using Convolutional Neural Networks},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS'2018)},
  venue = {Valencia, Spain},
  month = {July},
  year = {2018},
}

 

##################################################

Copyright: Sentinel Images: This dataset contains modified Copernicus data from 2015-2018. Original Copernicus Sentinel Data available from the European Space Agency (https://sentinel.esa.int).

Change labels: Change maps are released under Creative-Commons BY-NC-SA. For commercial purposes, please contact the authors.

 


In controlled source electromagnetic (CSEM) modeling with well casings, it is common to assume that the current is flowing vertically in each casing, due to the large conductivity contrast between casings and their host media. This assumption makes the integration of the tensor Green's function relating the induced fields to source currents simple, since only the z-z component of the tensor needs to be considered. However, in practice, it can be improper to neglect the horizontal current effects in the casing without a close examination.


This is the data Archive for Zhang, et al., “A Geophysical Model Function for S-band Reflectometry of Ocean Surface Winds in Tropical Cyclones,” accepted by Geophysical Research Letters. This data set was generated from twelve (12) days of airborne S-band (2.3 GHz) reflectometry data collected during the 2014 hurricane season between 2 July 2014 and 17 September 2014. Cross-correlations between the direct and reflected S-band signals, commonly referred to as the “waveform” or delay-Doppler map (DDM) are provided with corresponding aircraft time and position data.

Instructions: 

Please refer to attached file "ZhangGRL_archive_submit_format.pdf" for a description of the data format and units. 


The benchmark dataset consists of 2,413 three-channel RGB images obtained from Google Earth satellite images and the AID dataset.


Infrared imaging from aerial platforms can be used to detect landmines and minefields remotely and can save many lives. This dataset contains thermal images of buried and surface landmines. The images were recorded from a fixed camera for 24 hours with 15-minute intervals. DM-11 type anti-personnel landmines were used. This dataset is available for landmine detection research.

Instructions: 

Instructions are given in the attached pdf file.

 

