Remote sensing of environment research has explored the benefits of synthetic aperture radar (SAR) imaging systems for a wide range of land and marine applications, since these systems are unaffected by weather conditions and are operable both day and night. The design of image processing techniques for SAR applications requires testing and validation on real and synthetic images. The GRSS benchmark database supports the design and analysis of algorithms that deal with SAR and PolSAR data.

Citation Author(s): 
Nobre, R. H.; Rodrigues, F. A. A.; Rosa, R.; Medeiros, F. N.; Feitosa, R.; Estevão, A. A.; Barros, A. S.

Over the last decades, Earth Observation has brought a wealth of new perspectives, from geosciences to human activity monitoring. As more data became available, artificial intelligence techniques have led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can address problems that cannot be tackled with optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with extensive cloud cover.

Instructions: 

The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, the Middle East, and Australia. Each time series is located in a folder named with the sequence ID (0001... 0336).

Two JSON files, S1list.json and S2list.json, are provided to describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE: boolean) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated, and the description contains the URLs to download the images.
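
For example, the flooded, fully covered acquisitions of a sequence can be listed with a short Python sketch (a guess at the exact JSON layout; only the key names FLOODING and FULL-DATA-COVERAGE are taken from the description above):

import json

# Load the Sentinel-1 description of sequence 0001 (assumed path layout)
with open("0001/S1list.json") as f:
    s1 = json.load(f)

# Keep the per-image entries labeled as flooded and fully imaged;
# the non-dict keys hold sequence-level metadata and are skipped
for key, item in s1.items():
    if isinstance(item, dict) and item.get("FLOODING") and item.get("FULL-DATA-COVERAGE"):
        print(key, item.get("orbit"))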

The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task [1] and are provided with Level-2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.

The Sentinel-1 images were added to the dataset. They are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarization channels.

The original dataset was split into 267 sequences for training and 67 sequences for testing. Here, all sequences are in the same folder.

 

To use this dataset, please cite the following papers:

Flood Detection in Time Series of Optical and SAR Images, C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, 1343-1346

The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop

 

This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.

[1] The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop


The dataset contains two sets of planetary models used in the Reproducibility Challenge of the Student Cluster Competition at the SC19 conference. During this challenge, the competitors reproduced parts of the SC18 paper "Computing planetary interior normal modes with a highly parallel polynomial filtering eigensolver" by Shi, Jia, et al. (https://doi.org/10.1109/SC.2018.00074).

 


This multispectral remote sensing image, of size 1024 x 1024 pixels, covers the region around Kolkata city in India and was acquired with the LISS-III sensor. There are four spectral bands: two from the visible spectrum (green and red) and two from the infrared spectrum (near-infrared and shortwave infrared). The spatial resolution is 23.5 m and the spectral range spans 0.52 - 1.7 μm.


The Dataset

We introduce a novel large-scale dataset for semi-supervised semantic segmentation in Earth Observation: the MiniFrance suite.

Instructions: 

##################################################

The MiniFrance Suite

##################################################

Authors:

Javiera Castillo Navarro, javiera.castillo_navarro@onera.fr

Bertrand Le Saux, bls@ieee.org

Alexandre Boulch, alexandre.boulch@valeo.com

Nicolas Audebert, nicolas.audebert@cnam.fr

Sébastien Lefèvre, sebastien.lefevre@irisa.fr

##################################################

About:

This dataset contains very high resolution RGB aerial images over 16 cities and their surroundings from different regions in France, obtained from IGN's BD ORTHO database (images from 2012 to 2014). Pixel-level land use and land cover annotations are provided, generated by rasterizing Urban Atlas 2012.

##################################################

This dataset is partitioned into three parts, defined by conurbations:

1. Labeled training data: data over Nice and Nantes/Saint Nazaire.

2. Unlabeled training data: data over Le Mans, Brest, Lorient, Caen, Calais/Dunkerque and Saint-Brieuc.

3. Test data: data over Marseille/Martigues, Rennes, Angers, Quimper, Vannes, Clermont-Ferrand, Cherbourg, Lille.

Due to the large-scale nature of the dataset, it is divided into several files to download:

- Images for the labeled training partition: contain RGB aerial images for French departments in the labeled training partition.

- Images for the unlabeled training partition (parts 1, 2 and 3): contain RGB aerial images for French departments in the unlabeled training partition.

- Images for the test partition (parts 1, 2, 3 and 4): contain RGB aerial images for French departments in the partition reserved for evaluation.

- Labels for the labeled partition

- Lists of files by conurbation and partition: contain .txt files that list all images included by city.

Land use maps are available for all images in the labeled training partition of the dataset. We consider here Urban Atlas classes at the second hierarchical level. The available classes are listed below (a lookup-table snippet follows the list):

- 0: No information

- 1: Urban fabric

- 2: Industrial, commercial, public, military, private and transport units

- 3: Mine, dump and construction sites

- 4: Artificial non-agricultural vegetated areas

- 5: Arable land (annual crops)

- 6: Permanent crops

- 7: Pastures

- 8: Complex and mixed cultivation patterns

- 9: Orchards at the fringe of urban classes

- 10: Forests

- 11: Herbaceous vegetation associations

- 12: Open spaces with little or no vegetation

- 13: Wetlands

- 14: Water

- 15: Clouds and shadows
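
For convenience, this nomenclature can be encoded as a simple Python lookup table, as in the snippet below (assuming the label rasters store these integer IDs as pixel values, consistent with their generation by rasterizing Urban Atlas 2012):

# Urban Atlas second-level classes used by MiniFrance, as listed above
MINIFRANCE_CLASSES = {
    0: "No information",
    1: "Urban fabric",
    2: "Industrial, commercial, public, military, private and transport units",
    3: "Mine, dump and construction sites",
    4: "Artificial non-agricultural vegetated areas",
    5: "Arable land (annual crops)",
    6: "Permanent crops",
    7: "Pastures",
    8: "Complex and mixed cultivation patterns",
    9: "Orchards at the fringe of urban classes",
    10: "Forests",
    11: "Herbaceous vegetation associations",
    12: "Open spaces with little or no vegetation",
    13: "Wetlands",
    14: "Water",
    15: "Clouds and shadows",
}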

##################################################

Citation: If you use this dataset for your work, please use the following citation:

@article{castillo2020minifrance,
title={{Semi-Supervised Semantic Segmentation in Earth Observation: The MiniFrance Suite, Dataset Analysis and Multi-task Network Study}},
author={Castillo-Navarro, Javiera and Audebert, Nicolas and Boulch, Alexandre and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien},
journal={Under review.},
year={2020}
}

##################################################

Copyright:

The images in this dataset are released under IGN's "licence ouverte". More information can be found at http://www.ign.fr/institut/activites/lign-lopen-data

The maps used to generate the labels in this dataset come from the Copernicus program, and as such are subject to the terms described here: https://land.copernicus.eu/local/urban-atlas/urban-atlas-2012?tab=metadata


This dataset was created from all Landsat-8 images of South America in the year 2018. More than 31 thousand images were processed (15 TB of data), and active fire pixels were found in approximately half of them. The Landsat-8 sensor has a spatial resolution of 30 meters (one panchromatic band of 15 m), a radiometric resolution of 16 bits and a temporal resolution (revisit) of 16 days. The images in our dataset are in TIFF (GeoTIFF) format with 10 bands (excluding the 15 m panchromatic band).

Instructions: 

The images in our dataset are in georeferenced TIFF (GeoTIFF) format with 10 bands. We cropped the original Landsat-8 scenes (~7,600 x 7,600 pixels) into image patches of 128 x 128 pixels using a stride of 64 pixels (vertical and horizontal), i.e., a 50% overlap. The masks are in binary format, where True (1) represents fire and False (0) represents background; they were generated from the conditions set by Schroeder et al. (2016). Applying the Schroeder conditions to each patch produced over 1 million patches with at least one fire pixel, together with the same number of patches with no fire pixels, randomly selected from the original images.
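
As an illustration, the cropping scheme corresponds to the following Python sketch (a minimal example, not the actual preprocessing code used to build the dataset):

import numpy as np

def extract_patches(scene, size=128, stride=64):
    # Crop a (rows, cols, bands) array into size x size patches with the given stride
    rows, cols = scene.shape[:2]
    patches = []
    for i in range(0, rows - size + 1, stride):
        for j in range(0, cols - size + 1, stride):
            patches.append(scene[i:i + size, j:j + size])
    return patches

# Dummy 10-band scene for illustration (real Landsat-8 scenes are ~7,600 x 7,600 pixels)
scene = np.zeros((1024, 1024, 10), dtype=np.uint16)
patches = extract_patches(scene)   # 15 x 15 = 225 patches at this dummy size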

The dataset is organized as follows.

It is divided into South American regions for easy downloading. For each region of South America there is one zip file with active fire images, one with the corresponding masks, and one with non-fire images. For example:

 - Uruguay-fire.zip

 - Uruguay-mask.zip

 - Uruguay-nonfire.zip

Within each South American region zip file there are the corresponding zip files for each Landsat-8 WRS (Worldwide Reference System) path/row. For example:

- Uruguay-fire.zip:

      - 222083.zip

      - 222084.zip

      - 223082.zip

      - 223083.zip

      - 223084.zip

      - 224082.zip

      - 224083.zip

      - 224084.zip

      - 225081.zip

      - 225082.zip

      - 225083.zip

      - 225084.zip

Within each of these Landsat-8 WRS zip files are all the corresponding 128 x 128 image patches for the year 2018.
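
The nested archives can be read programmatically, as in this sketch (assuming the layout above; the file name is illustrative):

import io
import zipfile

# List how many patch files each WRS sub-archive of a region contains
with zipfile.ZipFile("Uruguay-fire.zip") as region:
    for name in region.namelist():
        if name.endswith(".zip"):  # one sub-archive per Landsat-8 WRS path/row
            with zipfile.ZipFile(io.BytesIO(region.read(name))) as wrs:
                print(name, len(wrs.namelist()), "files")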

 


This dataset extends the Urban Semantic 3D (US3D) dataset developed and first released for the 2019 IEEE GRSS Data Fusion Contest (DFC19). We provide additional geographic tiles to supplement the DFC19 training data, as well as new data for each tile to enable training and validation of models that predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. We also extend the DFC19 data from Jacksonville, Florida and Omaha, Nebraska with new geographic tiles from Atlanta, Georgia.

Instructions: 

Detailed information about the data content, organization, and file formats is provided in the README files. For image data, individual TAR files for training and validation are provided for each city. Extra training data is also provided in separate TAR files. For point cloud data, individual ZIP files are provided for each city from DFC19. These include the original DFC19 training and validation point clouds with full UTM coordinates to enable experiments requiring geolocation.

Original DFC19 dataset:

https://ieee-dataport.org/open-access/data-fusion-contest-2019-dfc2019

We added new reference data to this extended US3D dataset to enable training and validation of models to predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. For details, please see our CVPR paper.

CVPR paper on geocentric pose:

http://openaccess.thecvf.com/content_CVPR_2020/papers/Christie_Learning_...

Source Data Attribution

All data used to produce the extended US3D dataset is publicly sourced. Data for DFC19 was derived from public satellite images released for IARPA CORE3D. New data for Atlanta was derived from public satellite images released for SpaceNet 4. Commercial satellite images were provided courtesy of DigitalGlobe. U.S. Cities LiDAR and vector data were made publicly available by the Homeland Security Infrastructure Program.

CORE3D source data: https://spacenetchallenge.github.io/datasets/Core_3D_summary.html

SpaceNet 4 source data: https://spacenetchallenge.github.io/datasets/spacenet-OffNadir-summary.html

Test Sets

Validation data from DFC19 is extended here to include additional data for each tile. Test data is not provided for the DFC19 cities or for Atlanta. Test sets are available for the DFC19 challenge problems on CodaLab leaderboards. We plan to make test sets for all cities available for the geocentric pose problem in the near future. 

Single-view semantic 3D: https://competitions.codalab.org/competitions/20208

Pairwise semantic stereo: https://competitions.codalab.org/competitions/20212

Multi-view semantic stereo: https://competitions.codalab.org/competitions/20216

3D point cloud classification: https://competitions.codalab.org/competitions/20217

References

If you use the extended US3D dataset, please cite the following papers:

G. Christie, R. Munoz, K. Foster, S. Hagstrom, G. D. Hager, and M. Z. Brown, "Learning Geocentric Object Pose in Oblique Monocular Images," Proc. of Computer Vision and Pattern Recognition, 2020.

B. Le Saux, N. Yokoya, R. Hansch, and M. Brown, "2019 IEEE GRSS Data Fusion Contest: Large-Scale Semantic 3D Reconstruction [Technical Committees]", IEEE Geoscience and Remote Sensing Magazine, 2019.

M. Bosch, K. Foster, G. Christie, S. Wang, G. D. Hager, and M. Brown, "Semantic Stereo for Incidental Satellite Images," Proc. of Winter Applications of Computer Vision, 2019.


This dataset includes the following data supporting the manuscript "Datacube Parametrization-Based Model for Rough Surface Polarimetric Bistatic Scattering", submitted to IEEE Transactions on Geoscience and Remote Sensing.

- LUT of coefficient c from fitting the contour level bounds
- LUT of coefficient c from fitting the contour center shifts
- Specular scattering coefficients from the SEBCM-simulated datacube

Depths to the various subsurface anomalies are the primary interest in all applications of magnetic methods of geophysical prospecting. In correct interpretations of subsurface geologic structure, depths to the subsurface geologic features of interest are more valuable than any other property.

Instructions: 

Neural network pattern recognition helps to select the appropriate data sets, create and train the network, and evaluate its performance using cross-entropy loss and confusion matrices, in MATLAB combined with Python. The network is a two-layer feed-forward network for pattern recognition, with six input features (the SI values), one hidden layer and a softmax output layer. The method classifies the attribute vectors well when a sufficient number of neurons is selected for the hidden layer. In this study, the six inputs obtained from the various SI values were fed to a hidden layer of one hundred (100) neurons and weights, combined with an output layer of six neurons and weights, to generate the six final outputs that represent the depths for each of the SI values.
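
The described topology roughly corresponds to this scikit-learn sketch (a stand-in for the MATLAB pattern-recognition workflow described above; the SI features and depth classes below are random placeholders):

import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: six SI-derived input features per sample, six depth classes
X = np.random.rand(200, 6)
y = np.random.randint(0, 6, size=200)

# Two-layer feed-forward network: 100 hidden neurons and a softmax output,
# trained by minimizing cross-entropy (the default multiclass loss)
net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000)
net.fit(X, y)
print(net.predict(X[:5]))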

