Over the last decades, Earth observation has brought a wealth of new perspectives, from geosciences to human activity monitoring. As more data became available, artificial intelligence techniques led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can be used for problems that cannot be tackled with optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with extensive cloud cover.

Instructions: 

The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, the Middle East, and Australia. Each time series is located in a folder named with the sequence ID (0001... 0336).

Two JSON files, S1list.json and S2list.json, describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images the orbit (ASCENDING or DESCENDING) is also indicated.
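
The metadata can be consumed with a few lines of standard-library Python. This is a minimal sketch: the exact key names (e.g. "FLOODING", "FULL-DATA-COVERAGE", "orbit", "filename") are assumptions based on the description above, not a verified schema, and the inline sample stands in for a real S1list.json on disk.

```python
import json

# Hypothetical S1list.json-style content; key names are assumptions
# inferred from the dataset description, not the official schema.
sample = {
    "count": 2,
    "folder": "0001",
    "images": {
        "0": {"date": "2019-01-05", "FLOODING": True,
              "FULL-DATA-COVERAGE": True, "orbit": "ASCENDING",
              "filename": "S1A_0001_01"},
        "1": {"date": "2019-01-17", "FLOODING": False,
              "FULL-DATA-COVERAGE": True, "orbit": "DESCENDING",
              "filename": "S1A_0001_02"},
    },
}

def flooded_acquisitions(metadata):
    """Return filenames of fully covered acquisitions labeled as flooded."""
    return [img["filename"]
            for img in metadata["images"].values()
            if img["FLOODING"] and img["FULL-DATA-COVERAGE"]]

text = json.dumps(sample)                      # stands in for the file on disk
print(flooded_acquisitions(json.loads(text)))  # ['S1A_0001_01']
```

In practice one would replace `json.loads(text)` with `json.load(open("0001/S1list.json"))` for each sequence folder.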

The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task [1] and are provided with Level-2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.

The Sentinel-1 images were added to the dataset. They are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarization channels.

The original dataset was split into 269 sequences for training and 68 sequences for testing. Here, all sequences are in the same folder.

 

To use this dataset please cite the following papers:

Flood Detection in Time Series of Optical and SAR Images, C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, pp. 1343-1346

The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop

 

This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.

[1] The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop


The dataset contains two sets of planetary models used in the Reproducibility Challenge Student Cluster Competition at the SC19 conference. During this challenge, the competitors reproduced parts of the SC18 paper "Computing planetary interior normal modes with a highly parallel polynomial filtering eigensolver" by Shi, Jia, et al. (https://doi.org/10.1109/SC.2018.00074)

 


This multispectral remote sensing image covers a 1024 x 1024 pixel region around the city of Kolkata, India, and was obtained with the LISS-III sensor. There are four spectral bands: two from the visible spectrum (green and red) and two from the infrared spectrum (near-infrared and shortwave infrared). The spatial resolution is 23.5 m and the spectral range is 0.52 - 1.7 μm.


The Dataset

We introduce a novel large-scale dataset for semi-supervised semantic segmentation in Earth Observation: the MiniFrance suite.

Instructions: 

##################################################

The MiniFrance Suite

##################################################

Authors:

Javiera Castillo Navarro, javiera.castillo_navarro@onera.fr

Bertrand Le Saux, bls@ieee.org

Alexandre Boulch, alexandre.boulch@valeo.com

Nicolas Audebert, nicolas.audebert@cnam.fr

Sébastien Lefèvre, sebastien.lefevre@irisa.fr

##################################################

About:

This dataset contains very high resolution RGB aerial images over 16 cities and their surroundings from different regions in France, obtained from IGN's BD ORTHO database (images from 2012 to 2014). Pixel-level land use and land cover annotations are provided, generated by rasterizing Urban Atlas 2012.

##################################################

This dataset is partitioned into three parts, defined by conurbations:

1. Labeled training data: data over Nice and Nantes/Saint Nazaire.

2. Unlabeled training data: data over Le Mans, Brest, Lorient, Caen, Calais/Dunkerque and Saint-Brieuc.

3. Test data: data over Marseille/Martigues, Rennes, Angers, Quimper, Vannes, Clermont-Ferrand, Cherbourg, Lille.

Due to the large-scale nature of the dataset, it is divided in several files to download:

- Images for the labeled training partition: contains RGB aerial images for French departments in the labeled training partition.

- Images for the unlabeled training partition (parts 1, 2 and 3): contain RGB aerial images for French departments in the unlabeled training partition.

- Images for the test partition (parts 1, 2, 3 and 4): contain RGB aerial images for French departments in the partition reserved for evaluation.

- Labels for the labeled partition

- Lists of files by conurbation and partition: contain .txt files that list all images included by city.

Land use maps are available for all images in the labeled training partition of the dataset. We consider here Urban Atlas classes at the second hierarchical level. Available classes are:

- 0: No information

- 1: Urban fabric

- 2: Industrial, commercial, public, military, private and transport units

- 3: Mine, dump and construction sites

- 4: Artificial non-agricultural vegetated areas

- 5: Arable land (annual crops)

- 6: Permanent crops

- 7: Pastures

- 8: Complex and mixed cultivation patterns

- 9: Orchards at the fringe of urban classes

- 10: Forests

- 11: Herbaceous vegetation associations

- 12: Open spaces with little or no vegetation

- 13: Wetlands

- 14: Water

- 15: Clouds and shadows
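
The class list above translates directly into a lookup table for decoding label rasters into readable names. A minimal sketch, using only the indices and names listed in this description:

```python
# Urban Atlas level-2 class indices used in the MiniFrance labels,
# copied from the class list above.
MINIFRANCE_CLASSES = {
    0: "No information",
    1: "Industrial, commercial, public, military, private and transport units",
    2: "Industrial, commercial, public, military, private and transport units",
    3: "Mine, dump and construction sites",
    4: "Artificial non-agricultural vegetated areas",
    5: "Arable land (annual crops)",
    6: "Permanent crops",
    7: "Pastures",
    8: "Complex and mixed cultivation patterns",
    9: "Orchards at the fringe of urban classes",
    10: "Forests",
    11: "Herbaceous vegetation associations",
    12: "Open spaces with little or no vegetation",
    13: "Wetlands",
    14: "Water",
    15: "Clouds and shadows",
}
MINIFRANCE_CLASSES[1] = "Urban fabric"  # index 1 is Urban fabric

print(MINIFRANCE_CLASSES[14])  # Water
```

A label raster pixel with value 14 would then map to "Water", and so on for the other 15 classes.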

##################################################

Citation: If you use this dataset for your work, please use the following citation:

@article{castillo2020minifrance,
title={{Semi-Supervised Semantic Segmentation in Earth Observation: The MiniFrance Suite, Dataset Analysis and Multi-task Network Study}},
author={Castillo-Navarro, Javiera and Audebert, Nicolas and Boulch, Alexandre and {Le Saux}, Bertrand and Lef{\`e}vre, S{\'e}bastien},
journal={Under review.},
year={2020}
}

##################################################

Copyright:

The images in this dataset are released under IGN's "licence ouverte". More information can be found at http://www.ign.fr/institut/activites/lign-lopen-data

The maps used to generate the labels in this dataset come from the Copernicus program, and as such are subject to the terms described here: https://land.copernicus.eu/local/urban-atlas/urban-atlas-2012?tab=metadata


This dataset was created from all Landsat-8 images of South America in the year 2018. More than 31 thousand images were processed (15 TB of data), and active fire pixels were found in approximately half of them. The Landsat-8 sensor has a spatial resolution of 30 m (15 m for the panchromatic band), a radiometric resolution of 16 bits, and a temporal resolution (revisit) of 16 days. The images in our dataset are in TIFF (GeoTIFF) format with 10 bands (excluding the 15 m panchromatic band).

Instructions: 

The images in our dataset are in georeferenced TIFF (GeoTIFF) format with 10 bands. We cropped the original Landsat-8 scenes (~7,600 x 7,600 pixels) into 128 x 128 pixel patches using a stride of 64 pixels (vertical and horizontal). The masks are binary, where True (1) represents fire and False (0) represents background; they were generated from the conditions set by Schroeder et al. (2016). Applying the Schroeder conditions to each patch produced over 1 million patches with at least one fire pixel, plus the same number of patches with no fire pixels, randomly selected from the original images.
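
The sliding-window cropping described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; the patch size (128) and stride (64) come from the description, and only windows that fit entirely inside the scene are kept.

```python
def patch_offsets(height, width, patch=128, stride=64):
    """Top-left (row, col) offsets of all patch x patch windows taken
    with the given stride, keeping only windows fully inside the scene."""
    rows = range(0, height - patch + 1, stride)
    cols = range(0, width - patch + 1, stride)
    return [(r, c) for r in rows for c in cols]

# A ~7,600 x 7,600 Landsat-8 scene yields over 13,000 overlapping
# 128 x 128 windows at stride 64 (117 positions per axis).
offsets = patch_offsets(7600, 7600)
print(len(offsets))  # 13689
```

Each offset would then index a 128 x 128 crop of the 10-band raster and of the corresponding fire mask.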

The dataset is organized as follows.

It is divided into South American regions for easy downloading. For each region of South America there is a zip file for active fire images, one for their masks, and one for non-fire images. For example:

 - Uruguay-fire.zip

 - Uruguay-mask.zip

 - Uruguay-nonfire.zip

Within each South American region's zip file there are corresponding zip files for each Landsat-8 WRS (Worldwide Reference System) path/row. For example:

- Uruguay-fire.zip;

      - 222083.zip

      - 222084.zip

      - 223082.zip

      - 223083.zip

      - 223084.zip

      - 224082.zip

      - 224083.zip

      - 224084.zip

      - 225081.zip

      - 225082.zip

      - 225083.zip

      - 225084.zip

Within each of these Landsat-8 WRS zip files are all the corresponding 128 x 128 image patches for the year 2018.
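
The inner zip names such as 222083.zip encode the Landsat WRS path and row as two three-digit groups. A small helper, assuming that 3+3-digit layout from the file names shown above:

```python
def wrs_path_row(name):
    """Split a 6-digit Landsat WRS identifier like '222083' into
    (path, row). The 3+3 digit layout is assumed from the zip file
    names in this dataset's structure."""
    return int(name[:3]), int(name[3:])

print(wrs_path_row("222083"))  # (222, 83)
```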

 


This dataset extends the Urban Semantic 3D (US3D) dataset developed and first released for the 2019 IEEE GRSS Data Fusion Contest (DFC19). We provide additional geographic tiles to supplement the DFC19 training data, as well as new data for each tile to enable training and validation of models to predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. We also complement the DFC19 data from Jacksonville, Florida and Omaha, Nebraska with new geographic tiles from Atlanta, Georgia.

Instructions: 

Detailed information about the data content, organization, and file formats is provided in the README files. For image data, individual TAR files for training and validation are provided for each city. Extra training data is also provided in separate TAR files. For point cloud data, individual ZIP files are provided for each city from DFC19. These include the original DFC19 training and validation point clouds with full UTM coordinates to enable experiments requiring geolocation.

Original DFC19 dataset:

https://ieee-dataport.org/open-access/data-fusion-contest-2019-dfc2019

We added new reference data to this extended US3D dataset to enable training and validation of models to predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. For details, please see our CVPR paper.

CVPR paper on geocentric pose:

http://openaccess.thecvf.com/content_CVPR_2020/papers/Christie_Learning_...

Source Data Attribution

All data used to produce the extended US3D dataset is publicly sourced. Data for DFC19 was derived from public satellite images released for IARPA CORE3D. New data for Atlanta was derived from public satellite images released for SpaceNet 4. Commercial satellite images were provided courtesy of DigitalGlobe. U.S. Cities LiDAR and vector data were made publicly available by the Homeland Security Infrastructure Program.

CORE3D source data: https://spacenetchallenge.github.io/datasets/Core_3D_summary.html

SpaceNet 4 source data: https://spacenetchallenge.github.io/datasets/spacenet-OffNadir-summary.html

Test Sets

Validation data from DFC19 is extended here to include additional data for each tile. Test data is not provided for the DFC19 cities or for Atlanta. Test sets are available for the DFC19 challenge problems on CodaLab leaderboards. We plan to make test sets for all cities available for the geocentric pose problem in the near future. 

Single-view semantic 3D: https://competitions.codalab.org/competitions/20208

Pairwise semantic stereo: https://competitions.codalab.org/competitions/20212

Multi-view semantic stereo: https://competitions.codalab.org/competitions/20216

3D point cloud classification: https://competitions.codalab.org/competitions/20217

References

If you use the extended US3D dataset, please cite the following papers:

G. Christie, R. Munoz, K. Foster, S. Hagstrom, G. D. Hager, and M. Z. Brown, "Learning Geocentric Object Pose in Oblique Monocular Images," Proc. of Computer Vision and Pattern Recognition, 2020.

B. Le Saux, N. Yokoya, R. Hansch, and M. Brown, "2019 IEEE GRSS Data Fusion Contest: Large-Scale Semantic 3D Reconstruction [Technical Committees]", IEEE Geoscience and Remote Sensing Magazine, 2019.

M. Bosch, K. Foster, G. Christie, S. Wang, G. D. Hager, and M. Brown, "Semantic Stereo for Incidental Satellite Images," Proc. of Winter Applications of Computer Vision, 2019.


This dataset includes the following data in supporting the submitted manuscript 'Datacube Parametrization-Based Model for Rough Surface Polarimetric Bistatic Scattering' to IEEE Transactions on Geoscience and Remote Sensing. 

  • LUT of coefficient c from fitting the contour level bounds
  • LUT of coefficient c from fitting the contour center shifts
  • specular scattering coefficients from the SEBCM simulated datacube

Depths to subsurface anomalies have been the primary interest in all applications of magnetic methods of geophysical prospecting. For correct interpretation of subsurface geologic structure, the depths to the geologic features of interest are more valuable than any other property.

Instructions: 

The Neural Network Pattern Recognition tool helps select the appropriate data sets, create and train the network, and evaluate its performance using cross-entropy and confusion matrices, in MATLAB combined with Python. The network is a two-layer feed-forward network that solves the pattern recognition problem with six input features (the SI values), a hidden layer, and a softmax output layer. The method classifies vector attributes well when a sufficient number of neurons in the hidden layer is selected. In this study, the six SI input values were fed to a hidden layer of one hundred (100) neurons, whose weighted outputs were combined in a six-neuron output layer to generate the six final outputs that represent each of the SI value depths, as shown.
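
The described architecture (6 inputs, 100 hidden units, 6-way softmax output) can be sketched as a plain forward pass. This is an illustrative toy with random placeholder weights, not the trained MATLAB network; the layer sizes come from the description above.

```python
import math
import random

random.seed(0)

# Two-layer feed-forward net: 6 inputs (SI values) -> 100 hidden
# units (tanh) -> 6 softmax outputs. Weights are random placeholders.
N_IN, N_HIDDEN, N_OUT = 6, 100, 6
W1 = [[random.gauss(0, 0.1) for _ in range(N_IN)] for _ in range(N_HIDDEN)]
W2 = [[random.gauss(0, 0.1) for _ in range(N_HIDDEN)] for _ in range(N_OUT)]

def forward(x):
    """One forward pass; returns softmax probabilities over 6 classes."""
    hidden = [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    m = max(logits)                      # subtract max for numerical stability
    exp = [math.exp(z - m) for z in logits]
    s = sum(exp)
    return [e / s for e in exp]

probs = forward([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
print(round(sum(probs), 6))  # 1.0 -- softmax outputs sum to one
```

Training (cross-entropy loss, backpropagation) is omitted; the original work used MATLAB's pattern-recognition tooling for that step.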


This dataset contains coverage of four types of geospatial events in Indonesian online news portals: flood, traffic jam, earthquake, and fire. The corpus is composed of 926 manually annotated, disambiguated, and event-extracted sentences, filtered from 83 of the 645,679 documents in our earlier news corpus based on four major geospatial events: flood, earthquake, fire, and accidents.

Source: detik.com, kompas.com, cnnindonesia.com

Instructions: 

Download the dataset from the Download tab.

 

 

The main event extraction corpus is event-geoparsing-corpus.txt.

The disambiguations are listed in toponyms-disambiguated.txt.

 

 

event-geoparsing-corpus.txt notes:

Each document in the corpus is separated by ===

Each sentence within a document is separated by an empty line.

A regular line has four elements (word/POS tag/Event/Argument), e.g.:

- Kerabat/NNP/O/O

- RSCM/NN/B-ORG/Hospital-Arg

For LOC entities, there are two additional fields: (latitude, longitude) / <administrative_level>, e.g.:

Jakarta/NNP/B-PLOC/Published-Arg/(-6.197602429787846, 106.83139222722116)/1

toponyms-disambiguated.txt notes:

Contains all toponym (LOC) entities from the corpus, each starting with * (star symbol). Each starred toponym is followed by its potential candidate referents.

The correct disambiguation starts with -->; all other candidates start with --

Every document is also separated by ===
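
The slash-separated token format above can be parsed mechanically. This is an illustrative sketch based on the field layout described in the notes, not the official parser:

```python
def parse_token(line):
    """Parse one corpus line of the form word/POS/Event/Argument,
    with two optional extra fields for LOC entities:
    (latitude, longitude) and administrative level."""
    parts = line.split("/")
    token = {"word": parts[0], "pos": parts[1],
             "event": parts[2], "argument": parts[3]}
    if len(parts) >= 6:  # LOC entity with coordinates and admin level
        lat, lon = parts[4].strip("()").split(",")
        token["coords"] = (float(lat), float(lon))
        token["admin_level"] = int(parts[5])
    return token

print(parse_token("RSCM/NN/B-ORG/Hospital-Arg")["argument"])
loc = parse_token(
    "Jakarta/NNP/B-PLOC/Published-Arg/(-6.197602429787846, 106.83139222722116)/1"
)
print(loc["admin_level"])  # 1
```

Splitting documents on === and sentences on empty lines, as described above, would wrap around this per-token parser.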

 

Full list of argument roles for each event subtype:

 

Subtype: FIRE-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Published-Arg: City of publication (LOC)
3. DeathVictim-Arg: How many people killed (ARG)
4. WoundVictim-Arg: How many people wounded (ARG)
5. Place-Arg: Geopolitical entities of the place (LOC)
6. Facility-Arg: Building related (ORG)
7. Officer-Arg: Officer related (ORG)
8. Time-Arg: Time of the event (ARG)
9. Street-Arg: Street of the place (ARG)
10. Official-Arg: Official related or official statement (ORG)
11. Hospital-Arg: Hospital related (ORG)
12. HouseBurnt-Arg: Number of houses burnt (ARG)
13. AffectedRT-Arg: Number of RTs affected (ARG)
14. DispatchedTrucks-Arg: Number of firetrucks dispatched (ARG)
15. AffectedFamily-Arg: Number of families affected (ARG)
16. MonetaryLoss-Arg: Loss of money (ARG)

Subtype: ACCIDENT-EVENT

1. Reporter-Arg: News company (ARG)
2. Published-Arg: City of publication (LOC)
3. Point-Arg: Location offset of the accident (ARG)
4. Vehicle-Arg: Type of vehicle (ARG)
5. Plate-Arg: License plate (ARG)
6. Place-Arg: Place of accident (LOC)
7. Hospital-Arg: Hospital related (ORG)
8. From-Arg: Origin of collided vehicle (LOC)
9. To-Arg: Destination of collided vehicle (LOC)
10. Time-Arg: Time of the event (ARG)
11. AffectedVehicle-Arg: Number of vehicles related (ARG)
12. MonetaryLoss-Arg: Loss of money (ARG)

Subtype: QUAKE-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Duration-Arg: Duration of the quake (ARG)
3. Central-Arg: Center of the quake (ARG)
4. Depth-Arg: Depth of the quake (LOC)
5. Hospital-Arg: Hospital related (ORG)
6. Time-Arg: Time of the event (ARG)
7. AffectedFacility-Arg: Number of affected facilities (ARG)
8. AffectedHouse-Arg: Number of affected houses (ARG)
9. AffectedPeople-Arg: Number of affected people (ARG)
10. Strength-Arg: Quake's reported strength (ARG)

Subtype: FLOOD-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Cause-Arg: Cause of the flood (ARG, or EVE of RAIN-EVENT or LANDSLIDE-EVENT)
3. Height-Arg: Height of water (ARG)
4. Place-Arg: Place of flood (LOC)
5. AffectedDistrict-Arg: Number of districts affected
6. AffectedHouse-Arg: Number of houses affected
7. AffectedVillage-Arg: Number of villages affected
8. AffectedFamily-Arg: Number of families affected
9. AffectedCity-Arg: Number of cities affected
10. AffectedPeople-Arg: Number of people affected
11. Hospital-Arg: Hospital related (ORG)
12. Time-Arg: Time of the event (ARG)
13. Facility-Arg: Facility affected by flood
14. AffectedFields-Arg: Area of fields (farms) (ARG)

 

 
