This dataset was created from all Landsat-8 images of South America in the year 2018. More than 31 thousand images (15 TB of data) were processed, and active fire pixels were found in approximately half of them. The Landsat-8 sensor has a spatial resolution of 30 m (with one 15 m panchromatic band), a radiometric resolution of 16 bits, and a temporal resolution (revisit time) of 16 days. The images in our dataset are in TIFF (GeoTIFF) format with 10 bands (excluding the 15 m panchromatic band).

Instructions: 

The images in our dataset are in georeferenced TIFF (GeoTIFF) format with 10 bands. We cropped the original Landsat-8 scenes (~7,600 x 7,600 pixels) into 128 x 128 pixel patches using a stride of 64 pixels (vertical and horizontal), so adjacent patches overlap. The masks are binary, where True (1) represents fire and False (0) represents background; they were generated from the conditions set by Schroeder et al. (2016). Processing each patch with the Schroeder conditions produced over 1 million patches with at least one fire pixel, plus the same number of patches with no fire pixels, randomly selected from the original images.
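As an illustration of the cropping scheme described above, here is a minimal numpy sketch; the function and array names are our own, not part of the dataset tooling:

```python
import numpy as np

def extract_patches(scene, patch=128, stride=64):
    """Crop a scene of shape (H, W, bands) into overlapping
    patch x patch tiles with the given stride."""
    h, w = scene.shape[:2]
    tiles = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            tiles.append(scene[i:i + patch, j:j + patch])
    return np.stack(tiles)

# A small synthetic "scene" stands in for a ~7,600 x 7,600 Landsat-8 scene.
scene = np.zeros((256, 256, 10), dtype=np.uint16)
print(extract_patches(scene).shape)  # (9, 128, 128, 10)
```

With a 64-pixel stride each interior pixel is covered by up to four patches, which is what produces the half-patch overlap mentioned above.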

The dataset is organized as follows.

It is divided into South American regions for easy downloading. For each region there are three zip files: active-fire images, their masks, and non-fire images. For example:

 - Uruguay-fire.zip

 - Uruguay-mask.zip

 - Uruguay-nonfire.zip

Within each regional zip file there are zip files corresponding to each Landsat-8 WRS (Worldwide Reference System) path/row. For example:

- Uruguay-fire.zip:

      - 222083.zip

      - 222084.zip

      - 223082.zip

      - 223083.zip

      - 223084.zip

      - 224082.zip

      - 224083.zip

      - 224084.zip

      - 225081.zip

      - 225082.zip

      - 225083.zip

      - 225084.zip

Within each of these Landsat-8 WRS zip files there are all the corresponding 128x128 image patches for the year 2018. 
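The nested zip layout can be walked without unpacking everything to disk; a sketch using Python's standard zipfile module (the file names in the demo are made up):

```python
import io
import zipfile

def list_patches(region_zip_path):
    """List patch files inside each WRS zip nested in a region zip."""
    patches = []
    with zipfile.ZipFile(region_zip_path) as region:
        for wrs_name in region.namelist():
            if not wrs_name.endswith(".zip"):
                continue
            with zipfile.ZipFile(io.BytesIO(region.read(wrs_name))) as inner:
                patches += [f"{wrs_name}:{p}" for p in inner.namelist()]
    return patches

# Build a tiny nested zip in memory to demonstrate the traversal.
inner_buf = io.BytesIO()
with zipfile.ZipFile(inner_buf, "w") as z:
    z.writestr("patch_000001.tif", b"...")
outer_buf = io.BytesIO()
with zipfile.ZipFile(outer_buf, "w") as z:
    z.writestr("222083.zip", inner_buf.getvalue())
print(list_patches(outer_buf))  # ['222083.zip:patch_000001.tif']
```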

 


This dataset extends the Urban Semantic 3D (US3D) dataset developed and first released for the 2019 IEEE GRSS Data Fusion Contest (DFC19). We provide additional geographic tiles to supplement the DFC19 training data, as well as new data for each tile to enable training and validation of models that predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. We also supplement the DFC19 data from Jacksonville, Florida, and Omaha, Nebraska, with new geographic tiles from Atlanta, Georgia.

Instructions: 

Detailed information about the data content, organization, and file formats is provided in the README files. For image data, individual TAR files for training and validation are provided for each city. Extra training data is also provided in separate TAR files. For point cloud data, individual ZIP files are provided for each city from DFC19. These include the original DFC19 training and validation point clouds with full UTM coordinates to enable experiments requiring geolocation.

Original DFC19 dataset:

https://ieee-dataport.org/open-access/data-fusion-contest-2019-dfc2019

We added new reference data to this extended US3D dataset to enable training and validation of models to predict geocentric pose, defined as an object's height above ground and orientation with respect to gravity. For details, please see our CVPR paper.

CVPR paper on geocentric pose:

http://openaccess.thecvf.com/content_CVPR_2020/papers/Christie_Learning_...

Source Data Attribution

All data used to produce the extended US3D dataset is publicly sourced. Data for DFC19 was derived from public satellite images released for IARPA CORE3D. New data for Atlanta was derived from public satellite images released for SpaceNet 4. Commercial satellite images were provided courtesy of DigitalGlobe. U.S. Cities LiDAR and vector data were made publicly available by the Homeland Security Infrastructure Program.

CORE3D source data: https://spacenetchallenge.github.io/datasets/Core_3D_summary.html

SpaceNet 4 source data: https://spacenetchallenge.github.io/datasets/spacenet-OffNadir-summary.html

Test Sets

Validation data from DFC19 is extended here to include additional data for each tile. Test data is not provided for the DFC19 cities or for Atlanta. Test sets are available for the DFC19 challenge problems on CodaLab leaderboards. We plan to make test sets for all cities available for the geocentric pose problem in the near future. 

Single-view semantic 3D: https://competitions.codalab.org/competitions/20208

Pairwise semantic stereo: https://competitions.codalab.org/competitions/20212

Multi-view semantic stereo: https://competitions.codalab.org/competitions/20216

3D point cloud classification: https://competitions.codalab.org/competitions/20217

References

If you use the extended US3D dataset, please cite the following papers:

G. Christie, R. Munoz, K. Foster, S. Hagstrom, G. D. Hager, and M. Z. Brown, "Learning Geocentric Object Pose in Oblique Monocular Images," Proc. of Computer Vision and Pattern Recognition, 2020.

B. Le Saux, N. Yokoya, R. Hansch, and M. Brown, "2019 IEEE GRSS Data Fusion Contest: Large-Scale Semantic 3D Reconstruction [Technical Committees]", IEEE Geoscience and Remote Sensing Magazine, 2019.

M. Bosch, K. Foster, G. Christie, S. Wang, G. D. Hager, and M. Brown, "Semantic Stereo for Incidental Satellite Images," Proc. of Winter Applications of Computer Vision, 2019.


This dataset includes the following data in supporting the submitted manuscript 'Datacube Parametrization-Based Model for Rough Surface Polarimetric Bistatic Scattering' to IEEE Transactions on Geoscience and Remote Sensing. 

  • LUT of coefficient c from fitting the contour level bounds
  • LUT of coefficient c from fitting the contour center shifts
  • specular scattering coefficients from the SEBCM simulated datacube

Depths to subsurface anomalies have been the primary interest in all applications of magnetic methods of geophysical prospecting. Depths to the subsurface geologic features of interest are more valuable than any other property for correct interpretation of subsurface geologic structure.

Instructions: 

The Neural Network Pattern Recognition tool helps select the appropriate data sets, create and train the network, and evaluate its performance using cross-entropy loss and confusion matrices in MATLAB. The network is a two-layer feed-forward network for pattern recognition, with six inputs (the SI values), a hidden layer, and a softmax output layer. The method classifies vector attributes well when a sufficient number of neurons is selected for the hidden layer. In this study, the six SI input values were fed into a hidden layer of one hundred (100) neurons and weights, combined with an output layer of six neurons and weights, to generate the six final outputs that represent the depths for each of the SI values.
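A rough numpy sketch of the forward pass of such a network (6 SI inputs, 100 hidden neurons, softmax over 6 outputs); the weights here are random placeholders, not the trained values from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Placeholder weights: 6 SI inputs -> 100 hidden neurons -> 6 outputs.
W1, b1 = rng.normal(size=(6, 100)), np.zeros(100)
W2, b2 = rng.normal(size=(100, 6)), np.zeros(6)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden layer activation
    return softmax(h @ W2 + b2)     # class probabilities over 6 outputs

probs = forward(rng.normal(size=(4, 6)))  # 4 samples of 6 SI values
print(probs.shape)                        # (4, 6)
```

Each output row is a probability distribution over the six SI-depth classes.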


This dataset contains coverage of four types of geospatial events in Indonesian online news portals: flood, traffic jam, earthquake, and fire. The corpus is composed of 926 manually annotated, disambiguated, and event-extracted sentences that were filtered from 83 of the 645,679 documents in our earlier news corpus, based on four major geospatial events: flood, earthquake, fire, and accidents.

Source: detik.com, kompas.com, cnnindonesia.com

Instructions: 

Download the dataset from the Download tab.

 

 

The main event extraction corpus is event-geoparsing-corpus.txt.

The disambiguations are listed in toponyms-disambiguated.txt.

 

 

event-geoparsing-corpus.txt Notes:

Each document inside the corpus is separated by ===.

Each sentence within a document is separated by an empty line.

A regular line has four elements (word/POS tag/Event/Argument), e.g.:

- Kerabat/NNP/O/O 

- RSCM/NN/B-ORG/Hospital-Arg

For LOC entities, there are two additional fields: (latitude, longitude) / <administrative_level>, e.g.:

Jakarta/NNP/B-PLOC/Published-Arg/(-6.197602429787846, 106.83139222722116)/1

toponyms-disambiguated.txt notes:

Contains all toponym (LOC) entities from the event extraction corpus. Each entry starts with * (star symbol) and is followed by its potential candidate referents.

The correct disambiguation starts with -->; the other candidates start with --.

Every document is also separated by ===
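A small parser for the line format described above; the field names in the returned dict are our own, not part of the corpus specification:

```python
def parse_token(line):
    """Parse one corpus line: word/POS/Event/Argument, with optional
    (lat, lon)/<administrative_level> fields for LOC entities."""
    parts = line.strip().split("/")
    token = {"word": parts[0], "pos": parts[1],
             "event": parts[2], "argument": parts[3]}
    if len(parts) == 6:  # LOC entity with coordinates and admin level
        lat, lon = parts[4].strip("()").split(",")
        token["coords"] = (float(lat), float(lon))
        token["admin_level"] = int(parts[5])
    return token

print(parse_token("RSCM/NN/B-ORG/Hospital-Arg"))
print(parse_token(
    "Jakarta/NNP/B-PLOC/Published-Arg/(-6.197602429787846, 106.83139222722116)/1"))
```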

 

Full list of argument roles for each event subtype (role: description):

Subtype: FIRE-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Published-Arg: City of publication (LOC)
3. DeathVictim-Arg: How many people killed (ARG)
4. WoundVictim-Arg: How many people wounded (ARG)
5. Place-Arg: Geopolitical entities of the place (LOC)
6. Facility-Arg: Building related (ORG)
7. Officer-Arg: Officer related (ORG)
8. Time-Arg: Time of the event (ARG)
9. Street-Arg: Street of the place (ARG)
10. Official-Arg: Official related or official statement (ORG)
11. Hospital-Arg: Hospital related (ORG)
12. HouseBurnt-Arg: Number of houses burnt (ARG)
13. AffectedRT-Arg: Number of RTs affected (ARG)
14. DispatchedTrucks-Arg: Number of fire trucks dispatched (ARG)
15. AffectedFamily-Arg: Number of families affected (ARG)
16. MonetaryLoss-Arg: Loss of money (ARG)

Subtype: ACCIDENT-EVENT

1. Reporter-Arg: News company (ARG)
2. Published-Arg: City of publication (LOC)
3. Point-Arg: Location offset of the accident (ARG)
4. Vehicle-Arg: Type of vehicle (ARG)
5. Plate-Arg: License plate (ARG)
6. Place-Arg: Place of accident (LOC)
7. Hospital-Arg: Hospital related (ORG)
8. From-Arg: Origin of collided vehicle (LOC)
9. To-Arg: Destination of collided vehicle (LOC)
10. Time-Arg: Time of the event (ARG)
11. AffectedVehicle-Arg: Number of vehicles involved (ARG)
12. MonetaryLoss-Arg: Loss of money (ARG)

Subtype: QUAKE-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Duration-Arg: Duration of the quake (ARG)
3. Central-Arg: Center of the quake (ARG)
4. Depth-Arg: Depth of the quake (LOC)
5. Hospital-Arg: Hospital related (ORG)
6. Time-Arg: Time of the event (ARG)
7. AffectedFacility-Arg: Number of affected facilities (ARG)
8. AffectedHouse-Arg: Number of affected houses (ARG)
9. AffectedPeople-Arg: Number of affected people (ARG)
10. Strength-Arg: Quake's reported strength (ARG)

Subtype: FLOOD-EVENT

1. Reporter-Arg: News outlet (ARG)
2. Cause-Arg: Cause of the flood (ARG, or EVE of RAIN-EVENT or LANDSLIDE-EVENT)
3. Height-Arg: Height of water (ARG)
4. Place-Arg: Place of flood (LOC)
5. AffectedDistrict-Arg: Number of districts affected
6. AffectedHouse-Arg: Number of houses affected
7. AffectedVillage-Arg: Number of villages affected
8. AffectedFamily-Arg: Number of families affected
9. AffectedCity-Arg: Number of cities affected
10. AffectedPeople-Arg: Number of people affected
11. Hospital-Arg: Hospital related (ORG)
12. Time-Arg: Time of the event (ARG)
13. Facility-Arg: Facility affected by flood
14. AffectedFields-Arg: Area of fields (farms) (ARG)


Synthetic Aperture Radar (SAR) images can be highly informative owing to their resolution and availability. However, removing speckle noise from these images requires several pre-processing steps. In recent years, deep learning-based techniques have brought significant improvements in denoising and image restoration, but further research has been hampered by the lack of data suitable for training deep neural network-based systems. With this paper, we propose a standard synthetic dataset for training speckle reduction algorithms.

Instructions: 

In Virtual SAR we have infused images with varying levels of noise, which helps improve the accuracy of blind denoising. The holdout set can be created using images from the USC-SIPI Aerials database and the provided MATLAB script (preprocess_holdout.m), tested on MATLAB R2019b.
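The standard multiplicative speckle model can be used to infuse clean images with varying noise levels; a minimal sketch, noting that the dataset's own generation procedure may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_speckle(image, looks=4):
    """Multiply an intensity image by fully developed speckle:
    gamma-distributed noise with mean 1 and variance 1/looks
    (standard model; the dataset's generator may differ)."""
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

clean = np.full((64, 64), 100.0)
noisy = add_speckle(clean, looks=1)   # single-look: strongest speckle
print(noisy.shape)                    # (64, 64)
```

Lower `looks` values give noisier images, which is one simple way to produce the varying noise levels mentioned above.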

 

Use for research purposes is free. If you use this dataset, please cite the following paper along with the dataset: Virtual SAR: A Synthetic Dataset for Deep Learning based Speckle Noise Reduction Algorithms.


This dataset includes binary files from radiometric measurement sessions in 2018-2019. Measurements of microwave descending radiation were performed in the 18-27.2 GHz water-vapor resonance absorption band. The observations were carried out with a special multichannel (47-channel) microwave radiometer-spectrometer developed at the Special Design Bureau of the Kotel'nikov Institute of Radioengineering and Electronics of RAS. The radiometer was located in Fryazino, Moscow Region, Russian Federation.


This dataset accompanies a paper titled "Detection of Metallic Objects in Mineralised Soil Using Magnetic Induction Spectroscopy". 

Instructions: 

Every sweep of the detector over an object is contained in a separate file, with the following file naming convention: ___.h5, where the final field is a globally unique identifier for the file. Each file is an HDF5 file generated using Pandas and contains a single DataFrame with 8 columns. The first three correspond to the x-, y- and z-positions (in cm) relative to an arbitrary datum, which stays constant for all sweeps over all objects in a given combination of soil and depth. The other 5 columns contain the complex transimpedance values measured by the MIS system, after calibration against the ferrite piece. Due to experimental constraints, there is no data for one of the rocks buried at 10 cm depth in "Rocky" soil.
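In practice each sweep file would be loaded with pandas.read_hdf; the mock below only illustrates splitting the assumed 8-column layout into positions and transimpedance channels (the column names are invented, the real files may use different ones):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# A real file would be read with: df = pd.read_hdf("<file>.h5")
n = 5
df = pd.DataFrame({
    "x": np.linspace(0.0, 40.0, n),   # positions in cm (invented names)
    "y": np.zeros(n),
    "z": np.full(n, 5.0),
    **{f"f{k}": rng.normal(size=n) + 1j * rng.normal(size=n)
       for k in range(1, 6)},          # 5 complex transimpedance channels
})

positions = df[["x", "y", "z"]].to_numpy()     # (n, 3)
transimpedance = df.iloc[:, 3:].to_numpy()     # (n, 5), complex
print(positions.shape, transimpedance.shape)   # (5, 3) (5, 5)
```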


We collected experimental field data with a prototype open-ended waveguide sensor (WR975) operating between 600 MHz and 1300 MHz. With our prototype sensor we collected reflection coefficient measurements at a total of 50 unique 1-ft^2 sites across two separate established cranberry beds in central Wisconsin. The sensor was placed directly on top of cranberry-crop bed canopies, and we obtained 12 independent reflection coefficient measurements (each defined as one S11 sweep across frequency) at each 1-ft^2 site by randomly rotating and/or translating the sensor aperture above each site.


PS_DISP is a trial bundled script written in bash shell and Matlab code. The script requires Generic Mapping Tools (GMT) and Matlab and runs under the Linux operating system. The purpose of PS_DISP is to generate 2D or 3D displacement vectors from InSAR data in both ascending and descending orbits, from either the mean velocity or time-series data. The 1.5 beta version includes computation of the 3D field using an optimized approach with variance component estimation (VCE).
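The core idea of combining ascending and descending line-of-sight (LOS) observations can be sketched as a small least-squares problem; the geometry, sign conventions, and angle values below are illustrative only, not those used by PS_DISP:

```python
import numpy as np

# Illustrative geometry: incidence angle and satellite headings.
inc = np.deg2rad(34.0)
heading_asc, heading_dsc = np.deg2rad(-10.0), np.deg2rad(190.0)

def los_unit_vector(heading, incidence):
    # Projection of an (east, up) displacement onto the radar LOS
    # (sign conventions vary between processors).
    east = -np.sin(incidence) * np.cos(heading)
    up = np.cos(incidence)
    return np.array([east, up])

# Design matrix: one LOS row per viewing geometry.
A = np.vstack([los_unit_vector(heading_asc, inc),
               los_unit_vector(heading_dsc, inc)])

d_true = np.array([0.02, -0.05])          # true east, up displacement (m)
los = A @ d_true                          # simulated LOS observations
d_est, *_ = np.linalg.lstsq(A, los, rcond=None)
print(np.round(d_est, 4))                 # recovers [0.02, -0.05]
```

With more than two geometries (or time-series stacks), the same least-squares form extends naturally, and weighting schemes such as variance component estimation adjust the relative weight of each observation.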

