The data relate to a study that captured deciduous broadleaf Bidirectional Scattering Distribution Functions (BSDFs) from the visible through the shortwave-infrared (SWIR) spectral region (350-2500 nm) and modeled the BSDF so it can be extended to any illumination angle, viewing zenith angle, or azimuthal angle. Measurements were made on three species of large trees: Norway maple (Acer platanoides), American sweetgum (Liquidambar styraciflua), and northern red oak (Quercus rubra).
There are three file types in this database. The first is .raw: ASCII files of the BSDF estimated from measurements (note that the measurements are in fact bi-conical). The second is .py: Python files for reading, plotting, and fitting the data to a microfacet model. The last is .txt: files of the previously found microfacet fit parameters.
This dataset is compiled from five years of observations from the Global Precipitation Measurement (GPM) core observatory Microwave Imager (GMI) and Dual-frequency Precipitation Radar (DPR). Retrieved emissivities and surface backscatter cross sections are gridded at quarter-degree, monthly resolution, separately for non-snow-covered land, snow-covered land, and sea ice.
These data are stored as NumPy (.npy) files. Sample reading and plotting code is provided in the accompanying Jupyter notebook.
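Loading one of the grids needs nothing beyond NumPy. A minimal sketch (the file name below is hypothetical and the array is a stand-in; the real file names ship with the dataset):

```python
import numpy as np

# A quarter-degree global grid has 180/0.25 x 360/0.25 = 720 x 1440 cells.
# Create a stand-in array here; in real use you would np.load one of the
# dataset's .npy files directly.
np.save("emissivity_demo.npy", np.full((720, 1440), np.nan, dtype=np.float32))

emis = np.load("emissivity_demo.npy")
print(emis.shape)  # (720, 1440)
```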
Here we present OpenSARUrban: a Sentinel-1 dataset dedicated to the content-related interpretation of urban SAR images, including a well-defined hierarchical annotation scheme, the data collection, well-established procedures for dataset compilation and organization, as well as properties, visualizations, and applications of this dataset.
The Contest: Goals and Organization
The 2019 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS), the Johns Hopkins University (JHU), and the Intelligence Advanced Research Projects Activity (IARPA), aimed to promote research in semantic 3D reconstruction and stereo using machine intelligence and deep learning applied to satellite images.
- Participants in the benchmark are expected to submit:
- 2D semantic maps and nDSM/disparity/DSM maps in raster format (similar to the tif file of the training set) for Tracks 1, 2, and 3
- 3D semantic predictions in ASCII text files (similar to the text file of the training set) for Track 4
These results will be submitted to the Codalab competition websites for evaluation:
- Ranking among the participants will be based on:
- mIoU-3 for Tracks 1, 2, and 3
- mIoU for Track 4
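As an illustration of the ranking metric, mean intersection-over-union can be sketched in a few lines of NumPy. This is only a sketch; the official definitions, including the mIoU-3 variant, are given on the CodaLab competition pages:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over all classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2-class example: class 0 IoU = 1/2, class 1 IoU = 2/3.
score = mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), num_classes=2)
```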
The Contest: Goals and Organization
The 2017 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.
The 2017 Data Fusion Contest consists of a classification benchmark. The task is classification of land use (more precisely, Local Climate Zones, or LCZs) in various urban environments. Several cities around the world have been selected to test the ability of both LCZ prediction and domain adaptation. Input data are multi-temporal, multi-source, and multi-mode (image and semantic layers). Five cities are considered for training: Berlin, Hong Kong, Paris, Rome, and Sao Paulo.
Each city folder contains:
- grid/: sampling grid
- landsat_8/: Landsat 8 images at various dates (resampled at 100 m resolution, split into selected bands)
- lcz/: Local Climate Zones as rasters (see below)
- osm_raster/: rasters with areas (buildings, land use, water) derived from OpenStreetMap layers
- osm_vector/: vector data with OpenStreetMap zones and lines
- sentinel_2/: Sentinel-2 image (resampled at 100 m resolution, split into selected bands)
Local Climate Zones
The lcz/ folder contains:
- `<city>_lcz_GT.tif`: the ground truth for Local Climate Zones, as a raster. It is single-band, in byte format. Pixel values range from 1 to 17 (the maximum number of classes); unclassified pixels have value 0.
- `<city>_lcz_col.tif`: a color, georeferenced LCZ map, for visualization convenience only.
Class numbers are the following. 10 urban LCZs corresponding to various built types:
- 1. Compact high-rise;
- 2. Compact midrise;
- 3. Compact low-rise;
- 4. Open high-rise;
- 5. Open midrise;
- 6. Open low-rise;
- 7. Lightweight low-rise;
- 8. Large low-rise;
- 9. Sparsely built;
- 10. Heavy industry.
7 rural LCZs corresponding to various land cover types:
- 11. Dense trees;
- 12. Scattered trees;
- 13. Bush and scrub;
- 14. Low plants;
- 15. Bare rock or paved;
- 16. Bare soil or sand;
- 17. Water
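For convenience, the class table above can be encoded as a lookup. This is a sketch only; the raster itself would be read with a GeoTIFF library such as GDAL or rasterio, and the small array below merely stands in for a window of pixel values:

```python
import numpy as np

# LCZ raster codes (0 = unclassified), matching the class list above.
LCZ_NAMES = {
    0: "Unclassified",
    1: "Compact high-rise", 2: "Compact midrise", 3: "Compact low-rise",
    4: "Open high-rise", 5: "Open midrise", 6: "Open low-rise",
    7: "Lightweight low-rise", 8: "Large low-rise", 9: "Sparsely built",
    10: "Heavy industry",
    11: "Dense trees", 12: "Scattered trees", 13: "Bush and scrub",
    14: "Low plants", 15: "Bare rock or paved", 16: "Bare soil or sand",
    17: "Water",
}

# Stand-in for a 2x2 window of <city>_lcz_GT.tif pixel values (byte format).
window = np.array([[0, 1], [14, 17]], dtype=np.uint8)
names = np.vectorize(LCZ_NAMES.get)(window)
```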
The 2017 IEEE GRSS Data Fusion Contest is organized by the Image Analysis and Data Fusion Technical Committee of the IEEE GRSS. Landsat 8 data available from the U.S. Geological Survey (https://www.usgs.gov/). OpenStreetMap Data © OpenStreetMap contributors, available under the Open Database Licence (http://www.openstreetmap.org/copyright). Original Copernicus Sentinel Data 2016 available from the European Space Agency (https://sentinel.esa.int). The Contest is being organized in collaboration with the WUDAPT (http://www.wudapt.org/) and GeoWIKI (http://geo-wiki.org/) initiatives. The IADF TC chairs would like to thank the organizers and the IEEE GRSS for continuously supporting the annual Data Fusion Contest through funding and resources.
The Data Fusion Contest 2016: Goals and Organization
The 2016 IEEE GRSS Data Fusion Contest, organized by the IEEE GRSS Image Analysis and Data Fusion Technical Committee, aimed at promoting progress on fusion and analysis methodologies for multisource remote sensing data.
New multi-source, multi-temporal data, including Very High Resolution (VHR) multi-temporal imagery and video from space, were released. First, VHR images (DEIMOS-2 standard products) acquired at two different dates, provided both before and after orthorectification.
After unzipping, each directory contains:
- original GeoTIFF for the panchromatic (VHR) and multispectral (4-band) images,
- quick-view images for both, in PNG format,
- capture parameters (RPC file).
The recent interest in using deep learning for seismic interpretation tasks, such as facies classification, has been facing a significant obstacle, namely the absence of large publicly available annotated datasets for training and testing models. As a result, researchers have often resorted to annotating their own training and testing data. However, different researchers may annotate different classes, or use different train and test splits.
# Basic Instructions for Usage
Make sure you have the following folder structure in the data directory after you unzip the file:
│ ├── test1_labels.npy
│ ├── test1_seismic.npy
│ ├── test2_labels.npy
│ └── test2_seismic.npy
The train and test data are in NumPy .npy format, ideally suited for Python. You can open these files in Python as follows:
import numpy as np
train_seismic = np.load('data/train/train_seismic.npy')
Make sure the testing data is used only once, after all models are trained. Using the test set multiple times effectively turns it into a validation set.
In addition to the processed data volumes, we also provide the fault planes, the raw horizons that were used to generate the data volumes, and the volumes before the split into training and testing.
1- Netherlands Offshore F3 block. [Online]. Available: https://opendtect.org/osr/pmwiki.php/Main/NetherlandsOffshoreF3BlockComplete4GB
2- Alaudah, Yazeed, et al. "A machine learning benchmark for facies classification." Interpretation 7.3 (2019): 1-51.
This dataset was developed at the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology as part of the ongoing activities at the Center for Energy and Geo-Processing (CeGP) at Georgia Tech and KFUPM. LANDMASS stands for “LArge North-Sea Dataset of Migrated Aggregated Seismic Structures”. This dataset was extracted from the North Sea F3 block under the Creative Commons license (CC BY-SA 3.0).
The LANDMASS database includes two different datasets. The first, denoted LANDMASS-1, contains 17,667 small patches of size 99x99 pixels: 9385 horizon patches, 5140 chaotic patches, 1251 fault patches, and 1891 salt-dome patches. The images in this dataset have values in the range [-1, 1]. The second dataset, denoted LANDMASS-2, contains 4000 images, each of size 150x300 pixels and normalized to values in the range [0, 1]; each of the four classes has 1000 images. Sample images from each dataset for each class can be found under the /samples folder.
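Since the two datasets use different value ranges, a LANDMASS-1 patch ([-1, 1]) can be mapped to the LANDMASS-2 convention ([0, 1]) with a single affine rescaling. A sketch using a random stand-in for a real patch:

```python
import numpy as np

rng = np.random.default_rng(0)
patch = rng.uniform(-1.0, 1.0, size=(99, 99))  # stand-in for a LANDMASS-1 patch

rescaled = (patch + 1.0) / 2.0  # maps [-1, 1] -> [0, 1]
```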