With advances in sensor technology, huge volumes of data are being collected from various satellites, making the task of target-based data retrieval and acquisition increasingly challenging. Existing satellites scan vast, overlapping regions of the Earth using a range of sensing techniques, such as multi-spectral, hyperspectral, Synthetic Aperture Radar (SAR), video, and compressed sensing, to name a few.

Instructions: 

A Zero-Shot Sketch-based Inter-Modal Object Retrieval Scheme for Remote Sensing Images

Email the authors at ushasi@iitb.ac.in with any queries.

 

Classes in this dataset:

Airplane

Baseball Diamond

Buildings

Freeway

Golf Course

Harbor

Intersection

Mobile home park

Overpass

Parking lot

River

Runway

Storage tank

Tennis court

Paper

The paper is also available on arXiv: A Zero-Shot Sketch-based Inter-Modal Object Retrieval Scheme for Remote Sensing Images

 

If this work is helpful to you, please consider citing the authors:

 

```
@InProceedings{Chaudhuri_2020_EoC,
  author    = {Chaudhuri, Ushasi and Banerjee, Biplab and Bhattacharya, Avik and Datcu, Mihai},
  title     = {A Zero-Shot Sketch-based Inter-Modal Object Retrieval Scheme for Remote Sensing Images},
  booktitle = {http://arxiv.org/abs/2008.05225},
  month     = {Aug},
  year      = {2020}
}
```

 


As part of the 2018 IEEE GRSS Data Fusion Contest, the Hyperspectral Image Analysis Laboratory and the National Center for Airborne Laser Mapping (NCALM) at the University of Houston are pleased to release a unique multi-sensor optical geospatial dataset representing a challenging urban land-cover and land-use classification task. The data were acquired by NCALM over the University of Houston campus and its neighborhood on February 16, 2017, between 16:31 and 18:18 GMT.

Instructions: 

Data files, as well as training and testing ground truth, are provided in the enclosed zip file.


Over the last decades, Earth Observation has brought new perspectives to fields ranging from the geosciences to human activity monitoring. As more data became available, artificial intelligence techniques led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can be used for problems that cannot be tackled through optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with large cloud cover.

Instructions: 

The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, Middle-East, and Australia. Each time series is located in a given folder named with the sequence ID (0001... 0336).

Two JSON files, S1list.json and S2list.json, describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE: boolean) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated.
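A minimal sketch of how this metadata might be traversed; only FLOODING and FULL-DATA-COVERAGE are field names documented above, while the other keys are assumptions to be checked against the actual JSON schema:

```python
import json
from pathlib import Path

# Read the Sentinel-1 metadata of one sequence.
seq = Path("0001")
with open(seq / "S1list.json") as f:
    s1 = json.load(f)

for entry in s1.get("images", []):        # "images" is a hypothetical key
    print(entry["date"],                   # acquisition date (hypothetical key)
          entry["FLOODING"],               # boolean flood label
          entry["FULL-DATA-COVERAGE"],     # fully vs. partially imaged
          entry.get("orbit"))              # ASCENDING/DESCENDING (SAR only)
```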

The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task [1] and are provided with Level 2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.

The Sentinel-1 images were added to the dataset. The images are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarization channels.
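For instance, one SAR acquisition could be stacked into a two-channel array as sketched below, assuming the rasterio library; the file names are hypothetical, and the real prefixes come from S1list.json:

```python
import numpy as np
import rasterio  # pip install rasterio

# Stack the VV and VH rasters of one acquisition into a (2, H, W) array.
with rasterio.open("0001/acquisition_VV.tif") as vv, \
     rasterio.open("0001/acquisition_VH.tif") as vh:
    sar = np.stack([vv.read(1), vh.read(1)])
print(sar.shape)
```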

The original dataset was split into 269 training sequences and 68 test sequences. Here, all sequences are provided in the same folder.

 

To use this dataset please cite the following papers:

Flood Detection in Time Series of Optical and SAR Images, C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, pp. 1343-1346

The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop

 

This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.

[1] The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop


Depths to subsurface anomalies are of primary interest in all applications of magnetic methods of geophysical prospecting. For any correct interpretation of subsurface geologic structure, the depths to the geologic features of interest are more valuable than all other properties.

Instructions: 

The Neural Network Pattern Recognition workflow helps select the appropriate datasets, create and train the network, and evaluate its performance using cross-entropy loss and confusion matrices, in MATLAB together with Python. The network is a two-layer feed-forward network for pattern recognition, with six inputs (the SI values), one hidden layer, and a softmax output layer. The method classifies the attribute vectors well when a sufficient number of hidden neurons is selected. In this study, the six input SI values were fed into a hidden layer of one hundred (100) neurons, whose weights were combined with a six-neuron output layer to generate the six final outputs representing the depths for each of the SI values.
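A minimal Python sketch of the described architecture, assuming scikit-learn and placeholder data (the actual study uses MATLAB's pattern recognition workflow; all names and data below are illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: each sample holds the six SI values; each label is
# one of the six depth classes. Replace with the actual survey data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))      # six inputs (SI values)
y = rng.integers(0, 6, size=500)   # six output classes

# Two-layer feed-forward network: one hidden layer of 100 neurons and a
# softmax output layer; MLPClassifier minimizes cross-entropy loss for
# multi-class problems by default.
net = MLPClassifier(hidden_layer_sizes=(100,), activation="tanh",
                    max_iter=1000, random_state=0)
net.fit(X, y)
print(net.score(X, y))             # training accuracy
```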


This is a collection of paired thermal and visible ear images. The images in this dataset were acquired in illumination conditions ranging from 2 to 10,700 lux. There are 2,200 images in total, of which 1,100 are thermal images and the other 1,100 are their corresponding visible images. The images consist of the left and right ear images of 55 subjects, captured under 5 illumination conditions per subject. This dataset was developed for illumination-invariant ear recognition studies. In addition, it can also be useful for thermal and visible image fusion research.

 

Instructions: 

Any work made public, in whatever form, based directly or indirectly on any part of the DATABASE must include the following reference:

Syed Zainal Ariffin, S. M. Z., Jamil, N., & Megat Abdul Rahman, P. N. (2016). DIAST Variability Illuminated Thermal and Visible Ear Images Datasets. In Proceedings of the 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2016. DOI: 10.1109/SPA.2016.7763611

 


This dataset contains paired thermal-visual images collected over 1.5 years from different locations in Chitrakoot and Prayagraj, India. The images can be broadly classified into greenery, urban, historical buildings, and crowd data.

The crowd data was collected from the Maha Kumbh Mela 2019, Prayagraj, which is the largest religious fair in the world and is held every 6 years.

 

Instructions: 

The images are organized according to the thermal imager used to capture them.

The SONEL thermal images are inside register_sonel.

The FLIR images are in register_flir and register_flir_old. There are 2 image zip files because FLIR thermal imagers reuse the image names after a certain limit.

The unregistered images are kept inside each base zip in unreg folders.

 

The work associated with this database is a paper on thermal image colorization that has been submitted to IEEE for consideration and is currently available as a preprint on arXiv. It details the registration method, the overall logic behind the creation of this database, the resizing factors, and the reason why there are unregistered images.

We ask that you cite this work when using this database in your own work.

A Novel Registration & Colorization Technique for Thermal to Cross Domain Colorized Images 

 

If you find any problem with the data in this dataset (missing images, wrong names, superfluous Python files, etc.), please let us know and we will try to correct it.

 

The naming convention is as follows (a file-pairing sketch follows the list):

FLIR

- Registered images are named <name>.jpg and <name>_color.png, the png file being the registered optical image.

- The raw files are named FLIR<#number>.jpg and FLIR<#number+1>.jpg, where the first file is the thermal image.

- The unreg_flir folder contains just the raw files.

SONEL

- Registered images are named <name>.jpg and <name>_color.png, the png file being the registered optical image.

- The raw files are named IRI_<name>.jpg and VIS_<name>.jpg, where the IRI file is the thermal image and the VIS file is the visible image.

- The unreg folder contains just the raw files.
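A minimal sketch of how registered pairs might be collected, assuming the naming convention above (the folder name register_sonel comes from the dataset description; everything else is illustrative):

```python
from pathlib import Path

# Pair each registered thermal image (<name>.jpg) with its optical
# counterpart (<name>_color.png), per the convention described above.
root = Path("register_sonel")

pairs = []
for thermal in sorted(root.glob("*.jpg")):
    optical = thermal.with_name(thermal.stem + "_color.png")
    if optical.exists():              # skip thermal images without a match
        pairs.append((thermal, optical))

print(f"{len(pairs)} thermal/optical pairs found")
```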


The 2020 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS) and the Technical University of Munich, aims to promote research in large-scale land cover mapping based on weakly supervised learning from globally available multimodal satellite data. The task is to train a machine learning model for global land cover mapping based on weakly annotated samples.


The dataset contains high-resolution microscopy images and confocal spectra of semiconducting single-wall carbon nanotubes. Carbon nanotubes allow electronic components to be scaled down to the nanoscale. There is initial evidence from Monte Carlo simulations that microscopy images with high digital resolution carry energy information in the Bessel wave pattern visible in these images. In this dataset, images from silicon and InGaAs cameras, as well as spectra, give valuable insights into the spectroscopic properties of these single-photon emitters.

Instructions: 

The dataset is generated from the measurement data with Docker containers. The measured data is stored as Igor Binary Waves; this format can be read with a custom reader and processed with various tools, as sketched below.
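A minimal reading sketch, assuming the third-party igor Python package (the file name is hypothetical, and the exact dictionary layout should be checked against the package documentation):

```python
from igor import binarywave  # pip install igor

# Load one Igor Binary Wave file and extract the raw wave data.
ibw = binarywave.load("spectrum.ibw")
wave = ibw["wave"]["wData"]   # numpy array with the measured wave
print(wave.shape, wave.dtype)
```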

Processing into the various output formats is applied automatically using Docker containers.

 

The current development status and dataset description are available and will be updated at

https://gitlab.com/ukos-git/nanotubes

