Wildfires are among the deadliest and most dangerous natural disasters in the world. They burn vast areas of forest and endanger the lives of humans and animals. Predicting fire behavior can help firefighters manage and schedule responses to future incidents and also reduces the risks firefighters face. Recent advances in aerial imaging show that it can be beneficial in wildfire studies.

Instructions: 

The aerial pile burn detection dataset consists of several repositories. The first is a raw video recorded with the Zenmuse X4S camera, in MP4 format. The video is 966 seconds long at 29 frames per second (FPS), and the repository is 1.2 GB. This first video was used for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The second is also a raw video recorded with the Zenmuse X4S camera; it is likewise 966 seconds long at 29 FPS, and the repository is 503 MB. This video shows the behavior of one pile from the start of burning. Both videos have a resolution of 1280x720.
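As a quick sanity check, the nominal frame counts implied by the stated durations and frame rate can be estimated (these are approximations only; the exact counts depend on the video encoder):

```python
# Approximate frame counts implied by the stated durations and FPS.
# These are estimates; actual counts depend on the encoder.
fps = 29
duration_video1_s = 966  # first Zenmuse X4S video
duration_video2_s = 966  # second Zenmuse X4S video

approx_frames_video1 = duration_video1_s * fps
approx_frames_video2 = duration_video2_s * fps

print(approx_frames_video1)  # 28014
```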

The third video is 89 seconds of WhiteHot heatmap footage from the thermal camera; this repository is 45 MB. The fourth is 305 seconds of GreenHot heatmap footage with a size of 153 MB. The fifth repository is 25 minutes of fusion heatmap footage with a size of 2.83 GB. All three thermal videos were recorded with the FLIR Vue Pro R thermal camera at 30 FPS and a resolution of 640x512, in MOV format.

The sixth video is 17 minutes long, recorded with the DJI Phantom 3 camera, and is used for the "Fire-vs-NoFire" image classification problem (test dataset). The FPS is 30, the size is 32 GB, the resolution is 3840x2160, and the format is MOV.

The seventh repository contains 39,375 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The size of this repository is 1.3 GB and the format is JPEG.

The eighth repository contains 8,617 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (test dataset). The size of this repository is 301 MB and the format is JPEG.

The ninth repository contains 2,003 fire frames with a resolution of 3840x2160 for the fire segmentation problem (train/val/test dataset). The size of this repository is 5.3 GB and the format is JPEG.

The last repository contains 2,003 ground-truth mask frames for the fire segmentation problem. Each mask has a resolution of 3840x2160. The size of this repository is 23.4 MB.
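Since the last two repositories pair fire frames with ground-truth masks, a typical evaluation step is computing intersection-over-union between a predicted mask and the ground truth. A minimal sketch with hypothetical flattened binary masks (this is illustrative code, not part of the dataset):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two flattened binary masks (0/1 values)."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return intersection / union if union else 0.0

# Hypothetical 2x2 masks flattened to lists: they overlap on one pixel.
print(iou([1, 1, 0, 0], [0, 1, 1, 0]))  # ~0.333
```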

The published article is available here:

https://www.sciencedirect.com/science/article/pii/S1389128621001201

The preprint article of this dataset is available here:

https://arxiv.org/pdf/2012.14036.pdf

For more information, please see the table at: 

https://github.com/AlirezaShamsoshoara/Fire-Detection-UAV-Aerial-Image-Classification-Segmentation-UnmannedAerialVehicle

To find other projects and articles in our group:

https://www.cefns.nau.edu/~fa334/


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Sugarcane vegetation on path-loss between CC2650 and CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)".


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy vegetation on path-loss between CC2650 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for Paddy rice crop monitoring over the period 03/07/2019 to 18/11/2019.


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy Rice vegetation on received signal strength between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for Paddy Rice crop monitoring over the period 01/07/2020 to 03/11/2020.


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Millet vegetation on path-loss between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for millet crop monitoring over the period 03/06/2020 to 04/10/2020.


This dataset consists of orthorectified aerial photographs, LiDAR-derived digital elevation models, and segmentation maps with 10 classes, acquired through the open data program of the German state North Rhine-Westphalia (https://www.opengeodata.nrw.de/produkte/) and refined with OpenStreetMap. Please check the license information (http://www.govdata.de/dl-de/by-2-0).

Instructions: 

Dataset description

The data was mostly acquired over urban areas in North Rhine-Westphalia, Germany. Since the acquisition dates of the aerial photographs and LiDAR do not match exactly, there can be discrepancies in what they show, e.g., trees change or lose their leaves in autumn. In our experience, these differences are not drastic but should be kept in mind.

We have included two Python scripts. plot_examples.py creates the example image used on this website. calc_and_plot_stats.py calculates and plots the class statistics. Furthermore, we published the code to create the dataset at https://github.com/gbaier/geonrw, which makes it easy to extend the dataset with other areas in North-Rhine Westphalia. The repository also contains a PyTorch data loader.

This multimodal dataset should be useful for a variety of tasks, such as image segmentation using multiple inputs, height estimation from the aerial photographs, or semantic image synthesis.

Organization

Similar to the original source of the data (https://www.opengeodata.nrw.de/produkte/geobasis/lbi/dop/dop_jp2_f10_paketiert/), we organize all samples by the city over which they were acquired. Their filenames, e.g., 345_5668_rgb.jp2, consist of the UTM zone 32N coordinates and the data type (rgb, dem, or seg for land cover).
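The naming convention can be parsed directly with string operations. A small sketch (the split logic is an assumption based on the example filename above):

```python
def parse_geonrw_filename(filename):
    """Split a filename such as '345_5668_rgb.jp2' into its
    (easting, northing, datatype) parts per the naming scheme above."""
    stem = filename.rsplit(".", 1)[0]   # drop the extension
    easting, northing, datatype = stem.split("_")
    return int(easting), int(northing), datatype

print(parse_geonrw_filename("345_5668_rgb.jp2"))  # (345, 5668, 'rgb')
```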

File formats

All data is geocoded and can be opened using QGIS (https://www.qgis.org/). The aerial photographs are stored as JPEG2000 files, the land cover maps and digital elevation models both as GeoTIFFs. The accompanying scripts show how to read the data into Python.


The simulated InSAR building dataset contains 312 simulated SAR image pairs generated from 39 different building models, each simulated at 8 viewing angles. The training set contains 216 samples and the test set 96. Each simulated InSAR sample has three channels: the master SAR image, the slave SAR image, and the interferometric phase image. This dataset serves the CVCMFF Net for building semantic segmentation of InSAR images.
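The interferometric phase channel is, in general, derived from the phase difference between the master and slave images. A minimal per-pixel sketch using Python complex numbers (an illustration of the standard formula, not the dataset's own simulation code):

```python
import cmath

def interferometric_phase(master, slave):
    """Per-pixel interferometric phase: arg(m * conj(s)) for complex pixels."""
    return [cmath.phase(m * s.conjugate()) for m, s in zip(master, slave)]

# Two hypothetical single-pixel images 90 degrees apart in phase.
phase = interferometric_phase([1 + 0j], [0 + 1j])
print(phase[0])  # -pi/2
```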


The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial, and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical factors limiting further technology development, the benchmarking of new techniques, and the reproducibility of research results. To overcome this problem, we present a freely available dataset suitable for testing control, navigation, sensor-processing algorithms, and other tasks.

Instructions: 

This repository contains the AURORA dataset, a multi-sensor dataset for robotic ocean exploration.

It is accompanied by the report "AURORA, A multi sensor dataset for robotic ocean exploration", by Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J. Bett, Daniel Jones, Veerle Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail and Andrea Munafo.

Exemplar python code is provided at https://github.com/noc-mars/aurora.

 

The dataset provided in this repository includes data collected during cruise James Cook 125 (JC125) of the National Oceanography Centre, using the Autonomous Underwater Vehicle Autosub 6000. It is composed of two AUV missions: M86 and M87.

  • M86 contains a sample of multi-beam echosounder data in .all format. It also contains CTD and navigation data in .csv format.

  • M87 contains a sample of the camera and side-scan sonar data. The camera data comprise 8 of the 45,320 images in the original dataset, provided in .raw format (pixels are ordered in a Bayer pattern); each image is 2448x2048. The side-scan sonar folder contains a one-ping sample of side-scan data in .xtf format.

  • The AUV navigation file is provided in .csv form as part of the data available for each mission.

 

The dataset is approximately 200 GB in size. A smaller sample of about 200 MB is provided at https://github.com/noc-mars/aurora_dataset_sample.

Each individual group of data (CTD, multibeam, side scan sonar, vertical camera) for each mission (M86, M87) is also available to be downloaded as a separate file. 
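The navigation .csv files can be read with the standard library. A sketch with hypothetical column names (the actual header should be checked in the downloaded files):

```python
import csv
import io

# Hypothetical navigation records; the real column names may differ.
sample = (
    "timestamp,latitude,longitude,depth_m\n"
    "2016-05-01T00:00:00,48.10,-9.50,4800\n"
    "2016-05-01T00:00:10,48.11,-9.51,4795\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
depths = [float(r["depth_m"]) for r in rows]
print(min(depths), max(depths))  # 4795.0 4800.0
```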


The files here support the analysis presented in the IEEE Transactions on Geoscience and Remote Sensing paper "Snow Property Inversion from Remote Sensing (SPIReS): A Generalized Multispectral Unmixing Approach with Examples from MODIS and Landsat 8 OLI." Spectral mixture analysis has a long history in snow mapping, especially where mixed pixels prevail. Using multiple spectral bands rather than band ratios or band indices, retrievals of the snow properties that affect albedo lead to more accurate estimates than the widely used age-based models of albedo evolution.

Instructions: 

These HDF5 files contain snow cover over the Sierra Nevada, USA, for water years 2001-2019, derived using the Snow Property Inversion from Remote Sensing (SPIReS) approach. Each file covers one water year (October through September). The files are stored with block compression, so individual days can be read without reading the whole file. The method is described in E.H. Bair, T. Stillinger, and J. Dozier, "Snow Property Inversion from Remote Sensing (SPIReS): A generalized multispectral unmixing approach with examples from MODIS and Landsat 8 OLI," IEEE Trans. Geosci. Remote Sens., 2020 (manuscript number TGRS-2020-02003). Source code is at https://github.com/edwardbair/SPIRES.

The projection is the Albers equal-area conic (also called the California Teale projection) with the WGS84 datum and 500 m square pixels. The standard meridian for the projection is 120 W; the standard parallels are 34 N and 40.5 N; the False Northing is -40,000,000.

The h5 files can be read with several software packages; we use MATLAB. They contain MATLAB date numbers, ISO dates in the format YYYYDDD, geographic information, and spacetime cubes of snow fraction, raw (unadjusted) snow fraction, grain size (um), and dust (ppmw). The spacetime cubes have one slice per day, beginning on October 1 and ending on September 30.
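The ISO dates stored in YYYYDDD (year plus day-of-year) form can be converted with the standard library; for example:

```python
from datetime import datetime

def yyyyddd_to_date(value):
    """Convert an ISO date in YYYYDDD form (e.g. 2001274) to a calendar date."""
    return datetime.strptime(str(value), "%Y%j").date()

# Day 274 of 2001 is October 1, the start of a water year.
print(yyyyddd_to_date(2001274))  # 2001-10-01
```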


In recent decades, Earth observation has brought many new perspectives, from geosciences to human-activity monitoring. As more data became available, artificial intelligence techniques led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can be used for problems that cannot be tackled with optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with large cloud cover.

Instructions: 

The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, the Middle East, and Australia. Each time series is located in a folder named with its sequence ID (0001... 0336).

Two JSON files, S1list.json and S2list.json, describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE: boolean) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated.
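Filtering the sequence metadata for flooded acquisitions is then straightforward. A sketch with a hypothetical, simplified S1list.json structure (the real key layout should be checked against the downloaded files):

```python
import json

# Hypothetical, simplified metadata; the real files may nest images differently.
raw = """
{
  "folder": "0001",
  "count": 2,
  "images": [
    {"date": "2019-01-05", "FLOODING": true, "FULL-DATA-COVERAGE": true,
     "orbit": "ASCENDING"},
    {"date": "2019-01-17", "FLOODING": false, "FULL-DATA-COVERAGE": true,
     "orbit": "DESCENDING"}
  ]
}
"""

meta = json.loads(raw)
flooded_dates = [img["date"] for img in meta["images"] if img["FLOODING"]]
print(flooded_dates)  # ['2019-01-05']
```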

The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task [1] and are provided with Level 2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.

The Sentinel-1 images were added to the dataset. They are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarimetric channels.

The original dataset was split into 269 training sequences and 68 test sequences; here, all sequences are in the same folder.

 

To use this dataset please cite the following papers:

C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, "Flood Detection in Time Series of Optical and SAR Images," ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, pp. 1343-1346.

B. Bischke, P. Helber, C. Schulze, V. Srinivasan, A. Dengel, and D. Borth, "The Multimedia Satellite Task at MediaEval 2019," in Proc. of the MediaEval 2019 Workshop, 2019.

 

This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.

[1] B. Bischke, P. Helber, C. Schulze, V. Srinivasan, A. Dengel, and D. Borth, "The Multimedia Satellite Task at MediaEval 2019," in Proc. of the MediaEval 2019 Workshop, 2019.

