The dataset is a new high-quality dataset to advance sea-land segmentation with high-resolution remote sensing images. The dataset contains 1,726 hand-labeled and cropped Gaofen-1 images with an 8-meter spatial resolution and 4 bands, covering the various types of coastlines in Lianyungang, China.



Wildfires are among the deadliest and most dangerous natural disasters in the world. They burn vast expanses of forest and endanger the lives of humans and animals. Predicting fire behavior can help firefighters manage and schedule responses to future incidents, and it reduces the risks firefighters face. Recent advances in aerial imaging show that it can be beneficial in wildfire studies.

Instructions: 

The aerial pile burn detection dataset consists of several repositories. The first is a raw video recorded with the Zenmuse X4S camera, in MP4 format; it lasts 966 seconds at 29 frames per second (FPS) and occupies 1.2 GB. This video was used for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The second is also a raw video recorded with the Zenmuse X4S camera, 966 seconds long at 29 FPS, with a size of 503 MB; it shows the behavior of one pile from the start of burning. The resolution of both videos is 1280x720.

The third video is 89 seconds of WhiteHot heatmap footage from the thermal camera, with a size of 45 MB. The fourth is 305 seconds of GreenHot heatmap footage with a size of 153 MB. The fifth repository is 25 minutes of fusion heatmap footage with a size of 2.83 GB. All three thermal videos were recorded by the FLIR Vue Pro R thermal camera at 30 FPS and a resolution of 640x512, in MOV format.

The sixth video is 17 minutes long, from the DJI Phantom 3 camera, and is used for the "Fire-vs-NoFire" image classification problem (test dataset). The FPS is 30, the size is 32 GB, the resolution is 3840x2160, and the format is MOV.

The seventh repository holds 39,375 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The size of this repository is 1.3 GB and the format is JPEG.

The eighth repository holds 8,617 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (test dataset). The size of this repository is 301 MB and the format is JPEG.
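The 254x254 classification frames described above can be reproduced from full-resolution imagery with a simple resize. A minimal sketch using Pillow follows; the function name and interpolation choice are illustrative assumptions, not the authors' actual preprocessing pipeline:

```python
from PIL import Image

def to_classification_frame(src_path, dst_path, size=(254, 254)):
    """Resize one frame to the 254x254 shape used by the
    Fire-vs-NoFire classification repositories and save it as JPEG."""
    with Image.open(src_path) as im:
        im.convert("RGB").resize(size, Image.BILINEAR).save(dst_path, "JPEG")
```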

The ninth repository is 2,003 fire frames with a resolution of 3480x2160 for the fire segmentation problem (Train/Val/Test dataset). The size of this repository is 5.3 GB and the format is JPEG.

The last repository is 2,003 ground truth mask frames regarding the fire segmentation problem. The resolution of each mask is 3480x2160. The size of this repository is 23.4 MB.

The published article is available here:

https://www.sciencedirect.com/science/article/pii/S1389128621001201

The preprint article of this dataset is available here:

https://arxiv.org/pdf/2012.14036.pdf

For more information, please see the table at: 

https://github.com/AlirezaShamsoshoara/Fire-Detection-UAV-Aerial-Image-Classification-Segmentation-UnmannedAerialVehicle

To find other projects and articles in our group:

https://www.cefns.nau.edu/~fa334/


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Sugarcane vegetation on path-loss between CC2650 and CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)".


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy Rice vegetation on path-loss between CC2650 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for paddy rice crop monitoring, over the period 03/07/2019 to 18/11/2019.


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy Rice vegetation on received signal strength between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for paddy rice crop monitoring, over the period 01/07/2020 to 03/11/2020.


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Millet vegetation on path-loss between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for millet crop monitoring, over the period 03/06/2020 to 04/10/2020.


This dataset consists of orthorectified aerial photographs, LiDAR-derived digital elevation models, and segmentation maps with 10 classes, acquired through the open data program of the German state of North Rhine-Westphalia (https://www.opengeodata.nrw.de/produkte/) and refined with OpenStreetMap. Please check the license information (http://www.govdata.de/dl-de/by-2-0).

Instructions: 

Dataset description

The data was mostly acquired over urban areas in North Rhine-Westphalia, Germany. Since the acquisition dates for the aerial photographs and the LiDAR do not match exactly, there can be discrepancies in what they show and in which season, e.g., trees changing or losing their leaves in autumn. In our experience these differences are not drastic, but they should be kept in mind.

We have included two Python scripts. plot_examples.py creates the example image used on this website. calc_and_plot_stats.py calculates and plots the class statistics. Furthermore, we published the code to create the dataset at https://github.com/gbaier/geonrw, which makes it easy to extend the dataset with other areas in North-Rhine Westphalia. The repository also contains a PyTorch data loader.

This multimodal dataset should be useful for a variety of tasks, such as image segmentation using multiple inputs, height estimation from the aerial photographs, or semantic image synthesis.

Organization

Similar to the original source of the data (https://www.opengeodata.nrw.de/produkte/geobasis/lbi/dop/dop_jp2_f10_paketiert/), we organize all samples by the city they were acquired over. Each filename, e.g., 345_5668_rgb.jp2, consists of the UTM zone 32N coordinates and the data type (rgb, dem, or seg for land cover).
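The naming scheme can be parsed mechanically. A small sketch follows; the helper name is illustrative, and the .tif extension for DEM and segmentation files is an assumption based on the GeoTIFF format mentioned below:

```python
import re

# Matches names like '345_5668_rgb.jp2' or '345_5668_dem.tif'
_NAME_RE = re.compile(r"^(\d+)_(\d+)_(rgb|dem|seg)\.(jp2|tiff?)$")

def parse_geonrw_name(fname):
    """Return (easting, northing, kind) for a GeoNRW filename, where the
    two numbers are the UTM zone 32N tile coordinates from the filename."""
    m = _NAME_RE.match(fname)
    if m is None:
        raise ValueError(f"unexpected GeoNRW filename: {fname}")
    easting, northing, kind, _ext = m.groups()
    return int(easting), int(northing), kind
```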

File formats

All data is geocoded and can be opened using QGIS (https://www.qgis.org/). The aerial photographs are stored as JPEG2000 files, the land cover maps and digital elevation models both as GeoTIFFs. The accompanying scripts show how to read the data into Python.


The simulated InSAR building dataset contains 312 simulated SAR image pairs generated from 39 different building models, each simulated from 8 viewing angles. The training set contains 216 samples and the test set 96. Each simulated InSAR sample contains three channels: the master SAR image, the slave SAR image, and the interferometric phase image. This dataset serves the CVCMFF Net for building semantic segmentation of InSAR images.


The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial, and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical limiting factors in further technology development, the benchmarking of new techniques, and the reproducibility of research results. To overcome this problem, we present a freely available dataset suitable for testing control, navigation, and sensor-processing algorithms, among other tasks.

Instructions: 

This repository contains the AURORA dataset, a multi-sensor dataset for robotic ocean exploration.

It is accompanied by the report "AURORA, A multi sensor dataset for robotic ocean exploration", by Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J. Bett, Daniel Jones, Veerle Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail and Andrea Munafo.

Exemplar python code is provided at https://github.com/noc-mars/aurora.


The dataset provided in this repository includes data collected during cruise James Cook 125 (JC125) of the National Oceanography Centre, using the Autonomous Underwater Vehicle Autosub 6000. It is composed of two AUV missions: M86 and M87.

  • M86 contains a sample of multi-beam echosounder data in .all format. It also contains CTD and navigation data in .csv format.

  • M87 contains a sample of the camera and side-scan sonar data. The camera data comprise 8 of the 45,320 images in the original dataset, provided in .raw format (pixels are ordered in the Bayer pattern); each image is 2448x2048. The side-scan sonar folder contains a one-ping sample of side-scan data in .xtf format.

  • The AUV navigation file is provided as part of the data available in each mission in .csv form.
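The Bayer-pattern .raw camera frames in M87 can be decoded with NumPy. A minimal sketch follows, assuming 8-bit samples and an RGGB layout; the actual bit depth and channel order should be checked against the dataset documentation:

```python
import numpy as np

HEIGHT, WIDTH = 2048, 2448  # frame size from the dataset description

def load_bayer_raw(path, height=HEIGHT, width=WIDTH, dtype=np.uint8):
    """Read one raw Bayer frame into a 2-D array (8-bit depth assumed)."""
    return np.fromfile(path, dtype=dtype).reshape(height, width)

def demosaic_half(bayer):
    """Naive 2x2 demosaic assuming an RGGB layout: returns a half-resolution
    RGB image, taking one R, the mean of the two G, and one B per 2x2 block."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2].astype(np.uint16) + bayer[1::2, 0::2]) // 2
    b = bayer[1::2, 1::2]
    return np.dstack([r, g.astype(bayer.dtype), b])
```

A proper bilinear or edge-aware demosaic (e.g., from an image-processing library) would give full-resolution output; the 2x2 scheme above is just the simplest correct starting point.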


The dataset is approximately 200GB in size. A smaller sample is provided at https://github.com/noc-mars/aurora_dataset_sample and contains a sample of about 200MB.

Each individual group of data (CTD, multibeam, side scan sonar, vertical camera) for each mission (M86, M87) is also available to be downloaded as a separate file. 


The files here support the analysis presented in the IEEE Transactions on Geoscience and Remote Sensing paper "Snow Property Inversion from Remote Sensing (SPIReS): A Generalized Multispectral Unmixing Approach with Examples from MODIS and Landsat 8 OLI". Spectral mixture analysis has a history in mapping snow, especially where mixed pixels prevail. By using multiple spectral bands rather than band ratios or band indices, retrievals of the snow properties that affect its albedo lead to more accurate estimates than the widely used age-based models of albedo evolution.

Instructions: 

These HDF5 files contain snow cover over the Sierra Nevada, USA, for water years 2001-2019, derived with the Snow Property Inversion from Remote Sensing (SPIReS) approach. Each file covers one water year (October through September). The files are stored with block compression, so individual days can be read without reading the whole file.

The method is described by E.H. Bair, T. Stillinger, and J. Dozier, "Snow Property Inversion from Remote Sensing (SPIReS): A generalized multispectral unmixing approach with examples from MODIS and Landsat 8 OLI," IEEE Trans. Geosci. Remote Sens., 2020 (manuscript number TGRS-2020-02003). Source code is at https://github.com/edwardbair/SPIRES.

The projection is the Albers equal-area conic (also called the California Teale projection) with the WGS84 datum and 500 m square pixels. The standard meridian for the projection is 120 W; the standard parallels are 34 N and 40.5 N; the false northing is -40,000,000.

The h5 files can be read with several software packages; we use MATLAB. They contain: MATLAB date numbers, ISO dates in YYYYDDD format, geographic information, and spacetime cubes of snow fraction, raw (unadjusted) snow fraction, grain size (um), and dust (ppmw). The spacetime cubes have a slice for each day, beginning on October 1 and ending on September 30.
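The YYYYDDD date codes mentioned above combine a four-digit year with a day-of-year. A small standard-library sketch for decoding them (the function name is illustrative):

```python
from datetime import date, timedelta

def decode_yyyyddd(code):
    """Convert a YYYYDDD date code (year + day-of-year) to a calendar date.
    Day 274 of 2001, for example, is October 1, the start of a water year."""
    year, doy = divmod(int(code), 1000)
    return date(year, 1, 1) + timedelta(days=doy - 1)
```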
