This dataset provides digital images and videos of surface ice conditions collected from two Alberta rivers, the North Saskatchewan River and the Peace River, during the 2016-2017 winter seasons.
Images from the North Saskatchewan River were collected using both Reconyx PC800 Hyperfire Professional game cameras mounted on two bridges in Edmonton and a Blade Chroma UAV equipped with a CGO3 4K camera at the Genesee boat launch.
Data for the Peace River was collected using only the UAV at the Dunvegan Bridge boat launch and Shaftesbury Ferry crossing.
Python code and instructions for using the dataset are available in this repository: https://github.com/abhineet123/river_ice_segmentation
The SWINSEG dataset contains 115 nighttime images of sky/cloud patches along with their corresponding binary ground truth maps. The ground truth annotation was done in consultation with experts from the Singapore Meteorological Services. All images were captured in Singapore using WAHRSIS, a calibrated ground-based whole sky imager, over a period of 12 months from January to December 2016. All image patches are 500x500 pixels in size and were selected considering several factors such as time of image capture, cloud coverage, and seasonal variations.
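Binary ground-truth maps of this kind are typically compared against model predictions with overlap metrics. As a minimal sketch (my own illustration, not part of the dataset or its evaluation protocol), assuming masks are loaded as NumPy boolean arrays, intersection-over-union for a 500x500 patch could be computed as:

```python
import numpy as np

def mask_iou(pred, truth):
    """Intersection-over-union between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, truth).sum() / union

# toy 500x500 example: two overlapping square "cloud" regions
pred = np.zeros((500, 500), dtype=bool)
truth = np.zeros((500, 500), dtype=bool)
pred[100:300, 100:300] = True
truth[200:400, 200:400] = True
print(round(mask_iou(pred, truth), 4))  # → 0.1429 (10000 / 70000)
```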
The paper describes the different rainfall observing techniques available in the City of Genoa (Italy): the Ligurian regional tipping-bucket rain gauge (TBRG) network; the Monte Settepani long-range weather radar (WR) operated by the Ligurian Regional Environmental Protection Agency (ARPAL); and the Smart Rainfall System (SRS), a network of microwave sensors for satellite down-links developed by the University of Genoa and Artys srl.
Empirical line methods (ELM) are frequently used to correct images from aerial remote sensing. Remote sensing of aquatic environments captures only a small amount of energy because the water absorbs much of it, so the signal response of the water is proportionally smaller than that of other land-surface targets.
This dataset presents resources and results from a new approach to calibrating empirical lines that combines reference calibration panels with water samples. The method is optimized with Python algorithms until the best result is reached.
The files are identified sequentially according to the processing step:
- A1-img-nd_samples.xlsx: Digital numbers of water samples extracted from the hyperspectral image
- A2-img-nd_targets.xlsx: Digital numbers of reference targets extracted from the hyperspectral image
- B1-asd-rad_refl_targets.xlsx: Radiance values collected with ASD HandHeld of the reference targets and calculated Reflectance
- B2-asd-simulatedbands_refl.xlsx: Target reflectance values calculated and simulated to match the hyperspectral camera response function
- C1-trios-rad_refl_samples.xlsx: Radiance values collected with TriOS of the water points and calculated Reflectance
- C2-trios-simulatedbands_refl.xlsx: Water reflectance values calculated and simulated to match the hyperspectral camera response function
- D1-nd_data.csv: Digital number extracted from the hyperspectral image (CSV format, this is the input of the algorithm)
- D1-nd_data.xlsx: Digital number extracted from the hyperspectral image (xlsx format)
- D2-r_data.csv: Reflectance calculated from the spectroradiometers measurements (CSV format, this is the input of the algorithm)
- D2-r_data.xlsx: Reflectance calculated from the spectroradiometers measurements (xlsx format)
- D3-r_nd_targets.xlsx: Aggregation of D1 and D2 data for comparison
- E1-calc_coef_line.py: Python algorithm to calibrate and validate the empirical line model
- Fit.py: Python class used to compute linear and exponential fits
- output_graphs.zip: The results of the graphs generated for each of the evaluated combinations. In this package are different graphical representations for each of the combinations of samples and targets, as well as for the exponential and linear fits.
All files in the output folder are self-explanatory: each filename identifies how the ELM was calibrated.
Details and descriptions about the full process steps are in the official paper (under journal review).
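As a rough sketch of what an empirical-line calibration does (a simplified, hypothetical stand-in for E1-calc_coef_line.py, not the authors' actual code), a linear model relating digital numbers to reflectance can be fitted per band with NumPy:

```python
import numpy as np

def fit_empirical_line(dn, reflectance):
    """Fit reflectance = gain * DN + offset for one band by least squares."""
    gain, offset = np.polyfit(dn, reflectance, 1)
    return gain, offset

def apply_empirical_line(dn, gain, offset):
    """Convert digital numbers to reflectance with the fitted coefficients."""
    return gain * np.asarray(dn, dtype=float) + offset

# toy example: DNs and reflectances of two reference panels for one band
target_dn = np.array([500.0, 4000.0])   # dark and bright panels
target_refl = np.array([0.05, 0.50])    # panel reflectances
gain, offset = fit_empirical_line(target_dn, target_refl)
water_refl = apply_empirical_line(2000.0, gain, offset)  # ≈ 0.2429
```

The real workflow additionally uses the water-sample spectra (files A1, C1, C2) to choose among candidate fits, including an exponential variant; this sketch shows only the basic linear case.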
The PS-InSAR method is a technique that exploits persistent scatterers in SAR images, performing the analysis by interfering 25 or more slave images with a master image. The accuracy of this approach depends on the interferogram network: the denser the network and the higher the coherence between images, the more accurate the result. Therefore, the Minimum Spanning Tree (MST) algorithm is used to find the optimum network by jointly considering the temporal baseline, spatial baseline, and coherence of each image pair, rather than the star graph, which interferes all remaining slave images with a single master image.
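A minimal illustration of the MST idea (my own sketch using SciPy, not any particular PS-InSAR package): treat the acquisitions as graph nodes, weight each candidate interferogram by 1 − coherence so that high-coherence pairs are cheap, and keep only the spanning tree of lowest total cost:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# toy pairwise coherence matrix for 4 SAR acquisitions (symmetric, in [0, 1])
coherence = np.array([
    [0.0, 0.9, 0.3, 0.2],
    [0.9, 0.0, 0.8, 0.4],
    [0.3, 0.8, 0.0, 0.7],
    [0.2, 0.4, 0.7, 0.0],
])

# cost = 1 - coherence, so the MST prefers the most coherent interferograms
cost = np.where(coherence > 0, 1.0 - coherence, 0.0)
mst = minimum_spanning_tree(csr_matrix(cost))

# the tree keeps n - 1 = 3 interferograms instead of the full set of 6
pairs = list(zip(*mst.nonzero()))
```

A full implementation would fold temporal and spatial baselines into the edge weights as well; here only coherence is used for brevity.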
This dataset accompanies the IEEE Journal of Oceanic Engineering Special Issue on Verification and Validation of Airgun Source Signature and Sound Propagation Models. The special issue has its origins in the International Airgun Modelling Workshop (IAMW) held in Dublin, Ireland, on 16 July 2016 (Ainslie et al., 2016).
This dataset is a companion to a paper, "Segmentation Convolutional Neural Networks for Automatic Crater Detection on Mars" by DeLatte et al. 2019. DOI link: http://dx.doi.org/10.1109/JSTARS.2019.2918302
These are the segmentation target files for the three targets described in the paper: solid filled, thicker edge, and thinner edge.
These files match with the tiles that can be downloaded from the THEMIS Daytime IR Global Mosaic: http://www.mars.asu.edu/data/thm_dir/
Alternatively, this directory can be used for the download: http://www.mars.asu.edu/data/thm_dir/large/
Use this file pattern to grab the tiles:
- 0 to +30N: thm_dir_N00_*.png
-30N to 0: thm_dir_N-30_*.png
Included here are three targets for the 24 tiles ±30º latitude, 0-360º longitude. (Each tile is 30º by 30º, 7680 x 7680 pixels, and has a resolution of 256 pixels per degree). Craters with 2-32km radius are included, as identified by the Robbins & Hynek global Mars dataset (http://craters.sjrdesign.net/). The original data file for the crater locations and parameters can be found here: http://craters.sjrdesign.net/RobbinsCraterDatabase_20121016.tsv.zip
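Since each tile spans 30º at 256 pixels per degree, converting a crater's coordinates from the Robbins & Hynek database into tile pixel coordinates is simple arithmetic. A sketch under the assumption that pixel (0, 0) is the tile's north-west corner (verify against the mosaic's own georeferencing before relying on it; the km-per-degree constant is an equatorial approximation):

```python
def crater_to_pixel(lat, lon, tile_north_lat, tile_west_lon, ppd=256):
    """Map (lat, lon) in degrees to (col, row) pixels inside a 30-degree tile.

    Assumes the tile origin is its north-west corner: row index grows
    southward, column index grows eastward.
    """
    col = (lon - tile_west_lon) * ppd
    row = (tile_north_lat - lat) * ppd
    return col, row

def radius_km_to_pixels(radius_km, ppd=256, mars_km_per_deg=59.16):
    """Convert a crater radius in km to pixels, using ~59.16 km per degree
    at the Martian equator (approximate value, for illustration only)."""
    return radius_km / mars_km_per_deg * ppd

# a crater at 15ºN, 45ºE falls in the tile covering 0-30ºN, 30-60ºE
col, row = crater_to_pixel(15.0, 45.0, tile_north_lat=30.0, tile_west_lon=30.0)
```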
Segmentation targets for any crater radius range can be created from this file using Python and OpenCV.
To use the dataset for segmentation, download the corresponding THEMIS Daytime IR Global Mosaic tiles; the files provided here then serve as the target images. The filenames of the target files match those of the THEMIS Daytime IR Global Mosaic tiles.
The file names for each type match the following patterns:
- solid filled: thm_dir_N*_2_32_km_segrng.png
- thicker edge (8): thm_dir_N*_2_32_km_segrng_8_edge.png
- thinner edge (4): thm_dir_N*_2_32_km_segrng_4_edge.png
(segrng = segmentation range, referring to the 2-32 km radius range of craters in this dataset)
The numbers 4 and 8 above refer to the thickness parameter of OpenCV's circle drawing function, described here: https://docs.opencv.org/3.0-alpha/modules/imgproc/doc/drawing_functions....
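As an illustrative sketch of how such targets can be rendered (a NumPy-only stand-in I wrote to mimic `cv2.circle`, where thickness −1 gives a filled disc and a positive thickness gives a rim of roughly that width):

```python
import numpy as np

def draw_crater(mask, cx, cy, radius, thickness=-1):
    """Rasterize one crater onto a binary mask.

    thickness=-1 fills the circle (the 'solid filled' target); a positive
    thickness keeps only an annulus of that width around the rim,
    mimicking the 'edge' targets drawn with OpenCV's cv2.circle.
    """
    h, w = mask.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
    if thickness < 0:
        mask[dist <= radius] = 1
    else:
        half = thickness / 2.0
        mask[(dist >= radius - half) & (dist <= radius + half)] = 1
    return mask

# a filled target and a 'thicker edge' (8) target for one toy crater
filled = draw_crater(np.zeros((64, 64), dtype=np.uint8), 32, 32, 20, -1)
edge8 = draw_crater(np.zeros((64, 64), dtype=np.uint8), 32, 32, 20, 8)
```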
This dataset is an image set of the most strongly glint-affected region of the Capivara reservoir, an inland water body in Brazil. We carried out a flight survey in September 2016 over the confluence of the Tibagi and Paranapanema Rivers. We used a hyperspectral camera manufactured by Rikola, model FPI2014, which collects 25 spectral bands at the following intervals and full widths at half maximum (FWHM), both expressed in nanometers (nm):
Each folder contains the resources generated in a specific processing step. The generated resources, step by step, are:
1-roi_target.zip: ROI and river target shapefiles to delimit the process;
2-roitarget.zip: intersection of the ROI and river target;
3-imgs_roi.zip: Images clipped by the ROI target;
4-virtual_bands_None.zip: Virtual bandset generated using GDAL;
5ra-pixel_refs.zip: CSV file of mode values of each image band;
5rb-img_ref_fast_None.zip: Multiscale image references generated by the method proposed by the authors;
5rc-img_ref_gaussianmedian_None.zip: Local image references generated with Gaussian filter;
6ra-mosaics_refmodaNone.zip: Global reference mosaic;
6rb-mosaics_refDQNone.zip: Multiscale reference mosaic;
6rc-mosaics_ref_gaussianNone.zip: Local reference mosaic;
6sa-mosaic_first_None.zip: First value mosaic;
6sb-mosaic_last_None.zip: Last value mosaic;
6sc-mosaic_mean_None.zip: Mean value mosaic;
6sd-mosaic_median_None.zip: Median value mosaic;
6se-mosaic_maximum_None.zip: Maximum value mosaic;
6sf-mosaic_minimum_None.zip: Minimum value mosaic;
Details and descriptions about the full process steps are in the official paper (under journal review).
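The first/last/mean/median/maximum/minimum mosaics above differ only in the per-pixel reducer applied across overlapping images. A simplified NumPy sketch of that idea (my own illustration, ignoring the georeferencing and nodata handling a real pipeline needs):

```python
import numpy as np

def composite(stack, method="mean"):
    """Reduce a stack of co-registered images (n, rows, cols) to one mosaic."""
    reducers = {
        "first": lambda s: s[0],
        "last": lambda s: s[-1],
        "mean": lambda s: s.mean(axis=0),
        "median": lambda s: np.median(s, axis=0),
        "maximum": lambda s: s.max(axis=0),
        "minimum": lambda s: s.min(axis=0),
    }
    return reducers[method](np.asarray(stack, dtype=float))

# toy stack of three overlapping 2x2 images with constant values 1, 3, 8
stack = [np.full((2, 2), v) for v in (1.0, 3.0, 8.0)]
mean_mosaic = composite(stack, "mean")      # every pixel = 4.0
median_mosaic = composite(stack, "median")  # every pixel = 3.0
```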
After a hurricane, damage assessment is critical to emergency managers and first responders so that resources can be planned and allocated appropriately. One way to gauge the damage extent is to detect and quantify the number of damaged buildings, which is traditionally done by driving around the affected area. This process can be labor-intensive and time-consuming. In this paper, utilizing the availability and readiness of satellite imagery, we propose to improve the efficiency and accuracy of damage detection via image classification algorithms.
To extract the dataset, please unzip the main file 'Post-hurricane.zip'. There will be four folders inside:
- train_another: the training data; 5000 images of each class
- validation_another: the validation data; 1000 images of each class
- test_another: the unbalanced test data; 8000/1000 images of the damaged/undamaged classes
- test: the balanced test data; 1000 images of each class
All images are in JPEG format; the class label is the name of the parent folder containing the images.
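Since the class label is encoded in the parent folder name, a small helper (a hypothetical convenience, not shipped with the dataset; the subfolder name "damage" below is an assumed example) can recover it from a file path:

```python
from pathlib import Path

def label_from_path(image_path):
    """Return the class label: the name of the folder holding the image."""
    return Path(image_path).parent.name

# hypothetical example path inside the extracted archive
label = label_from_path("Post-hurricane/train_another/damage/img_001.jpeg")
print(label)  # → damage
```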
The Xuzhou dataset was collected by an airborne HYSPEX hyperspectral camera over the Xuzhou peri-urban site in November 2014. The dataset consists of 500 × 260 pixels, with a very high spatial resolution of 0.73 m/pixel. After removing noisy bands, 436 spectral bands covering 415 nm to 2508 nm were used in the experiment.