Wildfires are among the deadliest and most dangerous natural disasters in the world. They burn vast areas of forest and endanger the lives of humans and animals. Predicting fire behavior can help firefighters manage and schedule resources for future incidents and also reduces the risks firefighters face. Recent advances in aerial imaging show that aerial images can be beneficial in wildfire studies.

Instructions: 

The aerial pile burn detection dataset consists of several repositories. The first is a raw video recorded using the Zenmuse X4S camera. The format of this file is MP4. The duration of the video is 966 seconds at 29 frames per second (FPS), and the size of this repository is 1.2 GB. This first video was used for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The second is also a raw video recorded using the Zenmuse X4S camera. The duration of the video is 966 seconds at 29 FPS, and the size of this repository is 503 MB. This video shows the behavior of one pile from the start of burning. The resolution of these two videos is 1280x720.

The third video is 89 seconds of WhiteHot heatmap footage from the thermal camera. The size of this repository is 45 MB. The fourth is 305 seconds of GreenHot heatmap footage with a size of 153 MB. The fifth repository is 25 minutes of fusion heatmap footage with a size of 2.83 GB. All three thermal videos were recorded by the FLIR Vue Pro R thermal camera at 30 FPS and a resolution of 640x512. The format of all these videos is MOV.

The sixth video is 17 minutes long and was recorded with the DJI Phantom 3 camera. This footage is used for the "Fire-vs-NoFire" image classification problem (test dataset). The FPS is 30, the size is 32 GB, the resolution is 3840x2160, and the format is MOV.

The seventh repository contains 39,375 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The size of this repository is 1.3 GB and the format is JPEG.

The eighth repository contains 8,617 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (test dataset). The size of this repository is 301 MB and the format is JPEG.
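The resized frames in the seventh and eighth repositories appear to be derived from the raw videos described above. A minimal Python/OpenCV sketch of extracting and resizing frames is given below, for illustration only; the input file name, the one-frame-per-second sampling, and the output naming are assumptions, not the exact procedure used to build the dataset.

# Minimal sketch: extract frames from a raw video and resize them to 254x254
# for the "Fire-vs-NoFire" classification task. The sampling rate and file
# names are assumptions, not the procedure used to build the dataset.
import cv2

cap = cv2.VideoCapture("pile_burn.mp4")       # hypothetical input file name
fps = int(round(cap.get(cv2.CAP_PROP_FPS)))   # ~29 for the Zenmuse X4S videos

saved = index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % fps == 0:                      # keep roughly one frame per second
        small = cv2.resize(frame, (254, 254))
        cv2.imwrite(f"frame_{saved:06d}.jpg", small)
        saved += 1
    index += 1
cap.release()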

The ninth repository contains 2,003 fire frames with a resolution of 3480x2160 for the fire segmentation problem (train/val/test dataset). The size of this repository is 5.3 GB and the format is JPEG.

The last repository contains 2,003 ground truth mask frames for the fire segmentation problem. The resolution of each mask is 3480x2160. The size of this repository is 23.4 MB.
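A minimal sketch of pairing the fire frames with their ground-truth masks in Python is given below; the folder names, file extensions, and name-based matching are assumptions about the unpacked layout, not part of the dataset description.

# Minimal sketch: pair segmentation frames with their ground-truth masks.
# Folder names, extensions and name-based matching are assumptions.
from pathlib import Path
from PIL import Image

def load_pairs(frame_dir="Images", mask_dir="Masks"):
    frames = sorted(Path(frame_dir).glob("*.jpg"))
    masks = sorted(Path(mask_dir).glob("*.png"))
    assert len(frames) == len(masks), "expected one mask per fire frame"
    for frame_path, mask_path in zip(frames, masks):
        frame = Image.open(frame_path).convert("RGB")  # fire frame
        mask = Image.open(mask_path).convert("L")      # binary ground-truth mask
        yield frame, mask

for frame, mask in load_pairs():
    print(frame.size, mask.size)
    break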

The published article is available here:

https://www.sciencedirect.com/science/article/pii/S1389128621001201

The preprint article of this dataset is available here:

https://arxiv.org/pdf/2012.14036.pdf

For more information, please see the table at:

https://github.com/AlirezaShamsoshoara/Fire-Detection-UAV-Aerial-Image-Classification-Segmentation-UnmannedAerialVehicle

To find other projects and articles in our group:

https://www.cefns.nau.edu/~fa334/


The LGP (light guide plate) dataset, LGPSSD, consists of LGP samples collected from an industrial site through the image acquisition device of an LGP defect detection system. In our dataset, NG samples are regarded as positive samples, and OK samples are regarded as negative samples.

Instructions: 

As noted above, NG samples are regarded as positive samples and OK samples as negative samples. Each sample is a grayscale image with a size of 224 x 224 pixels and has two types of labels: a mask label, used to supervise the training of the segmentation subnet, and a classification label (NG corresponds to 1, OK corresponds to 0), used to supervise the training of the decision subnet. The dataset contains 422 positive samples and 400 negative samples in total.
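A minimal PyTorch Dataset sketch for samples of this form (a 224x224 grayscale image, a pixel-wise mask, and a binary NG/OK label) is given below; the folder layout, file names, and the derivation of the class label from the mask are assumptions for illustration only.

# Minimal PyTorch Dataset sketch for LGPSSD-style samples. The folder layout,
# file naming and the mask-to-label rule (nonzero mask => NG) are assumptions.
from pathlib import Path

import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class LGPDataset(Dataset):
    def __init__(self, image_dir, mask_dir):
        self.images = sorted(Path(image_dir).glob("*.png"))
        self.mask_dir = Path(mask_dir)
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img_path = self.images[idx]
        image = self.to_tensor(Image.open(img_path).convert("L"))  # 1x224x224
        mask = self.to_tensor(Image.open(self.mask_dir / img_path.name).convert("L"))
        label = torch.tensor(1.0 if mask.max() > 0 else 0.0)       # NG=1, OK=0
        return image, mask, label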

Characteristics: the density of the light guide point distribution differs between LGP images, and the LGP defects vary in size, shape, and brightness.


This dataset consists of orthorectified aerial photographs, LiDAR-derived digital elevation models, and segmentation maps with 10 classes, acquired through the open data program of the German state of North Rhine-Westphalia (https://www.opengeodata.nrw.de/produkte/) and refined with OpenStreetMap. Please check the license information (http://www.govdata.de/dl-de/by-2-0).

Instructions: 

Dataset description

The data was mostly acquired over urban areas in North Rhine-Westphalia, Germany. Since the acquisition dates of the aerial photographs and the LiDAR data do not match exactly, there can be discrepancies in what they show and in which season, e.g., trees change or lose their leaves in autumn. In our experience, these differences are not drastic but should be kept in mind.

We have included two Python scripts: plot_examples.py creates the example image used on this website, and calc_and_plot_stats.py calculates and plots the class statistics. Furthermore, we published the code used to create the dataset at https://github.com/gbaier/geonrw, which makes it easy to extend the dataset with other areas in North Rhine-Westphalia. The repository also contains a PyTorch data loader.

This multimodal dataset should be useful for a variety of tasks, such as image segmentation using multiple inputs, height estimation from the aerial photographs, or semantic image synthesis.

Organization

Similar to the original source of the data (https://www.opengeodata.nrw.de/produkte/geobasis/lbi/dop/dop_jp2_f10_paketiert/), we organize all samples by the city over which they were acquired. The filenames, e.g., 345_5668_rgb.jp2, consist of the UTM zone 32N coordinates and the data type (RGB, DEM, or seg for land cover).

File formats

All data is geocoded and can be opened using QGIS (https://www.qgis.org/). The aerial photographs are stored as JPEG2000 files, the land cover maps and digital elevation models both as GeoTIFFs. The accompanying scripts show how to read the data into Python.
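As an alternative to the provided scripts, one tile can be read directly in Python with rasterio, as sketched below; this requires a GDAL build with JPEG2000 support, and the GeoTIFF file names are assumptions based on the naming convention above.

# Minimal sketch: read one GeoNRW tile with rasterio. The coordinates are a
# placeholder and the GeoTIFF suffixes are assumed from the naming convention.
import rasterio

tile = "345_5668"  # UTM zone 32N coordinates from the file name

with rasterio.open(f"{tile}_rgb.jp2") as src:
    rgb = src.read()    # (3, H, W) aerial photograph
with rasterio.open(f"{tile}_dem.tif") as src:
    dem = src.read(1)   # (H, W) LiDAR-derived elevation model
with rasterio.open(f"{tile}_seg.tif") as src:
    seg = src.read(1)   # (H, W) land cover map with 10 classes

print(rgb.shape, dem.shape, seg.shape)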


Diabetic Retinopathy is the second largest cause of blindness in diabetic patients. Early diagnosis or screening can prevent visual loss. Nowadays, several computer-aided algorithms have been developed to detect the early signs of Diabetic Retinopathy, i.e., microaneurysms. The AGAR300 dataset presented here facilitates researchers in benchmarking microaneurysm (MA) detection algorithms using digital fundus images. Currently, we have released the first set of the database, which consists of 28 color fundus images showing signs of microaneurysms.

Instructions: 

The files correspond to the work reported in the paper titled "A novel automated system of discriminating Microaneurysms in fundus images". The images were taken with a fundus photography machine at a resolution of 2448x3264. This dataset contains Diabetic Retinopathy images, and users of this dataset should cite the following article.

 

D. Jeba Derwin, S. Tamil Selvi, O. Jeba Singh, B. Priestly Shan, "A novel automated system of discriminating Microaneurysms in fundus images", Biomedical Signal Processing and Control, Vol. 58, 2020, Article 101839, ISSN 1746-8094, https://doi.org/10.1016/j.bspc.2019.101839.

(http://www.sciencedirect.com/science/article/pii/S1746809419304203)


"The friction ridge pattern is a 3D structure which, in its natural state, is not deformed by contact with a surface''. Building upon this rather trivial observation, the present work constitutes a first solid step towards a paradigm shift in fingerprint recognition from its very foundations. We explore and evaluate the feasibility to move from current technology operating on 2D images of elastically deformed impressions of the ridge pattern, to a new generation of systems based on full-3D models of the natural nondeformed ridge pattern itself.

Instructions: 

The present data release contains data from 2 subjects of the 3D-FLARE DB.

 

These data are released as a sample of the complete database, as these 2 subjects gave their specific consent to the distribution of their 3D fingerprint samples.

 

The acquisition system and the database are described in the article:

 

[ART1] J. Galbally, L. Beslay and G. Böstrom, "FLARE: A Touchless Full-3D Fingerprint Recognition System Based on Laser Sensing", IEEE ACCESS, vol. 8, pp. 145513-145534, 2020. 

DOI: 10.1109/ACCESS.2020.3014796.

 

We refer the reader to this article for any further details on the data.

 

This sample release contains the following folders:

 

- 1_rawData: it contains the 3D fingerprint samples as they were captured by the sensor described in [ART1], with no processing. This folder includes the same 3D fingerprints in two different formats:

* MATformat: 3D fingerprints in MATLAB format

* PLYformat: 3D fingerprints in PLY format

 

- 2_processedData: it contains the 3D fingerprint samples after the two initial processing steps carried out before using the samples for recognition purposes. These files are in MATLAB format. This folder includes:

* 2a_Segmented: 3D fingerprints after being segmented according to the process described in Sect. V of [ART1]

* 2b_Detached: 3D fingerprints after being detached according to the process described in Sect. VI of [ART1]

 

The naming convention of the files is as follows (a small parsing sketch is given after this list): XXXX_AAY_SZZ

XXXX: 4 digit identifier for the user in the database

AA: finger identifier, it can take values: LI (Left Index), LM (Left Middle), RI (Right Index), RM (Right middle)

Y: sample number, with values 0 to 4

ZZ: acquisition speed, it can take values 10, 30 or 50 mm/sec
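A small Python sketch for parsing this naming convention is shown below; the example file name is hypothetical and the file extension is omitted.

# Minimal sketch: parse the XXXX_AAY_SZZ naming convention described above.
import re

FINGERS = {"LI": "Left Index", "LM": "Left Middle",
           "RI": "Right Index", "RM": "Right Middle"}

def parse_name(stem):
    # e.g. "0001_LI2_S30" -> user 0001, Left Index, sample 2, 30 mm/sec
    m = re.fullmatch(r"(\d{4})_([A-Z]{2})(\d)_S(\d{2})", stem)
    if m is None:
        raise ValueError(f"unexpected file name: {stem}")
    user, finger, sample, speed = m.groups()
    return {"user": user, "finger": FINGERS[finger],
            "sample": int(sample), "speed_mm_per_sec": int(speed)}

print(parse_name("0001_LI2_S30"))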

 

With the data files we also provide a series of example MATLAB scripts to visualise the 3D fingerprints:

readplytomatrix.m

showFilesPLYformat.m

showFilesRaw.m

showFilesSegmented.m

 

We cannot guarantee the correct functioning of these scripts; it may depend on the MATLAB version you are running.
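If MATLAB is not available, the PLY files in 1_rawData/PLYformat can in principle also be inspected from Python; the sketch below uses the plyfile and matplotlib packages and assumes a standard vertex element with x, y, z properties (not verified against the actual files; the file name is hypothetical).

# Hedged Python alternative to the MATLAB visualisation scripts: load one PLY
# sample and plot its point cloud. Assumes a standard vertex element (x, y, z).
import numpy as np
import matplotlib.pyplot as plt
from plyfile import PlyData

ply = PlyData.read("0001_LI0_S30.ply")   # hypothetical file name
vertex = ply["vertex"]
xyz = np.column_stack([vertex["x"], vertex["y"], vertex["z"]])

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(xyz[:, 0], xyz[:, 1], xyz[:, 2], s=0.5)
plt.show()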

 

Two videos of the 3D fingerprint scanner can be checked at:

https://www.youtube.com/watch?v=XfbumvzXxnU

https://www.youtube.com/watch?v=2U6fqPIWzMg&t


The dataset is generated by the fusion of three publicly available datasets: COVID-19 CXR images (https://github.com/ieee8023/covid-chestxray-dataset), the Radiological Society of North America (RSNA) pneumonia detection challenge (https://www.kaggle.com/c/rsna-pneumonia-detection-challenge), and the U.S. National Library of Medicine (USNLM) collected Montgomery County dataset - NLM(MC) (http


We chose 8 publicly available CT volumes of COVID-19 positive patients, available from https://doi.org/10.5281/zenodo.3757476, and used 3D Slicer to generate volumetric annotations of 512*512 dimension for 5 lung lobes, namely the right upper lobe, right middle lobe, right lower lobe, left upper lobe, and left lower lobe. These annotations were validated by a radiologist with over 15 years of experience.

Instructions: 

CT volumes can be downloaded from https://doi.org/10.5281/zenodo.3757476

Volumetric annotations for 5 lobe segments, namely the right upper lobe, right middle lobe, right lower lobe, left upper lobe, and left lower lobe, are saved as segments 1 to 5, respectively.

For scans with prefix coronacases_00x, the corresponding annotations are uploaded with the suffix lobes.

The scans and annotations measure 512*512 and are in .nii format.
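A minimal sketch of loading one scan and its lobe annotation in Python with nibabel is given below; the exact file names are assumptions based on the prefix/suffix convention above.

# Minimal sketch: load a CT scan and its 5-lobe annotation with nibabel.
# The file names are assumptions based on the prefix/suffix convention above.
import nibabel as nib
import numpy as np

ct = nib.load("coronacases_001.nii").get_fdata()            # CT volume, 512x512 slices
lobes = nib.load("coronacases_001_lobes.nii").get_fdata()   # labels 1-5, one per lobe

for segment in range(1, 6):
    print(f"segment {segment}: {int(np.sum(lobes == segment))} voxels")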


The simulated InSAR building dataset contains 312 simulated SAR image pairs generated from 39 different building models. Each building model is simulated at 8 viewing angles. The training set contains 216 samples and the test set contains 96. Each simulated InSAR sample contains three channels: master SAR image, slave SAR image, and interferometric phase image. This dataset serves the CVCMFF Net for building semantic segmentation of InSAR images.


The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical limiting factors in further technology development, the benchmarking of new techniques and the reproducibility of research results. To overcome this problem, we present a freely available dataset suitable for testing control, navigation, and sensor-processing algorithms, among other tasks.

Instructions: 

This repository contains the AURORA dataset, a multi-sensor dataset for robotic ocean exploration.

It is accompanied by the report "AURORA, A multi sensor dataset for robotic ocean exploration", by Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J. Bett, Daniel Jones, Veerle Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail and Andrea Munafo.

Exemplar python code is provided at https://github.com/noc-mars/aurora.

 

The dataset provided in this repository includes data collected during cruise James Cook 125 (JC125) of the National Oceanography Centre, using the Autonomous Underwater Vehicle Autosub 6000. It is composed of two AUV missions: M86 and M87.

  • M86 contains a sample of multi-beam echosounder data in .all format. It also contains CTD and navigation data in .csv format.

  • M87 contains a sample of the camera and side-scan sonar data. The camera data contains 8 of the 45,320 images of the original dataset. The camera data are provided in .raw format (pixels are ordered in a Bayer pattern; see the decoding sketch after this list). Each image is of size 2448x2048. The side-scan sonar folder contains a one-ping sample of side-scan data provided in .xtf format.

  • The AUV navigation file is provided as part of the data available in each mission in .csv form.
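A hedged sketch of decoding one camera frame is given below; the 8-bit depth and RGGB Bayer layout are assumptions that should be checked against the dataset documentation and the exemplar code linked above, and the file name is hypothetical.

# Hedged sketch: decode one AURORA .raw camera frame (2448x2048 Bayer mosaic).
# The 8-bit depth and RGGB layout are assumptions; verify against the dataset docs.
import cv2
import numpy as np

WIDTH, HEIGHT = 2448, 2048

raw = np.fromfile("example_frame.raw", dtype=np.uint8)   # hypothetical file name
mosaic = raw[:WIDTH * HEIGHT].reshape(HEIGHT, WIDTH)
rgb = cv2.cvtColor(mosaic, cv2.COLOR_BayerRG2RGB)        # demosaic the Bayer pattern
cv2.imwrite("example_frame.png", cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))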

 

The dataset is approximately 200 GB in size. A smaller sample of about 200 MB is provided at https://github.com/noc-mars/aurora_dataset_sample.

Each individual group of data (CTD, multibeam, side scan sonar, vertical camera) for each mission (M86, M87) is also available to be downloaded as a separate file. 


Results (including reported and extra results) for LSstab. Please refer to our paper "Efficient real-time video stabilization with a novel least squares formulation and parallel AC-RANSAC".

Instructions: 

Stabilization results for LSstab. Please refer to our paper "Efficient real-time video stabilization with a novel least squares formulation and parallel AC-RANSAC".

 

Stabilization results include:

(1) stabilized videos reported in the paper

(2) extra stabilized videos

(3) challenging videos that LSstab fails to stabilize.

