The Basil/Tulsi plant is cultivated in India both for its spiritual significance and for essential-oil and pharmaceutical uses. Two types of Basil are cultivated in India: Krushna Tulsi (Black Tulsi) and Ram Tulsi (Green Tulsi).

Many investigators are working on disease detection in Basil leaves, where the following diseases occur:

1) Gray Mold

2) Basal Root Rot, Damping Off

3) Fusarium Wilt and Crown Rot

Instructions: 

Many investigators are working on disease detection in Basil leaves, where the following diseases occur:

1) Gray Mold

2) Basal Root Rot, Damping Off

3) Fusarium Wilt and Crown Rot

4) Leaf Spot

5) Downy Mildew

Quality parameters (Healthy/Diseased) are assessed, and leaves are also classified based on texture and color. For object detection, researchers use algorithms and frameworks such as YOLO, TensorFlow, OpenCV, deep learning, and CNNs.

The dataset was collected from the Amravati, Pune, and Nagpur regions of Maharashtra state; the images are in .jpg format.
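The Healthy/Diseased screening described above is often driven by simple color statistics. Below is a minimal sketch of one such feature; the image is assumed to be already decoded into an H x W x 3 RGB array (e.g. via OpenCV's `cv2.imread` plus a BGR-to-RGB swap), and the green-ratio rule and its threshold are purely illustrative, not the authors' method.

```python
import numpy as np

def green_ratio(rgb: np.ndarray) -> float:
    """Fraction of total intensity carried by the green channel.

    rgb: H x W x 3 uint8 array in R, G, B channel order (assumed).
    A healthy green leaf typically scores higher than a yellowed or
    spotted one.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum()
    if total == 0:
        return 0.0
    return float(rgb[..., 1].sum() / total)

def looks_healthy(rgb: np.ndarray, threshold: float = 0.4) -> bool:
    # Hypothetical decision rule for illustration only.
    return green_ratio(rgb) >= threshold

# Synthetic 2x2 "leaf" patch that is mostly green:
leaf = np.array([[[10, 200, 10], [20, 180, 30]],
                 [[15, 190, 20], [10, 210, 15]]], dtype=np.uint8)
print(round(green_ratio(leaf), 2))  # -> 0.86
```

In practice such hand-crafted features are a baseline; the CNN-based detectors mentioned above learn texture and color cues directly from the .jpg images.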


Wildfires are among the deadliest and most dangerous natural disasters in the world. They burn vast areas of forest and endanger the lives of many humans and animals. Predicting fire behavior can help firefighters manage and schedule resources for future incidents, and it also reduces the risks firefighters face. Recent advances in aerial imaging show that it can be beneficial in wildfire studies.

Instructions: 

The aerial pile burn detection dataset consists of different repositories. The first is a raw video recorded using the Zenmuse X4S camera. The format of this file is MP4. The duration of the video is 966 seconds at 29 frames per second (FPS). The size of this repository is 1.2 GB. This first video was used for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The second is a raw video recorded using the Zenmuse X4S camera. The duration of the video is 966 seconds at 29 FPS. The size of this repository is 503 MB. This video shows the behavior of one pile from the start of burning. The resolution of these two videos is 1280x720.

The third video is 89 seconds of WhiteHot heatmap footage from the thermal camera. The size of this repository is 45 MB. The fourth is 305 seconds of GreenHot heatmap footage with a size of 153 MB. The fifth repository is 25 minutes of fusion heatmap footage with a size of 2.83 GB. All three thermal videos were recorded by the FLIR Vue Pro R thermal camera at 30 FPS and a resolution of 640x512. The format of all these videos is MOV.

The sixth video is 17 minutes long, from the DJI Phantom 3 camera. This footage is used for the "Fire-vs-NoFire" image classification problem (test dataset). The FPS is 30, the size is 32 GB, the resolution is 3840x2160, and the format is MOV.

The seventh repository contains 39,375 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (training/validation dataset). The size of this repository is 1.3 GB and the format is JPEG.

The eighth repository contains 8,617 frames resized to 254x254 for the "Fire-vs-NoFire" image classification problem (test dataset). The size of this repository is 301 MB and the format is JPEG.

The ninth repository contains 2,003 fire frames with a resolution of 3840x2160 for the fire segmentation problem (train/val/test dataset). The size of this repository is 5.3 GB and the format is JPEG.

The last repository contains the 2,003 ground-truth mask frames for the fire segmentation problem. The resolution of each mask is 3840x2160. The size of this repository is 23.4 MB.
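The ten repositories above can be summarized programmatically, which is handy when scripting downloads or sanity checks. The numbering and shorthand below are my own; sizes and formats are as stated in the text, and `None` marks a format the description does not state.

```python
# Summary of the ten FLAME repositories described above.
REPOS = {
    1: {"content": "raw video, train/val classification", "format": "MP4", "size": "1.2 GB"},
    2: {"content": "raw video, single pile burn", "format": None, "size": "503 MB"},
    3: {"content": "WhiteHot thermal heatmap", "format": "MOV", "size": "45 MB"},
    4: {"content": "GreenHot thermal heatmap", "format": "MOV", "size": "153 MB"},
    5: {"content": "fusion thermal heatmap", "format": "MOV", "size": "2.83 GB"},
    6: {"content": "Phantom 3 video, test classification", "format": "MOV", "size": "32 GB"},
    7: {"content": "39,375 frames, 254x254, train/val", "format": "JPEG", "size": "1.3 GB"},
    8: {"content": "8,617 frames, 254x254, test", "format": "JPEG", "size": "301 MB"},
    9: {"content": "2,003 fire frames, segmentation", "format": "JPEG", "size": "5.3 GB"},
    10: {"content": "2,003 ground-truth masks", "format": None, "size": "23.4 MB"},
}

# Example query: which repositories hold still frames rather than video?
jpeg_repos = [k for k, v in REPOS.items() if v["format"] == "JPEG"]
print(jpeg_repos)  # -> [7, 8, 9]
```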

The preprint article of this dataset is available here:

https://arxiv.org/pdf/2012.14036.pdf

For more information, please see the table at: 

https://github.com/AlirezaShamsoshoara/Fire-Detection-UAV-Aerial-Image-Classification-Segmentation-UnmannedAerialVehicle

To find other projects and articles in our group:

https://www.cefns.nau.edu/~fa334/


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Sugarcane vegetation on path-loss between CC2650 and CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)".


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy Rice vegetation on path-loss between CC2650 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for Paddy Rice crop monitoring over the period 03/07/2019 to 18/11/2019.



Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Paddy Rice vegetation on received signal strength between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for Paddy Rice crop monitoring over the period 01/07/2020 to 03/11/2020.


Here we introduce the largest subject-rated database of its kind to date, namely, "Effect of Millet vegetation on path-loss between CC2538 SoC 32-bit Arm Cortex-M3 based sensor nodes operating at 2.4 GHz Radio Frequency (RF)". This database contains received signal strength measurements collected through campaigns in the IEEE 802.15.4 standard precision agricultural monitoring infrastructure developed for Millet crop monitoring over the period 03/06/2020 to 04/10/2020.
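These databases relate path loss to received signal strength through the usual link budget. As a reminder of that relationship, here is a minimal sketch; the transmit power and antenna gains below are hypothetical placeholders, not values taken from the database.

```python
def path_loss_db(tx_power_dbm: float, rssi_dbm: float,
                 tx_gain_dbi: float = 0.0, rx_gain_dbi: float = 0.0) -> float:
    """Link-budget path loss: PL = Ptx + Gtx + Grx - Prx.

    Powers in dBm, gains in dBi, result in dB.
    """
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rssi_dbm

# Example: a node transmitting at 0 dBm (a common setting for 2.4 GHz
# 802.15.4 SoCs) whose packet arrives at -70 dBm sees 70 dB of path loss.
print(path_loss_db(0.0, -70.0))  # -> 70.0
```

Vegetation studies like those above then examine how this loss grows as the crop canopy develops between the nodes.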


The current maturity of autonomous underwater vehicles (AUVs) has made their deployment practical and cost-effective, such that many scientific, industrial, and military applications now include AUV operations. However, the logistical difficulties and high costs of operating at sea are still critical limiting factors for further technology development, the benchmarking of new techniques, and the reproducibility of research results. To overcome this problem, we present a freely available dataset suitable for testing control, navigation, and sensor-processing algorithms, among other tasks.

Instructions: 

This repository contains the AURORA dataset, a multi-sensor dataset for robotic ocean exploration.

It is accompanied by the report "AURORA, A multi sensor dataset for robotic ocean exploration", by Marco Bernardi, Brett Hosking, Chiara Petrioli, Brian J. Bett, Daniel Jones, Veerle Huvenne, Rachel Marlow, Maaten Furlong, Steve McPhail and Andrea Munafo.

Exemplar python code is provided at https://github.com/noc-mars/aurora.

 

The dataset provided in this repository includes data collected during cruise James Cook 125 (JC125) of the National Oceanography Centre, using the Autonomous Underwater Vehicle Autosub 6000. It is composed of two AUV missions: M86 and M87.

  • M86 contains a sample of multi-beam echosounder data in .all format. It also contains CTD and navigation data in .csv format.

  • M87 contains a sample of the camera and side-scan sonar data. The camera data contain 8 of the 45,320 images in the original dataset and are provided in .raw format (pixels are ordered in Bayer format). Each image is 2448x2048 pixels. The side-scan sonar folder contains a one-ping sample of side-scan data in .xtf format.

  • The AUV navigation file is provided as part of the data available in each mission in .csv form.
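Since the camera frames are stored as Bayer mosaics, they must be demosaiced before use. Below is a minimal half-resolution demosaic sketch in NumPy; the RGGB pattern, bit depth, and byte order are assumptions for illustration, as the description above does not specify them.

```python
import numpy as np

def demosaic_rggb_half(raw: np.ndarray) -> np.ndarray:
    """Half-resolution demosaic of an RGGB Bayer mosaic.

    raw: H x W single-channel array with H and W even. Each 2x2 cell
    [R G / G B] becomes one RGB pixel, averaging the two green samples.
    The RGGB layout is assumed, not confirmed by the dataset description.
    """
    r = raw[0::2, 0::2].astype(np.float64)
    g = (raw[0::2, 1::2].astype(np.float64) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(np.float64)
    return np.stack([r, g, b], axis=-1)

# A single 2x2 mosaic cell -> one RGB pixel (R=100, G=(60+80)/2=70, B=40).
mosaic = np.array([[100, 60],
                   [80, 40]], dtype=np.uint16)
rgb = demosaic_rggb_half(mosaic)
print(rgb.shape)  # -> (1, 1, 3)
```

For production use, a library routine such as OpenCV's Bayer conversion would be preferable; this sketch only illustrates the pixel layout.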

 

The dataset is approximately 200 GB in size. A smaller sample of about 200 MB is provided at https://github.com/noc-mars/aurora_dataset_sample.

Each individual group of data (CTD, multibeam, side scan sonar, vertical camera) for each mission (M86, M87) is also available to be downloaded as a separate file. 


The dataset is composed of digital signals obtained from capacitive sensor electrodes immersed in water or in oil. Each signal, stored in one row, is composed of 10 consecutive intensity values and a label in the last column. The label is +1 for a water-immersed sensor electrode and -1 for an oil-immersed sensor electrode. This dataset should be used to train a classifier to infer the type of material in which an electrode is immersed (water or oil), given a sample signal composed of 10 consecutive values.

Instructions: 

The dataset is acquired from a capacitive sensor array composed of a set of sensor electrodes immersed in three different phases: air, oil, and water. It is composed of digital signals obtained from one electrode while it was immersed in the oil and water phases at different times. 

## Experimental setup

The experimental setup is composed of a capacitive sensor array that holds a set of sensing cells (electrodes) distributed vertically along the sensor body (PCB). The electrodes are excited sequentially and the voltage (digital) of each electrode is measured and recorded. The voltages of each electrode are converted to intensity values by the following equation:

intensity = ( |Measured Voltage - Base Voltage| / Base Voltage ) x 100

Where the Base Voltage is the voltage of the electrode recorded while the electrode is immersed in air. The intensity values are stored in the dataset instead of the raw voltage values.
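The conversion above maps directly to code; a minimal sketch (the voltages in the example are hypothetical, not taken from the dataset):

```python
def intensity(measured_voltage: float, base_voltage: float) -> float:
    """Relative intensity (%) of an electrode reading.

    base_voltage is the electrode's reading in air, as defined above:
    intensity = (|measured - base| / base) x 100
    """
    return abs(measured_voltage - base_voltage) / base_voltage * 100.0

# Example with hypothetical voltages: base reading 2.0 V in air,
# 1.5 V when immersed -> 25% intensity.
print(intensity(1.5, 2.0))  # -> 25.0
```

Note that the absolute value makes the intensity direction-agnostic: a reading above or below the air baseline by the same amount yields the same intensity.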

## Experimental procedure 

The aim of the experiments is to obtain fixed-size intensity signals from one electrode (the target electrode) while it is immersed in water or oil, labeled +1 (water) or -1 (oil). For this purpose, the following procedure was applied:

- The linear actuator was programmed to move the sensor up and down at a constant speed (20 mm / second).

- The actuator stops when reaching the upper and bottom positions for a fixed duration of time (60 seconds).

- At the upper position, the target electrode is immersed in oil; intensity signals are labeled -1 and sent to the PC.

- At the bottom position, the target electrode is immersed in water; intensity signals are labeled +1 and sent to the PC.

- The sampling period is 100 ms; since each intensity signal contains 10 values, it takes 1 second to record one intensity signal.

## Environmental conditions

The experiments were performed under indoor laboratory conditions at a room temperature of around 23 degrees Celsius. 

## Dataset structure 

The signals included in the dataset are composed of intensity signals each with 10 consecutive values and a label in the last column. The label is +1 for a water-immersed electrode and -1 for an oil-immersed electrode.

## Application

The dataset should be used to train a classifier to differentiate between electrodes immersed in water and oil phases given a sample signal.
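As a sketch of this application, the snippet below trains a nearest-centroid classifier on synthetic stand-ins for the 10-value signals (the intensity levels 30 and 10 are invented for illustration; the nearest-centroid rule is just one simple classifier choice, not the one prescribed by the dataset):

```python
import random

def train_centroids(signals, labels):
    """Compute the per-class mean signal (nearest-centroid training)."""
    sums, counts = {}, {}
    for sig, lab in zip(signals, labels):
        acc = sums.setdefault(lab, [0.0] * len(sig))
        for i, v in enumerate(sig):
            acc[i] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, sig):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], sig))

# Synthetic stand-in for the dataset: +1 (water) signals near intensity 30,
# -1 (oil) signals near intensity 10, each 10 values long.
random.seed(0)
signals = [[30 + random.gauss(0, 2) for _ in range(10)] for _ in range(50)] + \
          [[10 + random.gauss(0, 2) for _ in range(10)] for _ in range(50)]
labels = [+1] * 50 + [-1] * 50

centroids = train_centroids(signals, labels)
print(predict(centroids, [29.0] * 10))  # -> 1  (water)
print(predict(centroids, [11.0] * 10))  # -> -1 (oil)
```

With the real dataset, each CSV row would supply the first 10 columns as the signal and the last column as the ±1 label.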


Radio-frequency noise mapping data collected from Downtown, Back Bay and North End neighborhoods within Boston, MA, USA in 2018 and 2019.

Instructions: 

Data consist of :
* distance, in meters, along the measurement path. This field is likely not useful for anyone other than the authors, but is included here for completeness.
* geographic location of the measurement, in decimal degrees, WGS84
* median external radio-frequency noise power, measured in a 1 MHz bandwidth about a center frequency of 142.0 MHz, in dBm
* peak external radio-frequency noise power, also measured in a 1 MHz bandwidth about a center frequency of 142.0 MHz, in dBm. Here, peak power is defined as the threshold where 99.99% of the data lie below this value.
* for North End and Back Bay datasets, the official zoning district containing the measurement location is included. Measurements in the Downtown data were all collected within Business and Mixed Use zoning districts, and thus are not listed.
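The two noise-power fields can be reproduced from raw power samples. Below is a sketch in NumPy; reading the 99.99% threshold as an empirical percentile is one reasonable interpretation of the definition above, and the sample values are synthetic stand-ins, not measurements from this dataset.

```python
import numpy as np

def noise_summary(power_dbm: np.ndarray) -> dict:
    """Median and 'peak' (99.99th-percentile) noise power in dBm."""
    return {
        "median_dbm": float(np.median(power_dbm)),
        "peak_dbm": float(np.percentile(power_dbm, 99.99)),
    }

# Synthetic stand-in for a run of 1-MHz-bandwidth power readings:
rng = np.random.default_rng(0)
samples = -120.0 + 5.0 * rng.standard_normal(100_000)
summary = noise_summary(samples)
print(summary["median_dbm"] < summary["peak_dbm"])  # -> True
```

Strictly, such percentiles should be computed on linear power (mW) rather than dBm when averaging is involved, but for order statistics like the median and a percentile threshold the dB and linear domains give the same answer.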

