
The Holoscopic micro-gesture recognition (HoMG) database was recorded with a holoscopic 3D camera and contains 3 conventional gestures performed by 40 participants under different settings and conditions. The principle of holoscopic 3D (H3D) imaging mimics a fly's-eye technique, capturing a true 3D optical model of the scene through a microlens array. For H3D micro-gesture recognition, the HoMG database has two subsets: a video subset of 960 videos and an image subset of 30,635 images, both covering three types of micro-gestures (classes).

Instructions: 

The Holoscopic micro-gesture recognition (HoMG) database consists of three hand gestures, Button, Dial, and Slider, performed by 40 subjects of various ages under different settings, including right and left hands and two recording distances.

For the video subset: there are 40 subjects, each with 24 videos arising from the different settings and the three gestures. Videos were recorded at 25 frames per second, and their lengths vary from a few seconds to 20 seconds. The dataset is divided into three parts: 20 subjects for the training set, 10 subjects for the development set, and 10 subjects for the testing set.
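
As a rough illustration of this subject-level split, the short Python sketch below partitions 40 subject IDs into 20/10/10 training, development, and testing groups and checks that the resulting video counts add up to 960. The IDs (1 to 40) and the contiguous assignment are assumptions made only for illustration; the official partition lists are not reproduced here.

# Illustrative subject-level split for the HoMG video subset.
# ASSUMPTION: subject IDs 1..40 and a contiguous assignment are used
# purely for illustration; the official partition lists may differ.
subject_ids = list(range(1, 41))            # 40 participants

train_subjects = subject_ids[:20]           # 20 subjects for training
dev_subjects = subject_ids[20:30]           # 10 subjects for development
test_subjects = subject_ids[30:]            # 10 subjects for testing

# Each subject has 24 videos (recording settings x three gestures),
# so the split yields 480 / 240 / 240 videos, i.e. 960 in total.
videos_per_subject = 24
assert videos_per_subject * len(subject_ids) == 960
print(len(train_subjects) * videos_per_subject,
      len(dev_subjects) * videos_per_subject,
      len(test_subjects) * videos_per_subject)   # 480 240 240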

For the image subset: video captures the motion information of a micro-gesture and is a good basis for micro-gesture recognition. From each video recording, a varying number of frames was selected as still micro-gesture images at a resolution of 1920 by 1080; in total, 30,635 images were selected. The dataset is split into three partitions: training, development, and testing. The training partition contains 15,237 images from 20 participants (8,364 at close distance and 6,853 at far distance); the development partition contains 6,956 images from 10 participants (3,077 at close distance and 3,879 at far distance); the testing partition contains 8,442 images from 10 participants (3,930 at close distance and 4,512 at far distance).
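
For quick reference, the published image counts per partition and recording distance can be kept in a small lookup structure, as sketched below in Python; the dictionary layout is a convenience for illustration, not part of the dataset distribution.

# Published image counts for the HoMG image subset, per partition and
# recording distance, as stated in the description above.
IMAGE_COUNTS = {
    "training":    {"participants": 20, "close": 8364, "far": 6853, "total": 15237},
    "development": {"participants": 10, "close": 3077, "far": 3879, "total": 6956},
    "testing":     {"participants": 10, "close": 3930, "far": 4512, "total": 8442},
}

# Sanity check: the three partition totals sum to the full image subset.
assert sum(p["total"] for p in IMAGE_COUNTS.values()) == 30635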


Food recognition

Instructions: 

The dataset consists of 222,430 training and 55,096 testing images belonging to two classes. For its preparation, we used images from existing datasets: UECFOOD256, Caltech 256, Instagram images, the Flickr image dataset, Food101, a Malaysian food dataset (gathered and crawled by us), the Indoor Scene Recognition dataset, and the 15-Scene dataset.
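
Assuming the images are unpacked into per-split, per-class folders (a layout this page does not specify), a minimal Python sketch for tallying the food/non-food image counts could look like the following; the directory names and structure are hypothetical.

from pathlib import Path

# ASSUMPTION: a hypothetical directory layout such as
#   dataset/train/food/*.jpg, dataset/train/non_food/*.jpg,
#   dataset/test/food/*.jpg,  dataset/test/non_food/*.jpg
# The actual archive structure may differ.
ROOT = Path("dataset")

def count_images(split: str) -> dict:
    """Count .jpg images per class folder for one split."""
    return {
        class_dir.name: sum(1 for _ in class_dir.glob("*.jpg"))
        for class_dir in (ROOT / split).iterdir()
        if class_dir.is_dir()
    }

for split in ("train", "test"):
    counts = count_images(split)
    # Expected grand totals from the description: 222430 (train), 55096 (test).
    print(split, counts, "total =", sum(counts.values()))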

Please cite our work only for food/non-food detection; for classification problems on the individual source datasets, please cite and use those datasets directly.


The Basil/Tulsi plant is cultivated in India partly for its spiritual significance and is used for essential oils and pharmaceutical purposes. Two types of basil are cultivated in India: Krushna Tulsi (black tulsi) and Ram Tulsi (green tulsi). Many investigators work on disease detection in basil leaves; the diseases covered are listed in the instructions below.

Instructions: 

Many investigators work on disease detection in basil leaves, where the following diseases occur:

1) Gray Mold

2) Basal Root Rot, Damping Off

3) Fusarium Wilt and Crown Rot

4) Leaf Spot

5) Downy Mildew

The quality parameter is Healthy/Diseased, and classification is also based on the texture and color of the leaves. For object detection, researchers use algorithms and tools such as YOLO, TensorFlow, OpenCV, and deep-learning CNNs.

The dataset was collected from the Amravati, Pune, and Nagpur regions of Maharashtra state; the images are in .jpg format.
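
Since the instructions mention CNN-based classification of healthy versus diseased leaves with tools such as TensorFlow, a minimal Keras sketch is given below. The folder layout (one subfolder of .jpg images per class under basil_leaves/) and all hyperparameters are assumptions for illustration, not part of the dataset.

import tensorflow as tf

# ASSUMPTION: images arranged as basil_leaves/<class_name>/*.jpg
# (e.g. healthy/, diseased/); layout and hyperparameters are illustrative.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "basil_leaves",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=(224, 224),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),           # normalize pixel values
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),                       # healthy vs. diseased logits
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=5)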


This file includes the code and data for the paper "Dynamic radiomics: a new methodology to extract quantitative time-related features from tomographic images".


As part of the 2018 IEEE GRSS Data Fusion Contest, the Hyperspectral Image Analysis Laboratory and the National Center for Airborne Laser Mapping (NCALM) at the University of Houston are pleased to release a unique multi-sensor optical geospatial dataset representing a challenging urban land-cover/land-use classification task. The data were acquired by NCALM over the University of Houston campus and its neighborhood on February 16, 2017, between 16:31 and 18:18 GMT.

Instructions: 

Data files, as well as training and testing ground truth, are provided in the enclosed zip file.


The BTH Trucks in Aerial Images dataset contains videos from 17 flights over the parking spaces of two industrial harbors, recorded over two years.

Instructions: 

If you use the provided data in a publication or scientific paper, please cite the dataset accordingly.
