This is a supplementary data file providing the data used to evaluate the performance of our 3D fully convolutional neural network, which removes reverberation noise from ultrasound channel data. The dataset consists of ultrasound channel data simulated in Field II Pro, with artificial reverberation and thermal noise added. This dataset will be linked to our publication once it is accepted.
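For reference, thermal noise is commonly modeled as additive white Gaussian noise at a target SNR. The sketch below illustrates that step only; the array name and SNR value are illustrative assumptions, and the reverberation model used for the actual dataset is not reproduced here.

    import numpy as np

    def add_thermal_noise(channel_data: np.ndarray, snr_db: float) -> np.ndarray:
        """Add white Gaussian noise at a given SNR (a common thermal-noise model)."""
        signal_power = np.mean(channel_data ** 2)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        noise = np.random.normal(0.0, np.sqrt(noise_power), channel_data.shape)
        return channel_data + noise

    # Example with a hypothetical array of simulated channel data:
    # noisy = add_thermal_noise(simulated_channels, snr_db=20.0)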


The Holoscopic micro-gesture recognition (HoMG) database was recorded using a holoscopic 3D camera and contains three conventional gestures from 40 participants under different settings and conditions. Holoscopic 3D (H3D) imaging mimics the fly's-eye technique, capturing a true 3D optical model of the scene using a microlens array. For H3D micro-gesture recognition, the HoMG database has two subsets: a video subset with 960 videos and an image subset with 30635 images, both covering three types of micro-gestures (classes).

Instructions: 

The Holoscopic micro-gesture recognition (HoMG) database consists of three hand gestures, Button, Dial, and Slider, recorded from 40 subjects of various ages under different settings, including right and left hands and two recording distances.

For the video subset: there are 40 subjects, and each subject has 24 videos arising from the different settings and the three gestures. Each video was recorded at 25 frames per second, and video lengths vary from a few seconds to 20 seconds. The whole dataset was divided into three parts: 20 subjects for the training set, 10 subjects for the development set, and the remaining 10 subjects for the testing set.

For the image subset: video captures the motion information of a micro-gesture and is well suited to micro-gesture recognition. From each video recording, a varying number of frames was selected as still micro-gesture images at a resolution of 1920 by 1080; in total, 30635 images were selected. The whole dataset was split into three partitions: Training, Development, and Testing. The training subset contains 15237 images from 20 participants (8364 at close distance and 6853 at far distance); the development subset contains 6956 images from 10 participants (3077 close and 3879 far); the testing subset contains 8442 images from 10 participants (3930 close and 4512 far).
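As a minimal illustration, per-partition and per-distance image counts could be verified with a short script; the folder layout below is a hypothetical assumption, not the official organization of the release.

    from pathlib import Path

    # Hypothetical layout: <root>/<partition>/<distance>/.../<image>.jpg
    root = Path("HoMG/images")

    for partition in ("training", "development", "testing"):
        for distance in ("close", "far"):
            n = len(list((root / partition / distance).rglob("*.jpg")))
            print(f"{partition:12s} {distance:5s} {n:6d} images")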


Food recognition

Instructions: 

The data consist of 222430 training and 55096 testing images belonging to 2 classes. To prepare this dataset, we used images from the existing datasets UECFOOD256, Caltech 256, Instagram Images, the Flickr Image Dataset, Food101, a Malaysian Food Dataset (gathered and crawled by us), the Indoor Scene Recognition Dataset, and the 15-Scene Dataset.
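A minimal sketch of loading such a two-class food/non-food image set, assuming a hypothetical directory layout compatible with torchvision's ImageFolder; the folder and class names are illustrative, not part of the official release.

    import torch
    from torchvision import datasets, transforms

    # Hypothetical layout: food_nonfood/{train,test}/{food,non_food}/*.jpg
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("food_nonfood/train", transform=tfm)
    test_set = datasets.ImageFolder("food_nonfood/test", transform=tfm)

    train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
    print(train_set.classes)  # e.g. ['food', 'non_food']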

Please cite only our work for Food/Non-Food detection; for classification problems on the individual datasets, please cite and use those datasets directly.


We study the ability of neural networks to steer or control trajectories of dynamical systems on graphs, which we represent with neural ordinary differential equations (neural ODEs). To do so, we introduce a neural-ODE control (NODEC) framework and find that it can learn control signals that drive graph dynamical systems into desired target states. While we use loss functions that do not constrain the control energy, our results show that NODEC produces control signals that are highly correlated with optimal (or minimum energy) control signals.
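A minimal sketch of the NODEC idea on a toy linear graph system, assuming the torchdiffeq package; the dynamics, network sizes, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

    import torch
    import torch.nn as nn
    from torchdiffeq import odeint  # pip install torchdiffeq

    # Toy controlled dynamics x' = A x + u(t, x), with the control signal u
    # produced by a small neural network trained to reach a target state.
    n = 8
    A = -torch.eye(n) + 0.1 * torch.rand(n, n)   # illustrative graph coupling
    x0, x_target = torch.zeros(n), torch.ones(n)

    control = nn.Sequential(nn.Linear(n + 1, 32), nn.Tanh(), nn.Linear(32, n))

    class Dynamics(nn.Module):
        def forward(self, t, x):
            u = control(torch.cat([x, t.reshape(1)]))
            return x @ A.T + u

    dyn = Dynamics()
    t = torch.linspace(0.0, 1.0, 20)
    opt = torch.optim.Adam(control.parameters(), lr=1e-2)

    for step in range(200):
        opt.zero_grad()
        x = odeint(dyn, x0, t)                   # integrate the controlled system
        loss = ((x[-1] - x_target) ** 2).mean()  # reach the target state
        loss.backward()
        opt.step()

Note that the loss constrains only the final state, mirroring the unconstrained-energy setup described above.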


In this work, physical-parameter-based modeling of small-signal parameters for a metal-semiconductor field-effect transistor (MESFET) has been carried out as continuous functions of drain voltage, gate voltage, frequency, and gate width. For this purpose, a device simulator was used to generate a large dataset whose physical device parameters included material type, doping concentration and profile, contact type, gate length, gate width, and work function.
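As an illustration of such continuous-function modeling, one could fit a regressor mapping (drain voltage, gate voltage, frequency, gate width) to a small-signal parameter; the random data below are placeholders so the snippet runs standalone, not the simulator output.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Placeholder inputs: Vd [V], Vg [V], frequency [Hz], gate width [um]
    X = rng.uniform([0.0, -2.0, 1e8, 50.0], [5.0, 0.0, 1e10, 400.0], size=(1000, 4))
    y = rng.normal(size=1000)  # placeholder small-signal parameter values

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000))
    model.fit(X, y)
    y_pred = model.predict(X[:5])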


IEEE Access: "A Process-Aware Memory Compact-Device Model Using Long Short-Term Memory"


The Basil/Tulsi plant is cultivated in India both for its spiritual significance and for essential-oil and pharmaceutical uses. Two types of Basil plant are cultivated in India: Krushna Tulsi (Black Tulsi) and Ram Tulsi (Green Tulsi).

Many investigators work on disease detection in Basil leaves, where the following diseases occur:

1) Gray Mold

2) Basal Root Rot, Damping Off

3) Fusarium Wilt and Crown Rot

4) Leaf Spot

5) Downy Mildew

Instructions: 


Quality parameters (Healthy/Diseased) are assessed, and leaves are also classified based on texture and color. For object detection, researchers use algorithms and tools such as YOLO, TensorFlow, OpenCV, deep learning, and CNNs.

The dataset was collected from the Amravati, Pune, and Nagpur regions of Maharashtra state; the images are in .jpg format.


Description:

This repository contains the datasets used in the OC2 Lab's work on student performance prediction and student engagement prediction in eLearning environments using machine-learning methods.


The PD-BioStampRC21 dataset provides data from a wearable-sensor accelerometry study conducted to investigate activity, gait, tremor, and other motor symptoms in individuals with Parkinson's disease (PD). In addition to individuals with PD, the dataset includes data from controls who went through the same study protocol as the PD participants. Data were acquired using lightweight MC10 BioStamp RC sensors (MC10 Inc., Lexington, MA), five of which were attached to each participant to gather data over a roughly two-day interval.

Instructions: 

Users of the dataset should cite the following paper:

Jamie L. Adams, Karthik Dinesh, Christopher W. Snyder, Mulin Xiong, Christopher G. Tarolli, Saloni Sharma, E. Ray Dorsey, Gaurav Sharma, "A real-world study of wearable sensors in Parkinson’s disease". Submitted.

where an overview of the study protocol is also provided. Additional detail specific to the dataset and file naming conventions is provided here.

The dataset comprises two main components: (I) sensor and UPDRS-assessment-task annotation data for each participant and (II) demographic and clinical assessment data for all participants. Each is described in turn below:

I) Sensor and UPDRS-assessment-task annotation data:

For each participant, the sensor accelerometry and UPDRS-assessment-task annotation data are provided as a zip file, for instance ParticipantID018DataPDBioStampRC.zip for participant ID 018. Unzipping the file generates a folder named after the participant ID (for example, 018) that contains the data organized as the following files. Times and timestamps are consistently reported in milliseconds, starting from the instant of the earliest sensor recording (for the first sensor applied to the participant).

a) Accelerometer sensor data files (CSV) corresponding to the five sensor placement locations, which are abbreviated as follows:

   1) Trunk (chest)                  - abbreviated as "ch"

   2) Left anterior thigh           - abbreviated as "ll"

   3) Right anterior thigh        - abbreviated as "rl"

   4) Left anterior forearm      - abbreviated as "lh"

   5) Right anterior forearm    - abbreviated as "rh"

   Example file name for accelerometer sensor data files:

   "AbbreviatedSensorLocation"_ID"ParticipantID"Accel.csv

   E.g. ch_ID018Accel.csv, ll_ID018Accel.csv, rl_ID018Accel.csv, lh_ID018Accel.csv, and rh_ID018Accel.csv

   File format for the accelerometer sensor data files: comprises four columns that provide a timestamp for each measurement and the corresponding triaxial acceleration relative to the sensor coordinate system.

   Column 1: "Timestamp (ms)"  - Time in milliseconds

   Column 2: "Accel X (g)"     - Acceleration in the X direction (in units of g = 9.8 m/s^2)

   Column 3: "Accel Y (g)"     - Acceleration in the Y direction (in units of g = 9.8 m/s^2)

   Column 4: "Accel Z (g)"     - Acceleration in the Z direction (in units of g = 9.8 m/s^2)
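A minimal loading sketch for one accelerometer file, assuming pandas and the file-naming convention above:

    import pandas as pd

    # Chest-sensor accelerometer data for participant 018.
    acc = pd.read_csv("018/ch_ID018Accel.csv")

    acc["t_s"] = acc["Timestamp (ms)"] / 1000.0   # time in seconds
    axes = ["Accel X (g)", "Accel Y (g)", "Accel Z (g)"]
    acc["mag_g"] = (acc[axes] ** 2).sum(axis=1) ** 0.5   # acceleration magnitude
    print(acc.head())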

b) Annotation file (CSV). This file provides tagging annotations for the sensor data that identify, via start and end timestamps, the durations of various clinical assessments performed in the study.

   Example file name for annotation file:

   AnnotID"ParticipantID".csv

   E.g. AnnotID018.csv 

    File format for the annotation file: comprises four columns

   Column 1: "Event Type"               - List of in-clinic MDS-UPDRS assessments. Each assessment comprises two queries: medication status and MDS-UPDRS assessment body location

   Column 2: "Start Timestamp (ms)"     - Start timestamp for the MDS-UPDRS assessment

   Column 3: "Stop Timestamp (ms)"      - Stop timestamp for the MDS-UPDRS assessment

   Column 4: "Value"                    - Responses to the queries in Column 1: medication status (OFF/ON) and MDS-UPDRS assessment body location (e.g., RIGHT HAND, NECK)
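A sketch of slicing the sensor stream into per-assessment segments using these annotations (pandas assumed, participant 018 as the example):

    import pandas as pd

    acc = pd.read_csv("018/ch_ID018Accel.csv")
    annot = pd.read_csv("018/AnnotID018.csv")

    # Extract the samples recorded during each annotated assessment interval.
    for _, row in annot.iterrows():
        seg = acc[(acc["Timestamp (ms)"] >= row["Start Timestamp (ms)"]) &
                  (acc["Timestamp (ms)"] <= row["Stop Timestamp (ms)"])]
        print(row["Event Type"], row["Value"], len(seg), "samples")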

II) Demographic and clinical assessment data:

For all participants, the demographic and clinical assessment data are provided as a zip file "Clinic_DataPDBioStampRCStudy.zip". Unzipping the file generates a CSV file named Clinic_DataPDBioStampRCStudy.csv.

File format for the demographic and clinical assessment data file: comprises 19 columns

Column 1: "ID"            - Participant ID

Column 2: "Sex"           - Participant sex (Male/Female)

Column 3: "Status"        - Participant disease status (PD/Control)

Column 4: "Age"           - Participant age

Column 5: "updrs_3_17a"   - Rest tremor amplitude (RUE - Right Upper Extremity)

Column 6: "updrs_3_17b"   - Rest tremor amplitude (LUE - Left Upper Extremity)

Column 7: "updrs_3_17c"   - Rest tremor amplitude (RLE - Right Lower Extremity)

Column 8: "updrs_3_17d"   - Rest tremor amplitude (LLE - Left Lower Extremity)

Column 9: "updrs_3_17e"   - Rest tremor amplitude (Lip/Jaw)

Column 10 - Column 14: "updrs_3_17a_off" - "updrs_3_17e_off" - Rest tremor amplitude during the OFF-medication assessment (ordering as in Columns 5-9)

Column 15 - Column 19: "updrs_3_17a_on" - "updrs_3_17e_on"   - Rest tremor amplitude during the ON-medication assessment (ordering as in Columns 5-9)
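A sketch of loading the clinical file and comparing OFF- vs ON-medication rest-tremor scores; the "PD" status label is assumed from the Column 3 description above.

    import pandas as pd

    clin = pd.read_csv("Clinic_DataPDBioStampRCStudy.csv")
    pd_rows = clin[clin["Status"] == "PD"]   # assumed label, per Column 3

    off_cols = [f"updrs_3_17{c}_off" for c in "abcde"]
    on_cols = [f"updrs_3_17{c}_on" for c in "abcde"]
    print(pd_rows[off_cols].mean().values - pd_rows[on_cols].mean().values)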

For details about different MDS-UPDRS assessments and scoring schemes, the reader is referred to:

Goetz, C. G. et al. Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS): scale presentation and clinimetric testing results. Mov Disord 23, 2129-2170, doi:10.1002/mds.22340 (2008)   


As part of the 2018 IEEE GRSS Data Fusion Contest, the Hyperspectral Image Analysis Laboratory and the National Center for Airborne Laser Mapping (NCALM) at the University of Houston are pleased to release a unique multi-sensor optical geospatial dataset representing a challenging urban land-cover/land-use classification task. The data were acquired by NCALM over the University of Houston campus and its neighborhood on February 16, 2017, between 16:31 and 18:18 GMT.

Instructions: 

Data files, as well as training and testing ground truth, are provided in the enclosed zip file.

