To increase the diversity of signal datasets, we created a new dataset called HisarMod, which includes 26 modulation classes from 5 different modulation families, passed through 5 different wireless communication channels. During the generation of the dataset, MATLAB 2017a was employed for creating the random bit sequences, symbols, and wireless fading channels.
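For readers who want to reproduce the pipeline outside MATLAB, the following is a minimal Python/NumPy sketch of the same three steps (random bits, modulated symbols, fading channel); the QPSK modulation, 10 dB SNR, and flat Rayleigh channel are illustrative assumptions, not the dataset's exact parameters.

import numpy as np

rng = np.random.default_rng(0)

def qpsk_symbols(bits):
    # Map pairs of bits to Gray-coded QPSK symbols with unit energy.
    b = bits.reshape(-1, 2)
    i = 1 - 2 * b[:, 0]   # in-phase:   0 -> +1, 1 -> -1
    q = 1 - 2 * b[:, 1]   # quadrature: 0 -> +1, 1 -> -1
    return (i + 1j * q) / np.sqrt(2)

# Random bit sequence -> symbols.
bits = rng.integers(0, 2, 2048)
syms = qpsk_symbols(bits)

# Flat Rayleigh fading (one complex Gaussian tap per symbol) plus AWGN at 10 dB SNR.
h = (rng.standard_normal(syms.size) + 1j * rng.standard_normal(syms.size)) / np.sqrt(2)
noise_std = np.sqrt(10 ** (-10 / 10) / 2)
noise = noise_std * (rng.standard_normal(syms.size) + 1j * rng.standard_normal(syms.size))
received = h * syms + noise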

 

Instructions: 

Documentation will be available soon.


Oxygen is one of the most adverse gases contaminating sterile drugs in glass medicine bottles, so detecting the oxygen concentration in glass medicine bottles is of great significance for ensuring the asepsis of the drug and the stability of its ingredients. Wavelength modulation spectroscopy (WMS) is applied to achieve online oxygen concentration detection by single-line spectrum analysis, owing to its advantages of non-contact measurement and high sensitivity.


7200 .csv files, each containing a 10 kHz recording of a 1 ms, 100 Hz sound, recorded centimeter-wise over a 20 cm x 60 cm locating range on a table. 3600 files (3 at each of the 1200 positions) were recorded without an obstacle between the loudspeaker and the microphone; the other 3600 room impulse response (RIR) recordings are affected by the presence of an object (a book). The OOLA is initially trained offline in batch mode on the first instance of the RIR recordings without the book. It then learns online, in an incremental mode, how the book changes the RIR.

Instructions: 

folder 'load and preprocess offline data': MATLAB source code and raw/working offline (no additional obstacle) data files

folder 'lvq and kmeans test': MATLAB source code to test and compare in-sample failure with and without LVQ

folder 'online data load and preprocess': MATLAB source code and raw/working online (additional obstacle) data files

folder 'OOL': MATLAB source code configurable for cases 1-4

folder 'OOL2': MATLAB source code for case 5

folder 'plots': plots and simulations
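As a minimal loading sketch in Python (the dataset's own tooling is MATLAB, and the per-position file layout below is hypothetical), the offline recordings can be stacked into one matrix:

import glob
import numpy as np

# Each .csv file holds one 10 kHz recording; stack all offline recordings row-wise.
paths = sorted(glob.glob('load and preprocess offline data/*.csv'))
recordings = [np.loadtxt(p, delimiter=',') for p in paths]
X = np.vstack(recordings)   # rows: recordings, columns: samples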


The purpose of this challenge is to provide a standardized framework for assessing and benchmarking deep learning approaches to ultrasound image formation from ultrasound channel data, one that will live beyond the challenge itself.


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To this end, we introduced a large-scale (~1M images) object recognition dataset (CURE-OR), which is among the most comprehensive datasets with controlled synthetic challenging conditions.

Instructions: 

 

 

Image name format:

"backgroundID_deviceID_objectOrientationID_objectID_challengeType_challengeLevel.jpg"

 

backgroundID:

1: White
2: Texture 1 - living room
3: Texture 2 - kitchen
4: 3D 1 - living room
5: 3D 2 - office

 

 

objectOrientationID:

1: Front (0°)
2: Left side (90°)
3: Back (180°)
4: Right side (270°)
5: Top

 

 

objectID:

1-100

 

 

challengeType:

01: No challenge
02: Resize
03: Underexposure
04: Overexposure
05: Gaussian blur
06: Contrast
07: Dirty lens 1
08: Dirty lens 2
09: Salt & pepper noise
10: Grayscale
11: Grayscale resize
12: Grayscale underexposure
13: Grayscale overexposure
14: Grayscale Gaussian blur
15: Grayscale contrast
16: Grayscale dirty lens 1
17: Grayscale dirty lens 2
18: Grayscale salt & pepper noise

challengeLevel:

A number between [0, 5], where 0 indicates no challenge, 1 the least severe, and 5 the most severe challenge. Challenge types 01 (no challenge) and 10 (grayscale) have a level of 0 only. Challenge types 02 (resize) and 11 (grayscale resize) have 4 levels (1 through 4). All other challenge types have levels 1 to 5.
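A small helper of our own (not part of the dataset's tooling) can split an image name into these six ID fields:

def parse_cure_or(filename):
    # 'backgroundID_deviceID_objectOrientationID_objectID_challengeType_challengeLevel.jpg'
    stem = filename.rsplit('.', 1)[0]
    keys = ('backgroundID', 'deviceID', 'objectOrientationID',
            'objectID', 'challengeType', 'challengeLevel')
    return dict(zip(keys, (int(f) for f in stem.split('_'))))

print(parse_cure_or('5_2_1_42_09_03.jpg'))
# {'backgroundID': 5, 'deviceID': 2, 'objectOrientationID': 1,
#  'objectID': 42, 'challengeType': 9, 'challengeLevel': 3}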

 

 

 


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To this end, we introduced a large-scale (~1.72M frames) traffic sign detection video dataset (CURE-TSD), which is among the most comprehensive datasets with controlled synthetic challenging conditions.

Instructions: 

The name format of the video files is as follows: “sequenceType_sequenceNumber_challengeSourceType_challengeType_challengeLevel.mp4”

• sequenceType: 01 – Real data 02 – Unreal data

• sequenceNumber: A number between [01 – 49]

• challengeSourceType: 00 – No challenge source (which means no challenge) 01 – After effect

• challengeType: 00 – No challenge 01 – Decolorization 02 – Lens blur 03 – Codec error 04 – Darkening 05 – Dirty lens 06 – Exposure 07 – Gaussian blur 08 – Noise 09 – Rain 10 – Shadow 11 – Snow 12 – Haze

• challengeLevel: A number between [01 – 05] where 01 is the least severe and 05 is the most severe challenge.
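A helper of our own to unpack a video file name into these five fields (values kept as zero-padded strings):

def parse_cure_tsd_video(name):
    keys = ('sequenceType', 'sequenceNumber', 'challengeSourceType',
            'challengeType', 'challengeLevel')
    return dict(zip(keys, name.rsplit('.', 1)[0].split('_')))

print(parse_cure_tsd_video('01_23_01_09_04.mp4'))
# {'sequenceType': '01', 'sequenceNumber': '23', 'challengeSourceType': '01',
#  'challengeType': '09', 'challengeLevel': '04'}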

Test Sequences

We split the video sequences into a 70% training set and a 30% test set. The sequence numbers corresponding to the test set are given below:

[01_04_x_x_x, 01_05_x_x_x, 01_06_x_x_x, 01_07_x_x_x, 01_08_x_x_x, 01_18_x_x_x, 01_19_x_x_x, 01_21_x_x_x, 01_24_x_x_x, 01_26_x_x_x, 01_31_x_x_x, 01_38_x_x_x, 01_39_x_x_x, 01_41_x_x_x, 01_47_x_x_x, 02_02_x_x_x, 02_04_x_x_x, 02_06_x_x_x, 02_09_x_x_x, 02_12_x_x_x, 02_13_x_x_x, 02_16_x_x_x, 02_17_x_x_x, 02_18_x_x_x, 02_20_x_x_x, 02_22_x_x_x, 02_28_x_x_x, 02_31_x_x_x, 02_32_x_x_x, 02_36_x_x_x]

The videos with all other sequence numbers are in the training set. Note that “x” above refers to the variations listed earlier.
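Based on the list above, a train/test membership check can be sketched as:

# (sequenceType, sequenceNumber) pairs of the test set, taken from the list above.
TEST_SEQUENCES = {
    ('01', '04'), ('01', '05'), ('01', '06'), ('01', '07'), ('01', '08'),
    ('01', '18'), ('01', '19'), ('01', '21'), ('01', '24'), ('01', '26'),
    ('01', '31'), ('01', '38'), ('01', '39'), ('01', '41'), ('01', '47'),
    ('02', '02'), ('02', '04'), ('02', '06'), ('02', '09'), ('02', '12'),
    ('02', '13'), ('02', '16'), ('02', '17'), ('02', '18'), ('02', '20'),
    ('02', '22'), ('02', '28'), ('02', '31'), ('02', '32'), ('02', '36'),
}

def is_test_video(name):
    seq_type, seq_num = name.split('_')[:2]
    return (seq_type, seq_num) in TEST_SEQUENCES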

The name format of the annotation files is as follows: “sequenceType_sequenceNumber.txt”

Challenge source type, challenge type, and challenge level do not affect the annotations. Therefore, video sequences that share the same sequence type and sequence number have the same annotations.

• sequenceType: 01 – Real data 02 – Unreal data

• sequenceNumber: A number between [01 – 49]

The format of each line in an annotation file (txt) is: “frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury”. You can see a visual coordinate system example on our GitHub page.

• frameNumber: A number between [001 – 300]

• signType: 01 – speed_limit 02 – goods_vehicles 03 – no_overtaking 04 – no_stopping 05 – no_parking 06 – stop 07 – bicycle 08 – hump 09 – no_left 10 – no_right 11 – priority_to 12 – no_entry 13 – yield 14 – parking
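A parsing sketch of our own for one annotation line, returning the frame number, sign type, and the four corner points (ll, lr, ul, ur):

def parse_annotation_line(line):
    parts = [int(p) for p in line.strip().split('_')]
    frame, sign = parts[0], parts[1]
    llx, lly, lrx, lry, ulx, uly, urx, ury = parts[2:]
    return frame, sign, {'ll': (llx, lly), 'lr': (lrx, lry),
                         'ul': (ulx, uly), 'ur': (urx, ury)}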


This data set contains 50 low-resolution (640 x 360) short videos containing a variety of real-life activities.


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed.

Instructions: 

The name format of the provided images is as follows: "sequenceType_signType_challengeType_challengeLevel_Index.bmp"

  • sequenceType: 01 - Real data 02 - Unreal data

  • signType: 01 - speed_limit 02 - goods_vehicles 03 - no_overtaking 04 - no_stopping 05 - no_parking 06 - stop 07 - bicycle 08 - hump 09 - no_left 10 - no_right 11 - priority_to 12 - no_entry 13 - yield 14 - parking

  • challengeType: 00 - No challenge 01 - Decolorization 02 - Lens blur 03 - Codec error 04 - Darkening 05 - Dirty lens 06 - Exposure 07 - Gaussian blur 08 - Noise 09 - Rain 10 - Shadow 11 - Snow 12 - Haze

  • challengeLevel: A number between [01-05] where 01 is the least severe and 05 is the most severe challenge.

  • Index: A number distinguishing different instances of traffic signs captured under the same conditions.
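The fields can be unpacked in the same way as for the video files, e.g. with this helper of ours:

def parse_cure_tsd_image(name):
    keys = ('sequenceType', 'signType', 'challengeType', 'challengeLevel', 'Index')
    return dict(zip(keys, name.rsplit('.', 1)[0].split('_')))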


This folder contains two csv files and one .py file. The first csv file contains NIST ground PV plant data imported from https://pvdata.nist.gov/; it holds 902 days of raw data comprising PV plant POA irradiance, ambient temperature, inverter DC current, DC voltage, AC current, and AC voltage. The second csv file contains user-created data. The Python file imports the two csv files and executes four proposed corrupt-data detection methods on the NIST ground PV plant data.
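A minimal sketch of the import step in Python, with hypothetical file and column names (the actual csv headers may differ), plus one example sanity check:

import pandas as pd

raw = pd.read_csv('nist_ground_pv.csv')        # hypothetical file name
user = pd.read_csv('user_created_data.csv')    # hypothetical file name

# Example sanity check: DC power computed from current and voltage should be non-negative.
raw['dc_power'] = raw['dc_current'] * raw['dc_voltage']   # assumed column names
suspect = raw[raw['dc_power'] < 0]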


The Multi-modal Exercises Dataset is a multi-sensor, multi-modal dataset, implemented to benchmark Human Activity Recognition (HAR) and multi-modal fusion algorithms. Collection of this dataset was inspired by the need to recognise and evaluate the quality of exercise performance in order to support patients with Musculoskeletal Disorders (MSD). The MEx dataset contains data from 25 people recorded with four sensors: 2 accelerometers, a pressure mat and a depth camera.

Instructions: 

The MEx Multi-modal Exercise dataset contains data of 7 different physiotherapy exercises, performed by 30 subjects recorded with 2 accelerometers, a pressure mat and a depth camera.

Application

The dataset can be used for exercise recognition, exercise quality assessment and exercise counting, by developing algorithms for pre-processing, feature extraction, multi-modal sensor fusion, segmentation and classification.

 

Data collection method

Each subject was given a sheet with instructions for the 7 exercises at the beginning of the session. At the beginning of each exercise the researcher demonstrated it to the subject, and the subject then performed the exercise for a maximum of 60 seconds while being recorded with the four sensors. During the recording, the researcher did not give any advice, keep count, or keep time to enforce a rhythm.

 

Sensors

Orbbec Astra Depth Camera

- sampling frequency: 15 Hz

- frame size: 240x320

Sensing Tex Pressure Mat

- sampling frequency: 15 Hz

- frame size: 32x16

Axivity AX3 3-Axis Logging Accelerometer

- sampling frequency: 100 Hz

- range: ±8 g

 

Sensor Placement

All the exercises were performed lying down on the mat while the subject wore two accelerometers on the wrist and the thigh. The depth camera was placed above the subject, facing downwards to record an aerial view. The top of the depth camera frame was aligned with the top of the pressure mat frame and the subject’s shoulders, such that the face is not included in the depth camera video.

 

Data folder

The MEx folder has four folders, one for each sensor. Inside each sensor folder, 30 folders can be found, one for each subject. Each subject folder contains 8 files, one per exercise, with 2 files for exercise 4 as it is performed on two sides. (User 22 has only 7 files, as they performed exercise 4 on only one side.) Each line in a data file corresponds to one timestamped sensor reading.

 

Attribute Information

 

The 4 columns in the act (thigh accelerometer) and acw (wrist accelerometer) files are organized as follows:

1 – timestamp

2 – x value

3 – y value

4 – z value

Min value = -8

Max value = +8
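A reading sketch for one act/acw file in Python, assuming comma-separated values and a hypothetical path:

import pandas as pd

acc = pd.read_csv('act/01/02.csv', header=None,           # hypothetical path
                  names=['timestamp', 'x', 'y', 'z'])
assert acc[['x', 'y', 'z']].abs().max().max() <= 8        # values lie in [-8, +8]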

 

The 513 columns in the pm file are organized as follows:

1 - timestamp

2-513 – pressure mat data frame (32x16)

Min value – 0

Max value – 1
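Columns 2-513 of each pm row can be reshaped back into the 32x16 pressure frame; a sketch with a hypothetical path:

import pandas as pd

pm = pd.read_csv('pm/01/02.csv', header=None)   # hypothetical path
frames = pm.iloc[:, 1:].to_numpy().reshape(-1, 32, 16)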

 

The 193 columns in the dc file are organized as follows:

1 - timestamp

2-193 – depth camera data frame (12x16)

 

The dc data frame is scaled down from 240x320 to 12x16 using the OpenCV resize algorithm.

Min value – 0

Max value – 1
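The stored dc frames are already 12x16; the sketch below only illustrates the stated down-scaling of a raw 240x320 depth frame with OpenCV (note that cv2.resize takes the target size as (width, height)):

import cv2
import numpy as np

raw_frame = np.random.rand(240, 320).astype(np.float32)  # stand-in for a raw depth frame
small = cv2.resize(raw_frame, (16, 12))                   # dsize is (width, height)
assert small.shape == (12, 16)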

