Abstract: Recent advances in computer vision and deep learning have allowed researchers to develop environment recognition systems for control of robotic leg prostheses and exoskeletons. However, small-scale and private training datasets have impeded the widespread development and dissemination of image classification algorithms (e.g., convolutional neural networks) for recognizing human walking environments.

Instructions: 

Details on the ExoNet database are provided in the references above. Please email Brokoslaw Laschowski (blaschow@uwaterloo.ca) for any additional questions or technical assistance.


Damage detection on road surfaces has been an active area of research, but most studies have so far focused on detecting the presence of damage. In real-world scenarios, however, road managers need a clear understanding of the type and extent of the damage in order to take effective action in advance or to allocate the necessary resources. Moreover, few uniform and openly available road damage datasets currently exist, so there is no common benchmark for road damage detection.


The 2020 Data Fusion Contest, organized by the Image Analysis and Data Fusion Technical Committee (IADF TC) of the IEEE Geoscience and Remote Sensing Society (GRSS) and the Technical University of Munich, aims to promote research in large-scale land cover mapping based on weakly supervised learning from globally available multimodal satellite data. The task is to train a machine learning model for global land cover mapping based on weakly annotated samples.


Instructions: 

The data in xssed.csv come from XSSed (http://www.XSSed.com).

The data in normal_example.csv come from DMOZ (http://www.dmoztools.net/).

The data are URLs; IP addresses and domain names have all been removed.
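As a minimal loading sketch (assuming pandas, and assuming each CSV holds one URL per row with no header; the column and label names below are hypothetical), the two files could be combined for a binary classification task:

    # Minimal sketch: combine the two CSVs into one labeled table.
    # Assumption: each file is one URL per row with no header row.
    import pandas as pd

    xss = pd.read_csv("xssed.csv", header=None, names=["url"])
    normal = pd.read_csv("normal_example.csv", header=None, names=["url"])

    xss["label"] = 1      # XSS samples from XSSed
    normal["label"] = 0   # benign samples from DMOZ

    data = pd.concat([xss, normal], ignore_index=True)
    print(data.sample(5))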


This dataset serves as a benchmark for machines to automatically recognize handwritten Assamese digits (numerals) by extracting useful features from their structure. The Assamese language comprises a total of 10 digits, from 0 to 9. We collected a total of 516 handwritten digits from 52 native Assamese people irrespective of their age (12-86 years), gender, educational background, etc. The digits were captured in .jpeg format using a paint mobile application developed by us, which automatically saves the images to the internal storage of the mobile device.


An accurate and reliable image-based quantification system for blueberries may be useful for the automation of harvest management. It may also serve as the basis for controlling robotic harvesting systems. Quantifying blueberries from images is a challenging task due to occlusions, differences in size, illumination conditions, and the variable number of blueberries that can be present in an image. This paper proposes per-image and per-batch quantification of blueberries in the wild, using high-definition images captured with a mobile device.


In order to increase the diversity of signal datasets, we created a new dataset called HisarMod, which includes 26 modulation classes from 5 different modulation families, passing through 5 different wireless communication channels. During the generation of the dataset, MATLAB 2017a was employed to create the random bit sequences, symbols, and wireless fading channels.
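The dataset itself was generated in MATLAB, but as a rough conceptual sketch of the same pipeline (random bits, then symbols, then a fading channel), a NumPy version might look as follows. The QPSK mapping, flat Rayleigh fading, and 10 dB SNR here are illustrative assumptions, not the dataset's actual configuration:

    # Conceptual sketch: random bits -> modulation symbols -> fading channel.
    # QPSK and flat Rayleigh fading are illustrative choices only.
    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=2048)              # random bit sequence

    # Map bit pairs to QPSK symbols (Gray mapping, unit average energy).
    in_phase = 1 - 2 * bits[0::2]
    quadrature = 1 - 2 * bits[1::2]
    symbols = (in_phase + 1j * quadrature) / np.sqrt(2)

    # Flat Rayleigh fading plus additive white Gaussian noise (assumed 10 dB SNR).
    n = symbols.size
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise_std = np.sqrt(10 ** (-10 / 10) / 2)
    noise = noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    received = h * symbols + noise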

 

Instructions: 

Documentation will be available soon.


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To achieve this goal, we introduced a large-scale (~1.72M frames) traffic sign detection video dataset (CURE-TSD), which is among the most comprehensive datasets with controlled synthetic challenging conditions. The video sequences include both real and unreal (synthetic) data, as reflected in the naming convention below.

Instructions: 

The name format of the video files is as follows: “sequenceType_sequenceNumber_challengeSourceType_challengeType_challengeLevel.mp4” (a parsing sketch follows the list below).

  • sequenceType: 01 – Real data 02 – Unreal data

  • sequenceNumber: A number between [01 – 49]

  • challengeSourceType: 00 – No challenge source (which means no challenge) 01 – After effect

  • challengeType: 00 – No challenge 01 – Decolorization 02 – Lens blur 03 – Codec error 04 – Darkening 05 – Dirty lens 06 – Exposure 07 – Gaussian blur 08 – Noise 09 – Rain 10 – Shadow 11 – Snow 12 – Haze

  • challengeLevel: A number between [01-05], where 01 is the least severe and 05 is the most severe challenge.
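As a small, hypothetical helper (not part of the dataset's own tooling), the five fields can be recovered from a video filename like so:

    # Hypothetical helper: split a CURE-TSD video filename into its fields.
    import os

    def parse_video_name(filename):
        stem = os.path.splitext(os.path.basename(filename))[0]
        seq_type, seq_num, source, challenge, level = stem.split("_")
        return {
            "sequenceType": seq_type,        # 01 real, 02 unreal
            "sequenceNumber": seq_num,       # 01-49
            "challengeSourceType": source,   # 00 none, 01 after effect
            "challengeType": challenge,      # 00-12, see the list above
            "challengeLevel": level,         # 01 (least) to 05 (most severe)
        }

    print(parse_video_name("01_04_01_09_03.mp4"))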

Test Sequences

We split the video sequences into a 70% training set and a 30% test set. The sequence numbers corresponding to the test set are given below:

[01_04_x_x_x, 01_05_x_x_x, 01_06_x_x_x, 01_07_x_x_x, 01_08_x_x_x, 01_18_x_x_x, 01_19_x_x_x, 01_21_x_x_x, 01_24_x_x_x, 01_26_x_x_x, 01_31_x_x_x, 01_38_x_x_x, 01_39_x_x_x, 01_41_x_x_x, 01_47_x_x_x, 02_02_x_x_x, 02_04_x_x_x, 02_06_x_x_x, 02_09_x_x_x, 02_12_x_x_x, 02_13_x_x_x, 02_16_x_x_x, 02_17_x_x_x, 02_18_x_x_x, 02_20_x_x_x, 02_22_x_x_x, 02_28_x_x_x, 02_31_x_x_x, 02_32_x_x_x, 02_36_x_x_x]

The videos with all other sequence numbers are in the training set. Note that “x” above refers to the variations listed earlier.
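As a hypothetical membership check built from the list above (the real/unreal prefix together with the sequence number determines the split):

    # Hypothetical check: does a video belong to the 30% test split?
    TEST_SEQUENCES = {
        ("01", n) for n in ["04", "05", "06", "07", "08", "18", "19", "21",
                            "24", "26", "31", "38", "39", "41", "47"]
    } | {
        ("02", n) for n in ["02", "04", "06", "09", "12", "13", "16", "17",
                            "18", "20", "22", "28", "31", "32", "36"]
    }

    def is_test_video(filename):
        seq_type, seq_num = filename.split("_")[:2]
        return (seq_type, seq_num) in TEST_SEQUENCES

    print(is_test_video("01_04_00_00_00.mp4"))  # True: in the test list
    print(is_test_video("01_01_00_00_00.mp4"))  # False: training set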

The name format of the annotation files is as follows: “sequenceType_sequenceNumber.txt”

Challenge source type, challenge type, and challenge level do not affect the annotations. Therefore, video sequences that start with the same sequence type and sequence number have the same annotations.

  • sequenceType: 01 – Real data 02 – Unreal data

  • sequenceNumber: A number between [01 – 49]

The format of each line in the annotation file (txt) should be: “frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury”. You can see a visual coordinate system example on our GitHub page (a parsing sketch also follows the list below).

  • frameNumber: A number between [001-300]

  • signType: 01 – speed_limit 02 – goods_vehicles 03 – no_overtaking 04 – no_stopping 05 – no_parking 06 – stop 07 – bicycle 08 – hump 09 – no_left 10 – no_right 11 – priority_to 12 – no_entry 13 – yield 14 – parking
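A hypothetical parser for one annotation line, reading ll/lr/ul/ur as the lower-left, lower-right, upper-left, and upper-right corners of the sign's bounding quadrilateral (an assumption on our part; the GitHub page shows the exact coordinate system):

    # Hypothetical parser for one annotation line:
    # frameNumber_signType_llx_lly_lrx_lry_ulx_uly_urx_ury
    def parse_annotation_line(line):
        fields = line.strip().split("_")
        frame_number, sign_type = fields[0], fields[1]
        x = [int(v) for v in fields[2:10]]
        corners = {
            "lower_left":  (x[0], x[1]),   # assumed corner naming
            "lower_right": (x[2], x[3]),
            "upper_left":  (x[4], x[5]),
            "upper_right": (x[6], x[7]),
        }
        return frame_number, sign_type, corners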


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed.

Instructions: 

The name format of the provided images is as follows: "sequenceType_signType_challengeType_challengeLevel_Index.bmp" (a decoding sketch follows the list below).

  • sequenceType: 01 - Real data 02 - Unreal data

  • signType: 01 - speed_limit 02 - goods_vehicles 03 - no_overtaking 04 - no_stopping 05 - no_parking 06 - stop 07 - bicycle 08 - hump 09 - no_left 10 - no_right 11 - priority_to 12 - no_entry 13 - yield 14 - parking

  • challengeType: 00 - No challenge 01 - Decolorization 02 - Lens blur 03 - Codec error 04 - Darkening 05 - Dirty lens 06 - Exposure 07 - Gaussian blur 08 - Noise 09 - Rain 10 - Shadow 11 - Snow 12 - Haze

  • challengeLevel: A number between [01-05] where 01 is the least severe and 05 is the most severe challenge.

  • Index: A number indicating different instances of traffic signs under the same conditions.
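As a hypothetical decoder (not part of the dataset's tooling) that maps the numeric codes in an image filename to readable labels using the lists above:

    # Hypothetical decoder for an image filename such as "01_06_00_00_0001.bmp".
    SIGN_TYPES = {
        "01": "speed_limit", "02": "goods_vehicles", "03": "no_overtaking",
        "04": "no_stopping", "05": "no_parking", "06": "stop", "07": "bicycle",
        "08": "hump", "09": "no_left", "10": "no_right", "11": "priority_to",
        "12": "no_entry", "13": "yield", "14": "parking",
    }

    def decode_image_name(filename):
        stem = filename.rsplit(".", 1)[0]
        seq_type, sign, challenge, level, index = stem.split("_")
        return {
            "real": seq_type == "01",       # 01 real, 02 unreal
            "sign": SIGN_TYPES[sign],
            "challengeType": challenge,     # 00-12, see the list above
            "challengeLevel": level,        # 01-05
            "index": index,
        }

    print(decode_image_name("01_06_00_00_0001.bmp"))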


This dataset includes all letters of the Turkish alphabet, in two parts. In the first part, the dataset is categorized by letter; in the second part, by font. Both parts of the dataset include the features mentioned below.

  • 72-, 20-, and 8-point letters
  • Upper and lower cases

All characters of the Turkish alphabet are included (a, b, c, ç, d, e, f, g, ğ, h, ı, i, j, k, l, m, n, o, ö, p, r, s, ş, t, u, ü, v, y, z).

