This is the data for the paper "Environmental Context Prediction for Lower Limb Prostheses with Uncertainty Quantification," published in IEEE Transactions on Automation Science and Engineering, 2020. DOI: 10.1109/TASE.2020.2993399. For more details, please refer to https://research.ece.ncsu.edu/aros/paper-tase2020-lowerlimb.

Instructions: 

Seven able-bodied subjects and one transtibial amputee participated in this study. Subject_001 to Subject_007 are able-bodied participants and Subject_008 is a transtibial amputee.

 

Each folder in a subject_xxx.zip file contains one continuous session of data with the following items:

1. folder named "rpi_frames": the frames collected from the lower-limb camera. Frame rate: 10 frames per second.

2. folder named "tobii_frames": the frames collected from the on-glasses camera. Frame rate: 10 frames per second. 

3. labels_fps10.mat: synchronized terrain labels, gaze from the eye-tracking glasses, GPS coordinates, and IMU signals. 

3.1 cam_time: the timestamps for the videos, GPS, gaze data, and labeled terrains (unit: second). 10 Hz.

3.2 imu_time: the timestamps for the IMU sensors (unit: second). 40 Hz.

3.3 GPS: the GPS coordinates (latitude, longitude).

3.4 rpi_FrameIds, tobii_FrameIds: the frame IDs for the lower-limb and on-glasses cameras, respectively. The IDs correspond to the filenames in "rpi_frames" and "tobii_frames", respectively.

3.5 rpi_IMUs, tobii_IMUs: the IMU signals from the two devices. Columns: (accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z).

3.6 terrains: the type of terrain the subject is currently on. Six terrains: tile, brick, grass, cement, upstairs, downstairs. "undefined" and "unlabelled" mark the same kind of data and should both be discarded.
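For orientation, a minimal Python sketch for loading one session's labels with SciPy (assuming labels_fps10.mat is a standard, pre-v7.3 MATLAB file; the session path below is hypothetical):

```python
# Minimal sketch: load one session's labels_fps10.mat.
# Assumes a standard (pre-v7.3) MATLAB file readable by SciPy;
# the session path "subject_001/01" is hypothetical.
from scipy.io import loadmat

labels = loadmat("subject_001/01/labels_fps10.mat", squeeze_me=True)

cam_time  = labels["cam_time"]        # 10 Hz timestamps (s) for frames, GPS, gaze, terrain
imu_time  = labels["imu_time"]        # 40 Hz timestamps (s) for the IMU streams
gps       = labels["GPS"]             # (latitude, longitude)
rpi_ids   = labels["rpi_FrameIds"]    # filenames in "rpi_frames"
tobii_ids = labels["tobii_FrameIds"]  # filenames in "tobii_frames"
rpi_imu   = labels["rpi_IMUs"]        # columns: accel_x, accel_y, accel_z, gyro_x, gyro_y, gyro_z
terrains  = labels["terrains"]        # per-frame terrain labels

# Drop the frames labeled "undefined" or "unlabelled"
keep = [t not in ("undefined", "unlabelled") for t in terrains]
```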

 

The following sessions were collected during busy hours (many pedestrians were around):

'subject_005/01',

'subject_005/02',

'subject_006/01',

'subject_006/02',

'subject_007/01',

'subject_007/02'

The following sessions were collected during non-busy hours (few pedestrians were around):

'subject_005/03',

'subject_005/04',

'subject_006/03',

'subject_006/04',

'subject_007/03',

'subject_007/04',

'subject_008/01',

'subject_008/02'

The remaining sessions were collected without regard to specific hours (busy or non-busy).

For the following sessions, the data-collection devices were not optimally configured (e.g., suboptimal brightness balance). Thus, we recommend using these sessions for training or validation but not as testing data (all of these splits are encoded in the sketch after the list below).

'subject_001/02'

'subject_003/01'

'subject_003/02'

'subject_003/03'

'subject_004/01'

'subject_004/02'
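One convenient way to carry these splits into an experiment is to encode them as plain lists; a minimal sketch (adapt to your own pipeline):

```python
# Sketch: the session splits described above, encoded as plain lists
# (subject_xxx/session paths as used in this dataset).
BUSY_SESSIONS = [
    "subject_005/01", "subject_005/02",
    "subject_006/01", "subject_006/02",
    "subject_007/01", "subject_007/02",
]
NON_BUSY_SESSIONS = [
    "subject_005/03", "subject_005/04",
    "subject_006/03", "subject_006/04",
    "subject_007/03", "subject_007/04",
    "subject_008/01", "subject_008/02",
]
# Collected with non-optimized device settings: recommended for
# training/validation only, not for testing.
TRAIN_VAL_ONLY = [
    "subject_001/02",
    "subject_003/01", "subject_003/02", "subject_003/03",
    "subject_004/01", "subject_004/02",
]
```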


A composite dataset of eight videos (covering the pronunciation of seventeen words, with intervals; sagittal plane; grayscale), for experiments in computer vision, video processing, and investigation of vocal-tract articulation.

Instructions: 

In this dataset:
- There is no audio.
- Sagittal images.
- Grayscale.


The Nextmed project is a software platform for the segmentation and visualization of medical images. It consists of a series of automatic segmentation algorithms for different anatomical structures and a platform for visualizing the results as 3D models.

This dataset contains the .obj and .nrrd files that correspond to the results of applying our automatic lung segmentation algorithm to the LIDC-IDRI dataset.

This dataset relates to 718 of the 1012 LIDC-IDRI scans.

Instructions: 

The file contains a folder for each result, with the .obj and .nrrd files generated by the Nextmed algorithms.
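For orientation, a minimal sketch for inspecting one result folder in Python, assuming the pynrrd and trimesh packages; the folder and file names below are hypothetical:

```python
# Sketch: inspect one result folder. Assumes the pynrrd and trimesh
# packages; the folder/file names below are hypothetical.
import nrrd      # pip install pynrrd
import trimesh   # pip install trimesh

# Volumetric segmentation result (.nrrd)
volume, header = nrrd.read("result_0001/lung.nrrd")   # hypothetical path
print(volume.shape)

# 3D surface model (.obj); trimesh.load may return a Scene for multi-part files
mesh = trimesh.load("result_0001/lung.obj")           # hypothetical path
print(mesh)
```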


Dataset for Telugu Handwritten Gunintam


 

The dataset was created as part of the joint efforts of two research groups from the University of Novi Sad, aimed at the development of vision-based systems for automatic identification of insect species (in particular hoverflies) based on characteristic venation patterns in images of the insects' wings. The set of wing images consists of high-resolution microscopic wing images of several hoverfly species. There is a total of 868 wing images of eleven selected hoverfly species from two different genera, Chrysotoxum and Melanostoma.

Instructions: 

 

## University of Novi Sad (UNS), Hoverflies classification dataset - ReadMe file

__________________________________________________________

Version 1.0

Published: December 2014

by:

## Dataset authors:

* Zorica Nedeljković    (zoricaned14 a_t gmail.com), A1

* Jelena Ačanski    (jelena.acanski a_t dbe.uns.ac.rs), A1

* Marko Panić    (mpanic a_t uns.ac.rs), A2

* Ante Vujić    (ante.vujic a_t dbe.uns.ac.rs), A1

* Branko Brkljač    (brkljacb a_t uns.ac.rs), A2, *corr. auth.

 

The dataset was created as part of the joint efforts of two research groups from the University of Novi Sad, aimed at the development of vision-based systems for automatic identification of insect species (in particular hoverflies) based on characteristic venation patterns in images of the insects' wings. At the time of the dataset's development, the authors' affiliations were:

 * A1: Department of Biology and Ecology, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 2, 21000 Novi Sad, Republic of Serbia

and

* A2: Department of Power, Electronic and Telecommunication Engineering, Faculty of Technical Sciences, University of Novi Sad, Trg Dositeja Obradovića 6, 21000 Novi Sad, Republic of Serbia

University of Novi Sad:   http://www.uns.ac.rs/index.php/en/

 

# Dataset description:

The set of wing images consists of high-resolution microscopic wing images of several hoverfly species. There is a total of 868 wing images of eleven selected hoverfly species from two different genera, Chrysotoxum and Melanostoma. 

The wings were collected from many different geographic locations in the Republic of Serbia over a relatively long period of more than two decades. Wing images were obtained from wing specimens mounted on glass microscope slides, using a microscope equipped with a digital camera with an image resolution of 2880 × 1550 pixels, and were originally stored in the TIFF image format.

Each wing specimen was uniquely numbered and associated with the taxonomic group it belongs to. The association of each wing with a particular species was based on the classification of the insect at the time it was collected and before the wings were detached. This classification was done after examination by a skilled expert.

In the next step, digital images were acquired by biologists under relatively uncontrolled conditions of nonuniform background illumination and variable scene configuration, and without camera calibration. In that sense, the originally obtained digital images were not particularly suitable for exact measurements. Other shortcomings of the samples in the initial image dataset were the result of variable wing specimen quality, damaged or badly mounted wings, artifacts, variable wing positions during image acquisition, and dust.

In order to overcome these limitations and make the images amenable to automatic discrimination of hoverfly species, they were first preprocessed. The preprocessing of each image consisted of rotation to a unified horizontal position, wing cropping, and subsequent scaling of the cropped wing image. Cropping eliminated unnecessary background containing artifacts, while the aspect-ratio-preserving image scaling overcame the problem of variable size among the wings of the same species. The scaling was performed after computing the average width and height of all cropped images, which were then interpolated to the same width of 1680 pixels using bicubic interpolation. This width was selected based on the prevailing image width among the wing images of different species.
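For illustration, a minimal sketch of the scaling step, assuming Pillow (the original tooling is not specified; the file name is hypothetical):

```python
# Sketch of the aspect-ratio-preserving scaling step: resize a cropped
# wing image to a width of 1680 pixels with bicubic interpolation.
# Uses Pillow here (the original tooling is not specified); the file
# name is hypothetical.
from PIL import Image

img = Image.open("W0034.tif")  # hypothetical cropped wing image
target_w = 1680
target_h = round(img.height * target_w / img.width)  # preserve aspect ratio
img.resize((target_w, target_h), Image.BICUBIC).save("W0034_scaled.tif")
```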

Wing images obtained in this way formed the final wing image dataset used for training the sliding-window detector, evaluating its performance, and subsequently discriminating hoverfly species using the trained landmark-point detector, as described in [1, 2].

* Besides images of the whole wings (in the folder "Wing images"), the provided "UNS_Hoverflies" dataset also contains small image patches (64×64 pixels) corresponding to 18 predetermined landmark points in each wing, which were systematically collected and organized inside the second root folder, named "Training - test set". Each patch among the "Patch_positives" was manually cropped from the preprocessed wing image (i.e., rotated, cropped, and scaled to the same predefined image width). However, the images of the whole wings stored in the folder "Wing images" are provided without the additional scaling step in the preprocessing procedure and correspond to wing images that were only rotated and cropped.

"Wing images" are organized in two subfolders named "disk_1" and "disk_2", which correspond to two DVD drives where they were initially stored. Each folder also comes with additional .xml file containing some metadata. In "Wing images", .xml files contain average spatial size of the images in the given folder, while in the "Training - test set", individual .xml files contain additional data about created image patches (in case of patches corresponding to landmark points, "Patch_positives", each .xml contains image intrinsic spatial coordinates of each landmark point, as well as additional data about the corresponding specimen - who created it, when and where it was gathered, taxonomy, etc. Landmark points have unique numeration from 1 to 18, also provided by figures in [1,2]. In case of "Patch_negatives", each subfolder named after wing identifier, e.g. "W0034_neg", contains 40 randomly selected image patches that correspond to any part of the preprocessed image excluding one of the 18 landmark points and their closest surrounding. Although image patches were generated for all species, only a subset of images corresponding to the species with the highest number of specimens was used in the original classification studies described in [1, 2]. However, in the present form "UNS_Hoverflies" dataset contains all initially processed wing images and image patches.

Besides the previously described data, which form the main part of the dataset, the repository also contains the original microscopic images of the insects' wings, stored without any additional processing after acquisition. These files are available in the second .zip archive, denoted by the suffix "unprocessed".

 

Directory structure:

UNS_Hoverflies_Dataset
├── Training - test set
│   ├── Patch_negatives
│   └── Patch_positives
└── Wing images
    ├── disk_1
    └── disk_2

 

UNS_Hoverflies_Dataset_unprocessed
└── Unprocessed wing images
    ├── disk_1
    └── disk_2

 

# How to cite:

We would be glad if you use this dataset. In that case, please consider citing our work as:

BibTex:

@article{UNShoverfliesDataset2019,
  author  = {Zorica Nedeljković and Jelena Ačanski and Marko Panić and Ante Vujić and Branko Brkljač},
  title   = {University of Novi Sad (UNS), Hoverflies classification dataset},
  journal = {{IEEE} DataPort},
  year    = {2019}
}

and/or any of the corresponding original publications:

## References:

[1] Branko Brkljač, Marko Panić, Dubravko Ćulibrk, Vladimir Crnojević, Jelena Ačanski, and Ante Vujić, “Automatic hoverfly species discrimination,” in Proceedings of the 1st International Conference on Pattern Recognition Applications and Methods, vol. 2, pp. 108–115, SciTePress, Vilamoura, 2012. https://dblp.org/db/conf/icpram/icpram2012-2.

[2] Vladimir Crnojević, Marko Panić, Branko Brkljač, Dubravko Ćulibrk, Jelena Ačanski, and Ante Vujić, “Image Processing Method for Automatic Discrimination of Hoverfly Species,” Mathematical Problems in Engineering, vol. 2014, Article ID 986271, 12 pages, 2014. https://doi.org/10.1155/2014/986271.

 

** This dataset is published on IEEE DataPort repository under CC BY-NC-SA 4.0 license by the authors (for more information please visit: https://creativecommons.org/licenses/by-nc-sa/4.0/).


Our Signing in the Wild dataset consists of various videos harvested from YouTube containing people signing in various sign languages, in diverse settings and environments, under complex signer and camera motion, and even in groups. This dataset is intended to be used for sign language detection.

 


Emergency managers of today grapple with post-hurricane damage assessment that is often labor-intensive, slow, costly, and error-prone. As an important first step towards addressing the challenge, this paper presents the development of benchmark datasets to enable the automatic detection of damaged buildings from post-hurricane remote sensing imagery taken from both airborne and satellite sensors. Our work has two major contributions: (1) we propose a scalable framework to create benchmark datasets of hurricane-damaged buildings

Instructions: 

Data can be used for object detection algorithms to properly annotate post-disaster buildings as either damaged or non-damaged, aiding disaster response. This dataset contains ESRI Shapefiles of bounding boxes of buildings labeled as either damaged or non-damaged. Those labeled as damaged also have four degrees of damage, from minor to catastrophic. Importantly, each bounding box is also indexed to one of the images in the NOAA post-Hurricane Harvey imagery dataset, allowing users to match the bounding boxes with the correct imagery for training the algorithm.

To make the NOAA imagery more manageable, the images were processed and tiled into smaller 2048×2048-pixel ones. To obtain the same images, please follow the steps below:

  1. Download the images from the NOAA page

  2. Tile the images using the tileTiff.py script (make sure the size is set to 2048 × 2048); a rough sketch of this step is shown after the list. All tiles will be in a subdirectory named “1”.

  3. This then creates the tiles that correspond to the images indexed in the shapefiles.
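For orientation only, a rough sketch of what the tiling step does (the dataset's tileTiff.py is the authoritative script; this is not it, and the input/output file names here are hypothetical):

```python
# Rough illustration of the tiling step only; the dataset's tileTiff.py
# is the authoritative script, and this is not it. File names and the
# tile naming scheme are hypothetical; edge tiles here are zero-padded.
from pathlib import Path
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # the NOAA source images are very large
SIZE = 2048

src = Image.open("noaa_image.tif")  # hypothetical input image
out = Path("1")                     # tiles land in a subdirectory named "1"
out.mkdir(exist_ok=True)
for y in range(0, src.height, SIZE):
    for x in range(0, src.width, SIZE):
        tile = src.crop((x, y, x + SIZE, y + SIZE))
        tile.save(out / f"tile_{y // SIZE}_{x // SIZE}.png")
```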

Important note: Not all bounding boxes in the shapefile will map to an image. One will have to filter out the bounding boxes that do not map to the images before feeding the data into any model.
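A minimal sketch of such filtering with GeoPandas; the attribute name that stores the image index ("image_id") and the tile file naming are assumptions to be checked against the actual shapefile schema:

```python
# Sketch: drop bounding boxes whose indexed image tile is missing.
# Assumes GeoPandas; the attribute name "image_id" and the tile file
# naming are hypothetical -- check the actual shapefile schema.
from pathlib import Path
import geopandas as gpd

boxes = gpd.read_file("buildings.shp")  # hypothetical shapefile name
tile_dir = Path("1")                    # tiles produced by tileTiff.py

has_tile = boxes["image_id"].apply(lambda i: (tile_dir / f"{i}.png").exists())
boxes = boxes[has_tile]
```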

