This dataset contains "Pristine" and "Distorted" videos recorded in different places. The distortions with which the videos were recorded are "Focus", "Exposure" and "Focus + Exposure", each at low (1), medium (2) and high (3) levels, forming a total of 10 conditions (including the Pristine videos). In addition, distorted videos were exported in three different qualities according to the H.264 compression settings used in the DIGIFORT software: High Quality (HQ, H.264 at 100%), Medium Quality (MQ, H.264 at 75%) and Low Quality (LQ).

Instructions: 

  

0. This dataset is intended for evaluating "Visual Quality Assessment" (VQA) and "Visual Object Tracking" (VOT) algorithms. It has 4476 videos with different distortions and their bounding box annotations ([x (x coordinate), y (y coordinate), w (width), h (height)]) for each frame. It also contains a MATLAB script that generates the video sequences for VOT algorithm evaluation.

 

1. Move the "generateSequences.m" file to the "surveillanceVideosDataset" Folder.

 

2. Open the script and modify the following parameters according to your needs:

 

%---------------------------------------------------------------%
% Sequence settings and image nomenclature                      %
%---------------------------------------------------------------%
imagesType = '.jpg';
imgFolder = 'img';
gtName = 'groundtruth.txt';
imgNomenclature = ['%04d' imagesType];
%---------------------------------------------------------------%

 

The configuration above will create a folder like this for each video:

 

0001SequenceExample (Folder)

- - img (Folder)

- - - - 0001.jpg (Image)

- - - - 0002.jpg (Image)

- - - - ....

- - - - ....

- - - - ....

- - - - 0451.jpg (Image)

- - groundtruth.txt (txt file: Bounding Box Annotations)

 

3. Press "Run" and wait until the sequences are built. The process can take a long time due to the number of videos. You will need 33 GB for the videos, 30 MB for the bounding box annotations and 230 GB for the sequences (.jpg format).

 

--------------------------------------------------------------------------------------------------------------------------------------------

 

 


The PRIME-FP20 dataset is established for development and evaluation of retinal vessel segmentation algorithms in ultra-widefield (UWF) fundus photography (FP). PRIME-FP20 provides 15 high-resolution UWF FP images acquired using the Optos 200Tx camera (Optos plc, Dunfermline, United Kingdom), the corresponding labeled binary vessel maps, and the corresponding binary masks for the valid data region for the images. For each UWF FP image, a concurrently captured UWF fluorescein angiography (FA) is also included. 

Instructions: 

UWF FP images, UWF FA images, labeled UWF FP vessel maps, and binary UWF FP validity masks are provided, where the file names indicate the correspondence among them.

 

Users of the dataset should cite the following paper:

L. Ding, A. E. Kuriyan, R. S. Ramchandran, C. C. Wykoff, and G. Sharma, "Weakly-supervised vessel detection in ultra-widefield fundus photography via iterative multi-modal registration and learning," IEEE Trans. Medical Imaging, accepted for publication, to appear.

 


Dataset associated with a paper in IEEE Transactions on Pattern Analysis and Machine Intelligence:

"The perils and pitfalls of block design for EEG classification experiments"

DOI: 10.1109/TPAMI.2020.2973153

If you use this code or data, please cite the above paper.

Instructions: 

See the paper "The perils and pitfalls of block design for EEG classification experiments" on IEEE Xplore.

DOI: 10.1109/TPAMI.2020.2973153

Code for analyzing the dataset is included in the online supplementary materials for the paper.

The code and the appendix from the online supplementary materials are also included here.



 

Instructions: 

The dataset is stored as a tarball (.tar.gz). Data can be extracted on most Linux systems using the command `tar -xzvf ASIs.tar.gz`. On macOS, Archive Utility can extract .tar.gz files by default. On Windows, third-party software such as 7-Zip can extract tarballs; alternatively, the Windows Subsystem for Linux can be used with the same `tar -xzvf ASIs.tar.gz` command.

 


[17-APR-2020: WE ARE STILL UPLOADING THE DATASET, PLEASE WAIT UNTIL IT IS COMPLETED] The dataset comprises 11 different actions performed by 17 subjects, created for multimodal fall detection. Five types of falls and six daily activities were considered in the experiment. Data were collected from five wearable sensors, one brainwave helmet sensor, six infrared sensors around the room and two RGB cameras. Three attempts per action were recorded. The dataset contains raw signals as well as three windowing-based feature sets.

Instructions: 

We will upload the instructions in the following days.


Cityscapes is a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames. The dataset is thus an order of magnitude larger than similar previous efforts. Details on the annotated classes and examples of the annotations are available at https://www.cityscapes-dataset.com/dataset-overview/#features.

 


Extracting the boundaries of photovoltaic (PV) plants is essential for aerial inspection and autonomous monitoring by aerial robots. This dataset provides a clear delineation of utility-scale PV plant boundaries for PV developers and Operation and Maintenance (O&M) service providers, for use in aerial photogrammetry, flight mapping, and path planning during autonomous monitoring of PV plants.


Detection results of CircleNet on the full test dataset of 1826 images.


The study of mouse social behaviours has been increasingly undertaken in neuroscience research. However, automated quantification of mouse behaviours from videos of interacting mice is still a challenging problem, in which object tracking plays a key role in locating mice in their living spaces. Artificial markers are often applied for tracking multiple mice, but they are intrusive and consequently interfere with the movements of the mice in a dynamic environment.


A custom-made multispectral camera was used to collect a novel dataset of images of untreated lettuce leaves or leaves treated with vinegar, oil, or a combination of these. The camera captured image data at 10 wavelengths in the 380 nm to 980 nm range, covering the visible and NIR (near-infrared) regions of the electromagnetic spectrum. Imaging was done in a lab environment in the presence of ambient light.


