Ten volunteers were trained through a series of twelve daily lessons to type on a computer using the Colemak keyboard layout. During the fourth, eighth, and eleventh sessions, electroencephalography (EEG) measurements were acquired for the five trials each subject performed in the corresponding lesson. Electrocardiography (ECG) data were acquired at each of those trials as well. The purpose of this experiment is to aid in the development of different methods to assess the process of learning a new task.


*Experimental setup

Ten volunteers were trained through a series of twelve daily lessons to type on a computer using the Colemak keyboard layout, an alternative to the QWERTY and Dvorak layouts designed for efficient and ergonomic touch typing in English. Six of our volunteers were female and four male; all of them were right-handed, and their mean age was 29.3 years with a standard deviation of 5.7 years. The lessons used during our experiment are available online at colemak.com/Typing_lessons. In our case, we asked the volunteers to repeat each of them five times (with resting intervals of 2 min in between). We chose Colemak touch typing as the ability to learn because most people are unaware of its existence, which makes it a good candidate for a truly new ability. The training always took place in a sound-proof cubicle in which the volunteers were isolated from distractions; they sat in front of the computer and were engaged entirely in the typing lesson. All the experiments were carried out at the same hour of the day, and all volunteers were asked to refrain from doing any additional training anywhere else. For more details, see [1].

*Data arrangement

A MATLAB-compatible file is provided for each subject. Each .mat file contains a cell array (named Cn) of size 15x10, corresponding to 15 trials and 10 channels, respectively. Trials are organized as follows: rows 1-5 correspond to the measurements during the fourth Colemak lesson, rows 6-10 during the eighth, and rows 11-15 during the eleventh. Channels are organized by columns in the following order: (1) ECG, (2) F3, (3) Fz, (4) F4, (5) C3, (6) Cz, (7) C4, (8) P3, (9) POz, and (10) P4. Each element of Cn is a vector containing the output (time samples acquired at a 256 Hz sampling frequency) of the corresponding channel. The length of these vectors differs between subjects, as well as between trials, depending on the time it took the corresponding subject to complete the Colemak lesson. The units of all output signals are microvolts.
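As an orientation aid, the trial/channel indexing convention above can be sketched with a small helper. This is a hypothetical Python function, not part of the dataset; indices here are 0-based, so add 1 when working in MATLAB:

```python
# Hypothetical helper for locating a trial/channel in the 15x10 Cn cell array.
LESSON_ROWS = {4: 0, 8: 5, 11: 10}  # first (0-based) row of each lesson block
CHANNELS = ["ECG", "F3", "Fz", "F4", "C3", "Cz", "C4", "P3", "POz", "P4"]

def cn_index(lesson, trial, channel):
    """Return the 0-based (row, col) of a given lesson (4, 8, or 11),
    trial (1-5), and channel name inside the Cn cell array."""
    if lesson not in LESSON_ROWS or not 1 <= trial <= 5:
        raise ValueError("lesson must be 4, 8, or 11 and trial must be 1-5")
    return LESSON_ROWS[lesson] + (trial - 1), CHANNELS.index(channel)
```

For example, `cn_index(8, 3, "Cz")` returns `(7, 5)`, i.e., row 8 and column 6 in MATLAB's 1-based indexing.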


All data have been preprocessed with the automatic decontamination algorithms provided by the B-Alert Live Software (BLS): raw signals are processed to eliminate known artifacts. In particular, the following actions are taken for different types of artifacts:

• Excursions and amplifier saturation – contaminated periods are replaced with zero values, starting and ending at zero crossing before and after each event.
• Spikes – spikes caused by artifacts are identified and the signal value is interpolated.
• Eye Blinks (EOG) – wavelet transforms deconstruct the signal and a regression equation is used to identify the EEG regions contaminated with eye blinks. Representative EEG preceding the eye blink is inserted in the contaminated region.

Additionally, all data were detrended using MATLAB's detrend command.
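MATLAB's default detrend removes the least-squares straight-line fit from a signal. A minimal, dependency-free Python equivalent (an illustrative sketch, not the code used to prepare the dataset) looks like this:

```python
def detrend(x):
    """Remove the best-fit straight line from x (least squares),
    mirroring MATLAB's default linear detrend behaviour."""
    n = len(x)
    mean_t = (n - 1) / 2          # mean of the sample indices 0..n-1
    mean_x = sum(x) / n
    # slope of the least-squares line through (t, x)
    num = sum((t - mean_t) * (xi - mean_x) for t, xi in enumerate(x))
    den = sum((t - mean_t) ** 2 for t in range(n))
    slope = num / den if den else 0.0
    # subtract the fitted line from the signal
    return [xi - (mean_x + slope * (t - mean_t)) for t, xi in enumerate(x)]
```

Applied to a purely linear signal, the result is identically zero; applied to any signal, the residual sums to zero.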

*How to acknowledge

We encourage researchers to use the published dataset freely and we ask that they cite the respective data sources as well as this paper:

[1] D. Gutiérrez and M. A. Ramírez-Moreno, “Assessing a Learning Process with Functional ANOVA Estimators of EEG Power Spectral Densities,” Cognitive Neurodynamics, vol. 10, no. 2, pp. 175-183, 2016. DOI: 10.1007/s11571-015-9368-7


All data were acquired in the Laboratory of Biomedical Signal Processing, Cinvestav Monterrey, in the context of M. A. Ramírez-Moreno's MSc thesis work under the advice of D. Gutiérrez.



Participants were 61 children with ADHD and 60 healthy controls (boys and girls, ages 7-12). The children with ADHD were diagnosed by an experienced psychiatrist according to DSM-IV criteria and had been taking Ritalin for up to 6 months. None of the children in the control group had a history of psychiatric disorders, epilepsy, or any report of high-risk behaviors.




Extract the Zip files. Load the ".mat" data into MATLAB.


If you want to import the electrode locations into EEGLAB, please use the attached ".ced" file.









This dataset provides the magneto-inertial signals from six MIMUs (2 Xsens, 2 APDM, 2 Shimmer) and the orientation of 8 reflective markers (VICON) at 3 different speeds (slow, medium, fast). Proprietary orientations from the MIMU vendors are also included. All data are synchronized at 100 Hz.


The dataset comprises up to two weeks of activity data taken from the ankle and foot of 14 people without amputation and 17 people with lower limb amputation.  Walking speed, cadence, and lengths of strides taken at and away from the home were considered in this study.  Data collection came from two wearable sensors, one inertial measurement unit (IMU) placed on the top of the prosthetic or non-dominant foot, and one accelerometer placed on the same ankle.  Location information was derived from GPS and labeled as ‘home’, ‘away’, or ‘unknown’.  The dataset contains raw acce


This dataset comprises 31 MATLAB .mat files. Each .mat file contains all sensor data for one individual participant. Files for participants with lower limb amputation (n = 17) are named 'S##.mat' and files for control participants (n = 14) are named 'C##.mat'.
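The naming convention can be decoded programmatically; for example, with a hypothetical Python helper (the function name is illustrative, not part of the dataset):

```python
import re

def participant_group(fname):
    """Classify a participant file name: 'S##.mat' means a participant
    with lower limb amputation, 'C##.mat' means a control participant."""
    m = re.fullmatch(r"([SC])(\d{2})\.mat", fname)
    if not m:
        raise ValueError(f"unexpected file name: {fname}")
    return "amputation" if m.group(1) == "S" else "control"
```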


This dataset includes high-resolution (1 s) power and reactive power profiles of household appliances. The dataset consists of ground-truth data from a European household, laboratory measurements, and a few artificially created profiles. Specifically, the dataset includes data for TV, washing machine, toaster, iron, hairdryer, dishwasher, PC, refrigerator, air-conditioner unit, range, dryer, heat pump (different modes of operation), BEV, water heater, light bulb, and always-on load profiles.


Instructions for the use of the dataset are given in the attached documentation.


Time Scale Modification (TSM) is a well-researched field; however, no effective objective measure of quality exists.  This paper details the creation, subjective evaluation, and analysis of a dataset for use in the development of an objective measure of quality for TSM. Comprised of two parts, the training component contains 88 source files processed using six TSM methods at 10 time scales, while the testing component contains 20 source files processed using three additional methods at four time scales.


When using this dataset, please use the following citation:

@article{roberts2020tsm,
  author  = {Roberts, Timothy and Paliwal, Kuldip K.},
  title   = {A time-scale modification dataset with subjective quality labels},
  journal = {The Journal of the Acoustical Society of America},
  volume  = {148},
  number  = {1},
  pages   = {201-210},
  year    = {2020},
  doi     = {10.1121/10.0001567},
  url     = {https://doi.org/10.1121/10.0001567},
  eprint  = {https://doi.org/10.1121/10.0001567}
}


Audio files are named using the following structure: SourceName_TSMmethod_TSMratio_per.wav, and are split into multiple zip files. For 'TSMmethod', PV is the Phase Vocoder algorithm, PV_IPL is the Identity Phase Locking Phase Vocoder algorithm, WSOLA is the Waveform Similarity Overlap-Add algorithm, FESOLA is the Fuzzy Epoch Synchronous Overlap-Add algorithm, HPTSM is the Harmonic-Percussive Separation Time-Scale Modification algorithm, and uTVS is the Mel-Scale Sub-Band Modelling Filterbank algorithm. Elastique is the z-Plane Elastique algorithm, NMF is the Non-Negative Matrix Factorization algorithm, and FuzzyPV is the Phase Vocoder algorithm using Fuzzy Classification of Spectral Bins. TSM ratios range from 33% to 192% for training files, 20% to 200% for testing files, and 22% to 220% for evaluation files.
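A sketch of how such a file name might be split into its parts (hypothetical Python; it parses right-to-left because method names such as PV_IPL contain underscores, and it assumes the source name itself does not):

```python
def parse_tsm_name(fname):
    """Split 'SourceName_TSMmethod_TSMratio_per.wav' into
    (source, method, ratio_percent)."""
    stem = fname[:-len(".wav")]
    parts = stem.split("_")
    if parts[-1] != "per":
        raise ValueError(f"unexpected file name: {fname}")
    # last token is 'per', second-to-last is the ratio, first is the source;
    # everything in between is the (possibly underscored) method name
    source, method, ratio = parts[0], "_".join(parts[1:-2]), parts[-2]
    return source, method, int(ratio)
```

For instance, a name like `Jazz_PV_IPL_84_per.wav` would yield `("Jazz", "PV_IPL", 84)`.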

  • Train: Contains 5280 processed files for training neural networks
  • Test: Contains 240 processed files for testing neural networks
  • Ref_Train: Contains the 88 reference files for the processed training files
  • Ref_Test: Contains the 20 reference files for the processed testing files
  • Eval: Contains 6000 processed files for evaluating TSM methods.  The 20 reference test files were processed at 20 time-scales using the following methods:
    • Phase Vocoder (PV)
    • Identity Phase-Locking Phase Vocoder (IPL)
    • Scaled Phase-Locking Phase Vocoder (SPL)
    • Phavorit IPL and SPL
    • Phase Vocoder with Fuzzy Classification of Spectral Bins (FuzzyPV)
    • Waveform Similarity Overlap-Add (WSOLA)
    • Epoch Synchronous Overlap-Add (ESOLA)
    • Fuzzy Epoch Synchronous Overlap-Add (FESOLA)
    • Driedger's Identity Phase-Locking Phase Vocoder (DrIPL)
    • Harmonic Percussive Separation Time-Scale Modification (HPTSM)
    • uTVS used in Subjective testing (uTVS_Subj)
    • updated uTVS (uTVS)
    • Non-Negative Matrix Factorization Time-Scale Modification (NMFTSM)
    • Elastique.


TSM_MOS_Scores.mat is a version 7 MATLAB save file and contains a struct called data that has the following fields:

  • test_loc: Legacy folder location of the test file.
  • test_name: Name of the test file.
  • ref_loc: Legacy folder location of reference file.
  • ref_name: Name of the reference file.
  • method: The method used for processing the file.
  • TSM: The time-scale ratio (in percent) used for processing the file. 100(%) is unity processing. 50(%) is half speed, 200(%) is double speed.
  • MeanOS: Normalized Mean Opinion Score.
  • MedianOS: Normalized Median Opinion Score.
  • std: Standard Deviation of MeanOS.
  • MeanOS_RAW: Mean Opinion Score before normalization.
  • MedianOS_RAW: Median Opinion Scores before normalization.
  • std_RAW: Standard Deviation of MeanOS before normalization.


TSM_MOS_Scores.csv is a CSV file containing the same fields as columns.
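The CSV variant can be consumed with Python's standard library alone. The two rows below are invented placeholders to illustrate the columns; the real values live in TSM_MOS_Scores.csv:

```python
import csv
import io

# Hypothetical sample mimicking a subset of the columns described above.
sample = """test_name,method,TSM,MeanOS
file_a.wav,WSOLA,50,3.71
file_b.wav,PV,192,2.44
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# TSM below 100% corresponds to slower-than-unity renderings
slow = [r for r in rows if int(r["TSM"]) < 100]
```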

Source code and method implementations are available at www.github.com/zygurt/TSM

Please note: labels for the files will be uploaded after paper publication.


Low-light scenes often come with acquisition noise, which not only disturbs viewers but also makes video compression harder. These types of videos are often encountered in cinema as a result of artistic perspective or the nature of a scene. Other examples include shots of wildlife (e.g., mobula rays at night in Blue Planet II), concerts and shows, surveillance camera footage, and more. Inspired by all of the above, we propose a challenge on encoding low-light captured videos.


This dataset was developed at the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology as part of the ongoing activities at the Center for Energy and Geo-Processing (CeGP) at Georgia Tech and KFUPM. LANDMASS stands for “LArge North-Sea Dataset of Migrated Aggregated Seismic Structures”. This dataset was extracted from the North Sea F3 block under the Creative Commons license (CC BY-SA 3.0).


The LANDMASS database includes two different datasets. The first, denoted LANDMASS-1, contains 17667 small “patches” of size 99x99 pixels: 9385 Horizon patches, 5140 Chaotic patches, 1251 Fault patches, and 1891 Salt Dome patches. The images in this database have values in the range [-1,1]. The second dataset, denoted LANDMASS-2, contains 4000 images. Each image is of size 150x300 pixels and normalized to values in the range [0,1]. Each of the four classes has 1000 images. Sample images from each database for each class can be found under the /samples folder.


Data from accelerometers mounted on ICE chassis.


Infrared imaging from aerial platforms can be used to detect landmines and minefields remotely, potentially saving many lives. This dataset contains thermal images of buried and surface-laid landmines. The images were recorded from a fixed camera for 24 hours at 15-minute intervals. DM-11 anti-personnel landmines were used. This dataset is available for landmine detection research.


Instructions are given in the attached pdf file.