This dataset contains the measurements corresponding to the article "Validation of a Velostat-Based Pressure Sensitive Mat for Center of Pressure Measurements". It includes data from an affordable commercial mat, a Velostat-based mat prototype, and a commercial force platform, with which the results of the article can be reproduced.

Instructions: 

In the dataset, every user has a folder. Each user's folder contains six subfolders, named after the balance exercises, and each subfolder contains several files:

• Force platform (pasco.txt): the file has two headers; the second header holds the column names, including 'Yc (cm)' and 'Xc (cm)'.
• Commercial mat (.json): a dictionary with the sequential 16x16 arrays. Data can be accessed by means of the key sequence 'pressureData', n, 'pressureData', i, j for n = 0, 1, … and i, j in [0, 15].
• Prototype, post-processed (.npy): a numpy array of shape (time, 16, 16).
• Prototype, raw (matVelo_file.txt): the sequence of 16x16 arrays as comma-separated values (256 numbers per row).

If the code folder and the dataset folder are at the same level and the requirements have been installed, the code can be executed to show the summary table. A minimal reading sketch is given below.
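For reference, here is a minimal Python sketch of reading each file type; the folder name and the .json/.npy file names are hypothetical, and the tab separator of pasco.txt and the string-keyed JSON layout are assumptions.

import json
import numpy as np
import pandas as pd

base = "user01/exercise1"  # hypothetical user/exercise folder

# Force platform: two headers; the second one holds the column names.
platform = pd.read_csv(f"{base}/pasco.txt", sep="\t", header=1)
cop_x, cop_y = platform["Xc (cm)"], platform["Yc (cm)"]

# Commercial mat: key sequence 'pressureData', n, 'pressureData', i, j
# (assumed here to be string keys in the JSON dictionary).
with open(f"{base}/mat.json") as f:
    mat = json.load(f)
frame0 = [[mat["pressureData"]["0"]["pressureData"][str(i)][str(j)]
           for j in range(16)] for i in range(16)]

# Prototype, post-processed: numpy array of shape (time, 16, 16).
proto = np.load(f"{base}/proto.npy")

# Prototype, raw: 256 comma-separated values per row, one 16x16 frame each.
raw_frames = np.loadtxt(f"{base}/matVelo_file.txt", delimiter=",").reshape(-1, 16, 16)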


Ten volunteers were trained through a series of twelve daily lessons to type on a computer using the Colemak keyboard layout. During the fourth, eighth, and eleventh sessions, electroencephalography (EEG) measurements were acquired for the five trials each subject performed in the corresponding lesson. Electrocardiography (ECG) data were acquired at each of those trials as well. The purpose of this experiment is to aid in the development of different methods to assess the process of learning a new task.

Instructions: 

*Experimental setup

Ten volunteers were trained through a series of twelve daily lessons to type on a computer using the Colemak keyboard layout, an alternative to the QWERTY and Dvorak layouts designed for efficient and ergonomic touch typing in English. Six of our volunteers were female and four male, all of them were right-handed, and their mean age was 29.3 years with a standard deviation of 5.7 years. The lessons used during our experiment are available on-line at colemak.com/Typing_lessons. In our case, we asked the volunteers to repeat each of them five times (with resting intervals of 2 min in between). We chose Colemak touch typing as the ability to learn because most people are unaware of its existence, which makes it a good candidate for a truly new ability to learn. The training process always took place in a sound-proof cubicle in which the volunteers were isolated from distractions; they sat in front of the computer and were engaged entirely in the typing lesson. All the experiments were carried out at the same hour of the day, and all volunteers were asked to refrain from doing any additional training anywhere else. For more details, see [1].

*Data arrangement

A Matlab-compatible file is provided for each subject. Each .mat file contains a cell array (named Cn) of size 15x10, which corresponds to the 15 trials and 10 channels, respectively. Trials are organized as follows: rows 1-5 correspond to the measurements during the fourth Colemak lesson, rows 6-10 during the eighth, and rows 11-15 during the eleventh. Channels are organized by columns in the following order: (1) ECG, (2) F3, (3) Fz, (4) F4, (5) C3, (6) Cz, (7) C4, (8) P3, (9) POz, and (10) P4. Each element of Cn corresponds to a vector containing the output (time samples acquired at a 256 Hz sampling frequency) of one of those channels. The length of those vectors differs between subjects, as well as between trials, depending on the time it took the corresponding subject to complete the Colemak lesson. The units of all output signals are microvolts.
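For those working outside MATLAB, a minimal sketch of accessing Cn with scipy is given below; the file name subject01.mat is hypothetical, while the 15x10 layout and channel order follow the description above.

import scipy.io

data = scipy.io.loadmat("subject01.mat")
Cn = data["Cn"]                     # 15 trials x 10 channels cell array

channels = ["ECG", "F3", "Fz", "F4", "C3",
            "Cz", "C4", "P3", "POz", "P4"]
fs = 256                            # sampling frequency in Hz

ecg_trial1 = Cn[0, 0].squeeze()     # first trial (fourth lesson), ECG channel
duration_s = len(ecg_trial1) / fs   # trial lengths differ between subjects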

*Preprocessing

All data have been preprocessed with the automatic decontamination algorithms provided by the B-Alert Live Software (BLS): raw signals are processed to eliminate known artifacts. In particular, the following actions are taken for different types of artifacts:

• Excursions and amplifier saturation – contaminated periods are replaced with zero values, starting and ending at zero crossings before and after each event.
• Spikes – spikes caused by artifacts are identified and the signal value is interpolated.
• Eye blinks (EOG) – wavelet transforms deconstruct the signal and a regression equation is used to identify the EEG regions contaminated with eye blinks. Representative EEG preceding the eye blink is inserted in the contaminated region.

Additionally, all data were detrended using Matlab's detrend command.

*How to acknowledge

We encourage researchers to use the published dataset freely and we ask that they cite the respective data sources as well as this paper:

[1] D. Gutiérrez and M. A. Ramírez-Moreno, “Assessing a Learning Process with Functional ANOVA Estimators of EEG Power Spectral Densities,” Cognitive Neurodynamics, vol. 10, no. 2, pp. 175-183, 2016. DOI: 10.1007/s11571-015-9368-7

*Credits

All data were acquired in the Laboratory of Biomedical Signal Processing, Cinvestav Monterrey, in the context of M. A. Ramírez-Moreno's MSc thesis work under the advice of D. Gutiérrez.


We introduce a new database of voice recordings with the goal of supporting research on vulnerabilities and protection of voice-controlled systems (VCSs). In contrast to prior efforts, the proposed database contains both genuine voice commands and replayed recordings of such commands, collected in realistic VCSs usage scenarios and using modern voice assistant development kits.

Instructions: 

The corpus consists of three sets: the core, evaluation, and complete set. The complete set contains all the data (i.e., complete set = core set + evaluation set) and allows the user to split the training/test sets freely, while the core/evaluation sets suggest a default training/test split. For each set, all *.wav files are in the /data directory and the meta information is in the meta.csv file. The protocol is described in readme.txt. A PyTorch data loader script is provided as an example of how to use the data, and a Python resampling script is provided for resampling the dataset to a desired sample rate.
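As an illustration only (the corpus ships its own PyTorch data loader script), a minimal loader sketch is given below; the meta.csv column names "filename" and "label" are assumptions, not the documented protocol.

import csv
from pathlib import Path

import torchaudio
from torch.utils.data import Dataset

class ReplayCorpus(Dataset):
    def __init__(self, root):
        self.root = Path(root)
        with open(self.root / "meta.csv") as f:
            self.rows = list(csv.DictReader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        wav, sr = torchaudio.load(str(self.root / "data" / row["filename"]))
        return wav, sr, row["label"]   # genuine vs. replayed command

train_set = ReplayCorpus("core")       # core set = suggested training split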


This dataset contains EEG recordings from epileptic rats. Genetic absence epilepsy rats from Strasbourg (GAERS) are one of the best-established rodent models of generalized epilepsy. The rats show seizures with characteristic "spike-and-wave discharge" EEG patterns. Experiments were performed in accordance with the German law on animal protection and were approved by the Animal Care and Ethics Committee of the University of Kiel.

Instructions: 
  • Sampling frequency: 1600 Hz
  • Day1 (18:23:57-16:35:56): Three animals (R1, R2, R3): Array (data points x channels (3))
  • Day2 (16:42:53-16:52:06): Three animals (R1, R2, R3): Array (data points x channels (3))
  • Day3 (17:32:19-10:25:19): Three animals (R1, R2, R3): Array (data points x channels (3))
  • Day4 (10:26:40-14:46:13): Two animals (R1, R3): Array (data points x channels (3))
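A minimal sketch of handling one day's array is given below, assuming it is already loaded into numpy with the documented shape (data points x 3 channels); a random placeholder stands in for the actual loading step, which depends on the on-disk format.

import numpy as np

fs = 1600                               # sampling frequency in Hz
day1 = np.random.randn(fs * 60, 3)      # placeholder: 60 s of 3-channel data

t = np.arange(day1.shape[0]) / fs       # time axis in seconds
r1, r2, r3 = day1.T                     # one channel per animal (R1, R2, R3)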

This dataset accompanies the manuscript “Lossless Compression of Plenoptic Camera Sensor Images and of Light Field View Arrays” by Ioan Tabus and Emanuele Palma, submitted to IEEE Access in June 2020. It contains the archives and the programs for reconstructing the light field datasets publicly used in two major challenges for light field compression.

Instructions: 

We propose a codec for lossless compression of plenoptic camera sensor images and then embed it into a full light field array codec, which encodes the input sensor data and makes use of specific plenoptic camera meta-information to create lossless archives of light field view arrays.

The sensor image codec takes the input lenslet image and splits it into rectangular patches, each patch corresponding to a microlens image. The codec exploits the correlation between neighboring patches using a patch-by-patch prediction mechanism, where each pixel of a patch has its own sparse predictor, designed to utilize only the relevant pixels from its neighbor patch. An intra-patch prediction mask is additionally utilized for the sparse predictor design. The patches are labeled into M classes, according to several possible mechanisms, and one sparse design is performed for each pair (class label, patch pixel). A relevant context selection mirrors the selection of relevant pixels, providing the arithmetic coder with skewed coding distributions at each context.

Finally, we embed the proposed sensor image codec into a codec for the light field array of views. This is a generative mechanism that starts by encoding the sensor image, or a devignetted and debayered version of it, then includes additional meta-information from the plenoptic camera, and finally creates a lossless archive of the light field array of views. We exemplify the performance on two databases that have been extensively used in the light field lossless compression literature, showing superior results in both cases.
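To make the patch mechanism concrete, the sketch below tiles a placeholder lenslet image into microlens-sized patches and forms left-neighbor residuals; it uses a trivial copy predictor and is only an illustration, not the paper's sparse per-class, per-pixel predictor design.

import numpy as np

def split_into_patches(lenslet, ph, pw):
    # Crop to a multiple of the patch size, then tile into (rows, cols, ph, pw).
    H, W = lenslet.shape
    cropped = lenslet[:H - H % ph, :W - W % pw]
    return cropped.reshape(H // ph, ph, W // pw, pw).swapaxes(1, 2)

lenslet = np.random.randint(0, 1024, (480, 640))   # placeholder 10-bit sensor image
patches = split_into_patches(lenslet, 15, 15)       # microlens-sized tiles

# Residuals between each patch and its left neighbor; an entropy coder would
# then encode these residuals with context-dependent distributions.
residuals = patches[:, 1:] - patches[:, :-1]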


This is a dataset of Finite Difference Time Domain (FDTD) simulation results of 13 defective crystals and one non-defective crystal. There are 4 fields in the dataset: Real, Img, Int, and Attribute. Real holds the real part of the simulated result, Img the imaginary part, and Int the intensity, all in superimposed form. Attribute denotes the label of the simulated crystal: label 0 is the non-defective crystal, and the other 13 labels, crystal 1 to crystal 13, are assigned to the 13 defective crystals whose simulations are studied.

Instructions: 

Read the abstract.
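As a minimal illustration of the field layout described in the abstract, the sketch below assumes the data are shipped as a CSV table; the file name and the relation Int = Real**2 + Img**2 are assumptions, not documented properties.

import pandas as pd

df = pd.read_csv("fdtd_crystals.csv")     # hypothetical file name
pristine = df[df["Attribute"] == 0]       # label 0: non-defective crystal
defective = df[df["Attribute"] != 0]      # labels 1-13: defective crystals

# Sanity check of the assumed intensity relation (not guaranteed by the dataset).
max_dev = (df["Real"]**2 + df["Img"]**2 - df["Int"]).abs().max()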

Participants were 61 children with ADHD and 60 healthy controls (boys and girls, ages 7-12). The ADHD children were diagnosed by an experienced psychiatrist according to DSM-IV criteria and had taken Ritalin for up to 6 months. None of the children in the control group had a history of psychiatric disorders, epilepsy, or any report of high-risk behaviors.

Instructions: 

Extract the Zip files. Load the ".mat" data into MATLAB.

If you want to import the electrode location into EEGLAB, please use the attached ".ced" file.
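For work outside MATLAB, a minimal scipy-based sketch is given below; the extracted file name is hypothetical, since the archive contents are not listed here.

import scipy.io

recording = scipy.io.loadmat("subject01.mat")            # after extracting the Zip files
print([k for k in recording if not k.startswith("__")])  # list the stored variables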

The dataset was collected with the MyNeuroHealth application, developed for the detection of seizures and falls. Data were gathered using a tri-axial accelerometer placed on the upper left arm of an individual in an unconstrained environment.
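As a hedged illustration of a common first step for fall and seizure detection from tri-axial accelerometry, the sketch below computes the signal magnitude vector; the CSV layout with columns x, y, z and the threshold value are assumptions, not part of the dataset's documentation.

import numpy as np
import pandas as pd

acc = pd.read_csv("recording.csv")                        # hypothetical file name
smv = np.sqrt(acc["x"]**2 + acc["y"]**2 + acc["z"]**2)    # magnitude per sample
candidates = smv[smv > 2.5]                               # illustrative high-impact threshold (g)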


As one of the research directions at the OLIVES Lab @ Georgia Tech, we focus on recognizing textures and materials in real-world images, which plays an important role in object recognition and scene understanding. Aiming at describing objects or scenes with more detailed information, we explore how to computationally characterize apparent or latent properties (e.g., surface smoothness) of materials, i.e., computational material characterization, which moves a step beyond material recognition.

Instructions: 

Dataset Characteristics and Filename Formats

The "CoMMonS_FullResolution" folder includes 6912 full-resolution images (2560x1920). The "CoMMonS_Sampled" folder includes sampled images (resolution: 300x300), which are sampled from full-resolution images with different positions (x, y), rotation angles (r), zoom levels (z), a touching direction ("pile"), a lightness condition ("l5"), and a camera function setting ("ed3u"). This "CoMMonS_Sampled" folder is an example of a dataset subset for training and testing (e.g. 5: 1). Our dataset focuses on material characterization for one material (fabric) in terms of one of three properties (fiber length, smoothness, and toweling effect), facilitating a fine-grained texture classification. In this particular case, the dataset is used for a standard supervised problem of material quality evaluation. It takes fabric samples with human expert ratings as training inputs, and takes fabric samples without human subject ratings as testing inputs to predict quality ratings of the testing samples. The texture patches are classified into 4 classes according to each surface property measured by human sense of touch. For example, the human expert rates surface fiber length into 4 levels, from 1 (very short) to 4 (long), and similarly for smoothness and toweling effect. In short, the "CoMMonS_Sampled" folder includes 9 subfolders, each of which includes both sampled images and attribute class labels.

