Microfluidic Lab-on-a-dish (3D printing).


O.V. Gradov group, INEPCP RAS, 2017-2018.





This database includes data measured with Qualcomm's 60 GHz mmWave radar.

It includes:

Face signature database: radar face-scan data from 206 individuals for face recognition.




This dataset consists of raw EEG data from 48 subjects who participated in a multitasking workload experiment using the SIMKAP multitasking test. The subjects’ brain activity at rest was also recorded before the test and is included as well. The Emotiv EPOC device, with a sampling frequency of 128 Hz and 14 channels, was used to obtain the data, with 2.5 minutes of EEG recording for each case. Subjects were also asked to rate their perceived mental workload after each stage on a scale of 1 to 9; the ratings are provided in a separate file.


The data for each subject follows the naming convention subno_task.txt. For example, sub01_lo.txt is the raw EEG data for subject 1 at rest, while sub23_hi.txt is the raw EEG data for subject 23 during the multitasking test. The rows of each data file correspond to the samples in the recording, and the columns correspond to the 14 channels of the EEG device: AF3, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8, AF4, respectively.
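Under this convention, a session file can be loaded into a samples-by-channels array. A minimal Python sketch (the delimiter used inside the text files is an assumption; check the actual files before use):

```python
import numpy as np

# Channel order as documented for the Emotiv EPOC headset
CHANNELS = ["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
            "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"]

def load_session(path):
    """Load one raw EEG recording: rows are samples, columns are the 14 channels."""
    data = np.loadtxt(path, delimiter=",")  # comma delimiter is an assumption
    assert data.shape[1] == len(CHANNELS), "expected 14 channels"
    return data

# Example (file name from the text): data = load_session("sub01_lo.txt")
# At 128 Hz, a 2.5 min recording is 128 * 150 = 19200 samples per file.
```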

The ratings for all subjects are given in a separate file, ratings.txt, in comma-separated format: subject number, rating at rest, rating for the test. For example, "1, 2, 8" means subject 1 gave a rating of 2 for “at rest” and a rating of 8 for “test”. Note that ratings for subjects 5, 24 and 42 are unavailable.
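A short sketch for parsing ratings.txt into a dictionary keyed by subject number (splitting on commas is assumed to match the format above; the three subjects without ratings are simply absent):

```python
def load_ratings(lines):
    """Parse 'subject, rest rating, test rating' lines into {subject: (rest, test)}."""
    ratings = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        sub, rest, test = (int(x) for x in line.split(","))
        ratings[sub] = (rest, test)
    return ratings

# Example using the line from the text:
print(load_ratings(["1, 2, 8"]))  # {1: (2, 8)}
```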


The frequency-domain measurement of the scattering parameter S21 of the wireless channel was carried out using a ZVB14 Vector Network Analyzer (VNA) from Rohde & Schwarz. The measurement system consists of the VNA, low-loss RF cables, and omnidirectional antennas at the transmitter and receiver ends. The transmitter and receiver heights were fixed at 1.5 m. A program script was written for the VNA to measure 10 consecutive sweeps; each sweep contains 601 frequency sample points with a spacing of 0.167 MHz to cover a 100 MHz band centered at 2.4 GHz.
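The sweep parameters above imply the following frequency axis. This is only a sketch of the grid, not part of the dataset tooling; note that 601 points spanning a 100 MHz band give a spacing of 100/600 ≈ 0.1667 MHz, quoted as 0.167 MHz in the text:

```python
import numpy as np

CENTER_HZ = 2.4e9   # band center
SPAN_HZ = 100e6     # nominal sweep span
N_POINTS = 601      # frequency sample points per sweep

# Frequency axis for one sweep, centered at 2.4 GHz
freqs = np.linspace(CENTER_HZ - SPAN_HZ / 2, CENTER_HZ + SPAN_HZ / 2, N_POINTS)
spacing_mhz = (freqs[1] - freqs[0]) / 1e6
print(f"{spacing_mhz:.4f} MHz")  # ~0.1667 MHz point spacing
```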


Visual tracking methods have developed rapidly in recent years. In particular, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state of the art in tracking. The improvement in DCF tracking performance is predominantly attributed to powerful features and sophisticated online learning formulations. However, problems arise if the tracker learns from its training samples indiscriminately.


A novel surface-enhanced Raman scattering (SERS) substrate was designed and fabricated on silicon wafers using integrated-circuit (IC) processes such as photolithography and etching. Its surface morphology and Raman activity were characterized and tested. The relationship between the substrate’s photolithographic pattern and its Raman activity, stability and reproducibility was analyzed and verified.


The CREATE database is composed of 14 hours of multimodal recordings from a mobile robotic platform based on the iRobot Create.


Provided Files

  • CREATE-hdf5-e1.zip      :   HDF5 files for Experiment I
  • CREATE-hdf5-e2.zip      :   HDF5 files for Experiment II
  • CREATE-hdf5-e3.zip      :   HDF5 files for Experiment III
  • CREATE-preview.zip      :   Preview MP4 videos and PDF images
  • CREATE-doc-extra.zip    :   Documentation: CAD files, datasheets and images
  • CREATE-source-code.zip  :   Source code for recording, preprocessing and examples


Extract all ZIP archives in the same directory (e.g. $HOME/Data/Create). Examples of source code (MATLAB and Python) for loading and displaying the data are included. For more details about the dataset, see the specifications document in the documentation section.
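Beyond the bundled examples, a session file can also be inspected directly with the h5py library. A minimal sketch (the example file name is hypothetical; the actual internal group layout should be taken from the specifications document):

```python
import h5py

def list_datasets(path):
    """Return (name, shape) for every dataset in an HDF5 session file."""
    found = []
    def visit(name, obj):
        # visititems walks groups and datasets; keep only datasets
        if isinstance(obj, h5py.Dataset):
            found.append((name, obj.shape))
    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return found

# Example (hypothetical file name):
# for name, shape in list_datasets("session.h5"):
#     print(name, shape)
```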

Dataset File Format

The data is provided as a set of HDF5 files, one per recording session. The files are named to include the location (room) and session identifiers, as well as the recording date and time (ISO 8601 format). The recording sessions related to a particular experiment are stored in a separate folder. Overall, the file hierarchy is as follows:



Summary of Available Sensors

The following sensors were recorded and made available in the CREATE dataset:

  • Left and right RGB cameras (320x240, JPEG, 30 Hz sampling rate)
  • Left and right optical flow fields (16x12 sparse grid, 30 Hz sampling rate)
  • Left and right microphones (16000 Hz sampling rate, 64 ms frame length)
  • Inertial measurement unit: accelerometer, gyroscope, magnetometer (90 Hz sampling rate)
  • Battery state (50 Hz sampling rate)
  • Left and right motor velocities (50 Hz sampling rate)
  • Infrared and contact sensors (50 Hz sampling rate)
  • Odometry (50 Hz sampling rate)
  • Atmospheric pressure (50 Hz sampling rate)
  • Air temperature (1 Hz sampling rate)


Other relevant information about the recordings is also included:

  • Room location, date and time of the session.
  • Stereo calibration parameters for the RGB cameras.


Summary of Experiments

Experiment I: Navigation in Passive Environments

The robot was moving around a room, controlled by the experimenter using a joystick. Each recorded session was approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. There were few to no moving objects (including humans) in the room. The experimenter steered the robot so that it did not hit any obstacles.

Experiment II: Navigation in Environments with Passive Human Interactions

In this experiment, the robot was moving around a room, controlled by the experimenter using a joystick. Each recorded session was approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. Note that, compared to Experiment I, there was a significant number of moving objects (including humans) in the selected rooms.

Experiment III: Navigation in Environments with Active Human Interactions

The robot was moving around a room, controlled by the experimenter using a joystick. A second experimenter lifted the robot and changed its position and orientation at random intervals (e.g. once every 10 sec). Each recorded session was approximately 15 min. There are 5 session recordings in a single room. 


The authors would like to thank the ERA-NET (CHIST-ERA) and FRQNT organizations for funding this research as part of the European IGLU project.



This is the data competition hosted by the IEEE Machine Learning for Signal Processing (MLSP) Technical Committee as part of the 27th IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2017), Tokyo, Japan. This year's competition is based on a dataset on source separation for seismic data acquisition, kindly provided by Petroleum Geo-Services (PGS).

Last Updated On: Tue, 05/01/2018 - 15:07
Citation Author(s): IEEE MLSP Technical Committee

The dataset contains depth frames collected with a Microsoft Kinect v1 during the execution of food and drink intake movements.