This dataset accompanies a paper titled "Detection of Metallic Objects in Mineralised Soil Using Magnetic Induction Spectroscopy". 

Instructions: 

Each sweep of the detector over an object is stored in a separate file, using the following file naming convention: ___.h5, where the last field is a globally unique identifier for the file. Each file is an HDF5 file generated using Pandas and contains a single DataFrame with 8 columns. The first three columns hold the x-, y- and z-position (in cm) relative to an arbitrary datum, which stays constant for all sweeps over all objects in a given combination of soil and depth. The remaining five columns contain the complex transimpedance values measured by the MIS system, after calibration against the ferrite piece. Due to experimental constraints, there is no data for one of the rocks buried at 10 cm depth in "Rocky" soil.
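As a minimal sketch of how such a file could be read (the file name here is hypothetical; since each file holds a single DataFrame, pandas can locate it without an explicit key):

```python
import pandas as pd

# Load one sweep; when an HDF5 file holds a single pandas object,
# read_hdf can find it without an explicit key.
df = pd.read_hdf("example_sweep.h5")  # hypothetical file name

# First three columns: x-, y- and z-position (cm) relative to the datum.
positions = df.iloc[:, :3]

# Remaining five columns: complex transimpedance values,
# calibrated against the ferrite piece.
transimpedance = df.iloc[:, 3:]

print(df.shape)   # (n_samples, 8)
print(df.dtypes)  # position columns are float, the rest complex
```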


The Heidelberg Spiking Datasets comprise two spike-based classification datasets: the Spiking Heidelberg Digits (SHD) dataset and the Spiking Speech Commands (SSC) dataset. The latter is derived from Pete Warden's Speech Commands dataset (https://arxiv.org/abs/1804.03209), whereas the former is based on a spoken digit dataset recorded in-house and included in this repository. Both datasets were generated by applying a detailed inner ear model to audio recordings. We distribute the input spikes and target labels in HDF5 format.

Instructions: 

We provide two distinct classification datasets for spiking neural networks:

| Name | Classes | Samples (train/valid/test) | Parent dataset | URL |
| ---- | ------- | -------------------------- | -------------- | --- |
| SHD | 20 | 8332/-/2088 | Heidelberg Digits (HD) | https://compneuro.net/datasets/hd_audio.tar.gz |
| SSC | 35 | 75466/9981/20382 | Speech Commands v0.2 | https://arxiv.org/abs/1804.03209 |

Both datasets are based on the respective audio datasets. Spikes in 700 input channels were generated using an artificial cochlea model. The SHD consists of approximately 10000 high-quality, aligned studio recordings of the spoken digits 0 to 9 in both German and English. Recordings exist of 12 distinct speakers, two of whom are only present in the test set. The SSC is based on the Speech Commands release by Google, which consists of utterances in 35 word categories recorded from a larger number of speakers under less controlled conditions.
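A minimal sketch for loading one of the distributed files follows; the file name and the internal layout (ragged per-sample spike times and channel indices under a spikes group, plus one label per sample) are assumptions based on the dataset's usual distribution, not verified here:

```python
import h5py

# Open one of the distributed HDF5 files (file name assumed).
with h5py.File("shd_train.h5", "r") as f:
    times = f["spikes/times"][:]   # per-sample arrays of spike times (assumed layout)
    units = f["spikes/units"][:]   # per-sample arrays of input channel ids, 0-699
    labels = f["labels"][:]        # one class label per sample (0-19 for SHD)

print(len(labels), "samples")
print(labels[0], times[0][:5], units[0][:5])
```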


The CREATE database is composed of 14 hours of multimodal recordings from a mobile robotic platform based on the iRobot Create.

Instructions: 

Provided Files

  • CREATE-hdf5-e1.zip     : HDF5 files for Experiment I
  • CREATE-hdf5-e2.zip     : HDF5 files for Experiment II
  • CREATE-hdf5-e3.zip     : HDF5 files for Experiment III
  • CREATE-preview.zip     : Preview MP4 videos and PDF images
  • CREATE-doc-extra.zip   : Documentation: CAD files, datasheets and images
  • CREATE-source-code.zip : Source code for recording, preprocessing and examples


Extract all ZIP archives in the same directory (e.g. $HOME/Data/Create). Examples of source code (MATLAB and Python) for loading and displaying the data are included. For more details about the dataset, see the specifications document in the documentation section.

Dataset File Format

The data is provided as a set of HDF5 files, one per recording session. The files are named to include the location (room) and session identifiers, as well as the recording date and time (ISO 8601 format). The recording sessions related to a particular experiment are stored in a separate folder. Overall, the file hierarchy is as follows:

<EXP_ID>/<LOC_ID>/<EXP_ID>_<LOC_ID>_<SESS_ID>_<DATETIME>.h5
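As an illustrative sketch, the sessions of one experiment can be enumerated by matching this hierarchy; the root directory follows the extraction example above, and the experiment identifier "E1" is an assumption:

```python
import glob
import os

# Hypothetical root directory, matching the extraction example above.
root = os.path.expanduser("~/Data/Create")

# Enumerate all sessions of one experiment; "E1" is an assumed
# experiment id following the <EXP_ID>/<LOC_ID>/... convention.
sessions = sorted(glob.glob(os.path.join(root, "E1", "*", "E1_*.h5")))
for path in sessions:
    print(os.path.basename(path))
```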


Summary of Available Sensors

The following sensors were recorded and made available in the CREATE dataset:

  • Left and right RGB cameras (320x240, JPEG, 30 Hz sampling rate)
  • Left and right optical flow fields (16x12 sparse grid, 30 Hz sampling rate)
  • Left and right microphones (16000 Hz sampling rate, 64 ms frame length)
  • Inertial measurement unit: accelerometer, gyroscope, magnetometer (90 Hz sampling rate)
  • Battery state (50 Hz sampling rate)
  • Left and right motor velocities (50 Hz sampling rate)
  • Infrared and contact sensors (50 Hz sampling rate)
  • Odometry (50 Hz sampling rate)
  • Atmospheric pressure (50 Hz sampling rate)
  • Air temperature (1 Hz sampling rate)


Other relevant information about the recordings is also included:

  • Room location, date and time of the session.
  • Stereo calibration parameters for the RGB cameras.
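Since the internal HDF5 layout is detailed in the specifications document rather than here, a generic walk of one session file is a safe way to see which of the above streams it contains; the file name below is hypothetical:

```python
import h5py

def describe(name, obj):
    # Print every dataset in the file with its shape and dtype;
    # the actual group names are defined in the specifications document.
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

# Hypothetical session file name following the naming convention above.
with h5py.File("E1_C1_S1_20160101T120000.h5", "r") as f:
    f.visititems(describe)
```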


Summary of Experiments

Experiment I: Navigation in Passive Environments

The robot moved around a room, controlled by the experimenter using a joystick. Each recorded session lasted approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. There were few to no moving objects (including humans) in the room, and the experimenter steered the robot so as not to hit any obstacles.

Experiment II: Navigation in Environments with Passive Human Interactions

As in Experiment I, the robot moved around a room, controlled by the experimenter using a joystick. Each recorded session lasted approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. In contrast to Experiment I, there was a significant number of moving objects (including humans) in the selected rooms.

Experiment III: Navigation in Environments with Active Human Interactions

The robot moved around a room, controlled by the experimenter using a joystick. A second experimenter lifted the robot and changed its position and orientation at random intervals (e.g. once every 10 s). Each recorded session lasted approximately 15 min. There are 5 session recordings, all in a single room.

Acknowledgements

The authors would like to thank the ERA-NET (CHIST-ERA) and FRQNT organizations for funding this research as part of the European IGLU project.

