Audio dataset for the Household Multimodal Environment (HoME). It is a collection of audio samples from Freesound.org, a collaborative database of Creative Commons-licensed sounds.
Extract the audio samples into the HoME root directory.
The CREATE database is composed of 14 hours of multimodal recordings from a mobile robotic platform based on the iRobot Create.
Extract all ZIP archives into the same directory (e.g. $HOME/Data/Create). Examples of source code (MATLAB and Python) for loading and displaying the data are included. For more details about the dataset, see the specifications document in the documentation section.
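The extraction step can be scripted. Below is a minimal sketch, assuming the archives have been downloaded to $HOME/Data/Create (the path and file names are placeholders, not part of the dataset specification):

```python
import zipfile
from pathlib import Path

# Hypothetical dataset root; adjust to wherever the archives were downloaded.
data_root = Path.home() / "Data" / "Create"

def extract_all(root: Path) -> None:
    """Extract every ZIP archive found in `root` into that same directory."""
    for archive in sorted(root.glob("*.zip")):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(root)

if data_root.exists():
    extract_all(data_root)
```

Extracting everything into one directory keeps the per-session folders produced by the archives side by side, as the loading examples expect.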
The data is provided as a set of HDF5 files, one per recording session. The files are named to include the location (room) and session identifiers, as well as the recording date and time (ISO 8601 format). The recording sessions related to a particular experiment are stored in a separate folder. Overall, the file hierarchy is as follows:
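As a quick way to see what a session file contains, the sketch below walks an HDF5 file with h5py and prints every dataset it finds. The file name and the internal dataset paths are illustrative assumptions, not the dataset's actual layout (see the specifications document for that):

```python
import h5py

# Hypothetical session file following the described naming convention:
# <location>_<session>_<ISO 8601 timestamp>.h5 (the real pattern may differ).
session_file = "kitchen_S01_2018-03-14T103000.h5"

def list_datasets(path):
    """Print every dataset in an HDF5 recording file with its shape and dtype."""
    with h5py.File(path, "r") as f:
        def visitor(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        f.visititems(visitor)
```

Running `list_datasets(session_file)` on a real session file shows the group hierarchy at a glance, which helps when deciding which sensor streams to load.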
The following sensors were recorded and made available in the CREATE dataset:
Other relevant information about the recordings is also included:
Experiment I: Navigation in Passive Environments
The robot was moving around a room, controlled by the experimenter using a joystick. Each recorded session was approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. There were few or no moving objects (including humans) in the room, and the experimenter steered the robot so that it did not hit any obstacles.
Experiment II: Navigation in Environments with Passive Human Interactions
In this experiment, the robot was moving around a room, controlled by the experimenter using a joystick. Each recorded session was approximately 15 min. There are 4 session recordings per room, with various starting points and trajectories. Note that, compared to Experiment I, there was a significant number of moving objects (including humans) in the selected rooms.
Experiment III: Navigation in Environments with Active Human Interactions
The robot was moving around a room, controlled by the experimenter using a joystick. A second experimenter lifted the robot and changed its position and orientation at random intervals (e.g. once every 10 sec). Each recorded session was approximately 15 min. There are 5 session recordings in a single room.
The authors would like to thank the ERA-NET (CHIST-ERA) and FRQNT organizations for funding this research as part of the European IGLU project.