S3E: A Multi-Robot Multimodal Dataset for Collaborative SLAM

Citation Author(s):
Dapeng Feng
Yuhua Qi
Shipeng Zhong
Zhiqiang Chen
Qiming Chen
Hongbo Chen
Jin Wu
Jun Ma
Submitted by:
Dapeng Feng
Last updated:
Mon, 08/12/2024 - 05:26
DOI:
10.21227/rrcw-fv27

Abstract 

The burgeoning demand for collaborative robotic systems to execute complex tasks collectively has intensified the research community's focus on advancing simultaneous localization and mapping (SLAM) in a cooperative context. Despite this interest, the scalability and diversity of existing datasets for collaborative trajectories remain limited, especially in scenarios with constrained perspectives where the generalization capabilities of collaborative SLAM (C-SLAM) are critical to the feasibility of multi-agent missions. Addressing this gap, we introduce S3E, an expansive multimodal dataset. Captured by a fleet of unmanned ground vehicles traversing four distinct collaborative trajectory paradigms, S3E comprises 13 outdoor and 5 indoor sequences. These sequences feature meticulously synchronized and spatially calibrated data streams, including 360-degree LiDAR point clouds, high-resolution stereo imagery, high-frequency inertial measurement unit (IMU) readings, and ultra-wideband (UWB) relative observations. Our dataset not only surpasses previous efforts in scale, scene diversity, and data intricacy but also provides a thorough analysis and benchmarks for both collaborative and individual SLAM methodologies. For access to the dataset and the latest information, please visit our repository at https://pengyu-team.github.io/S3E/.
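As a rough illustration of how these synchronized streams could be consumed, the Python sketch below iterates over one sequence under the assumption that it is distributed as a ROS bag; the file name, per-robot topic names, and message types are illustrative guesses, not specifications from this page (consult the repository for the actual layout).

    # Minimal sketch: counting messages per sensor stream in one S3E
    # sequence, assuming a ROS bag layout. All topic names below are
    # hypothetical -- consult https://pengyu-team.github.io/S3E/ for
    # the actual per-robot topics and message types.
    import rosbag

    TOPICS = [
        "/alpha/velodyne_points",     # 360-degree LiDAR (sensor_msgs/PointCloud2)
        "/alpha/left_camera/image",   # stereo left image (sensor_msgs/Image)
        "/alpha/right_camera/image",  # stereo right image (sensor_msgs/Image)
        "/alpha/imu/data",            # high-frequency IMU (sensor_msgs/Imu)
        "/alpha/uwb/range",           # UWB relative observations
    ]

    with rosbag.Bag("S3E_sequence.bag") as bag:
        counts = {topic: 0 for topic in TOPICS}
        for topic, msg, stamp in bag.read_messages(topics=TOPICS):
            counts[topic] += 1
        for topic, n in counts.items():
            print(f"{topic}: {n} messages")

Because the abstract states that the streams are synchronized and spatially calibrated, timestamps obtained from such an iteration can be used directly to associate measurements across sensors and robots.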

Instructions: 

For access to the dataset and the latest information, please visit our repository at https://pengyu-team.github.io/S3E/.
Dataset Files

    Files have not been uploaded for this dataset