The OnePose dataset contains over 450 video sequences of 150 objects. For each object, multiple video recordings are provided, along with camera poses and 3D bounding box annotations. The sequences are captured in different background environments, and each averages about 30 seconds in length while covering all views of the object. The dataset is randomly divided into training and validation sets. For each object in the validation set, we assign one mapping sequence for building the SfM map and one test sequence for evaluation.
We collect an SfM dataset composed of 17 object-centric, texture-poor scenes with accurate ground-truth poses. In our dataset, low-textured objects are placed on a textureless plane. For each object, we record a video sequence of around 30 seconds circling the object. The ground-truth pose of each frame is estimated by ARKit with bundle-adjustment post-processing, aided by textured markers, which are cropped out of the test images. To impose larger viewpoint changes, we sample 60 subset image bags for each scene, similar to the IMC dataset.
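The image-bag sampling described above can be sketched as follows. This is a minimal illustration, not the dataset's release code: the bag size, the seeding scheme, and the function name `sample_image_bags` are assumptions; the text only states that 60 subset bags are drawn per scene, following the IMC protocol.

```python
import random

def sample_image_bags(image_ids, num_bags=60, bag_size=10, seed=0):
    """Draw fixed-size random subsets ("bags") of frames from one scene.

    Assumptions: bag_size=10 and uniform sampling without replacement
    within each bag are illustrative choices, not specified by the text.
    """
    rng = random.Random(seed)  # fixed seed for reproducible splits
    return [rng.sample(image_ids, bag_size) for _ in range(num_bags)]

# Hypothetical scene with 120 frames.
frames = [f"frame_{i:04d}" for i in range(120)]
bags = sample_image_bags(frames)
```

Sampling disjoint random subsets of the full sequence forces evaluation under wider viewpoint changes than consecutive frames would provide.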