Video Shot Occlusion Detection DataSet
- Submitted by: Junhua Liao
- Last updated: Wed, 07/12/2023 - 11:38
- DOI: 10.21227/gfgt-3c35
Abstract
Occlusion detection is an active research topic, and many related datasets exist. However, because different tasks target different scenarios and define occlusion differently, these datasets vary significantly from one another, and none applies directly to the video shot occlusion detection task. To this end, we contribute the first large-scale video shot occlusion detection dataset, namely VSOD, which serves as a benchmark for evaluating the performance of shot occlusion detection methods.
The dataset consists of 200 videos of large-scale real-world events. The videos were selected from YouTube by four annotators with computer vision backgrounds, a process that took several months. Evaluating video shot occlusion detection methods requires temporal annotations, i.e., the start and end time points of each occlusion event in a video. We therefore invited an editor with seven years of experience to annotate the temporal extent of the occlusion events.
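Methods evaluated on temporal annotations of this kind are typically scored by comparing predicted intervals against the ground-truth start/end points. As an illustration only (the dataset page does not specify an official metric), a temporal intersection-over-union between two intervals can be computed like this:

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) intervals, e.g. in seconds.

    Returns 0.0 when the intervals do not overlap or the union is empty.
    """
    # Overlap length: clamp to zero when the intervals are disjoint.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    # Union = sum of lengths minus the overlap counted twice.
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0
```

For example, a predicted occlusion from 0–10 s against a ground-truth event from 5–15 s overlaps for 5 s over a 15 s union, giving an IoU of 1/3.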
The dataset comprises a Video folder and the file label_VSOD2.py. The Video folder contains the 200 video files, while label_VSOD2.py records the time intervals in which occlusions occur in each of those videos.
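Since the annotations ship as a Python source file, one way to read them is to import that file as a module. The sketch below assumes (hypothetically; the page does not document the file's internals) that label_VSOD2.py defines a dict named `labels` mapping each video filename to a list of (start, end) interval pairs:

```python
import importlib.util

def load_occlusion_labels(path="label_VSOD2.py"):
    """Load annotations from a Python label file.

    Assumes (hypothetical) the file defines a module-level dict `labels`
    mapping video filenames to lists of (start, end) time intervals.
    """
    spec = importlib.util.spec_from_file_location("label_VSOD2", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the label file
    return module.labels  # hypothetical variable name
```

If the file instead stores the intervals under a different name or structure, the attribute access on the last line would need to be adapted accordingly.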