SDR-HDR Video pair Saliency Dataset (SDR-HDR-VSD)

Citation Author(s):
Junhua Chen, College of Electronic and Information Engineering, Shenzhen University; Shenzhen Key Laboratory of Digital Creative Technology
Jiongzhi Lin, College of Electronic and Information Engineering, Shenzhen University; Shenzhen Key Laboratory of Digital Creative Technology
Fei Zhou, College of Electronic and Information Engineering, Shenzhen University; Guangdong Key Laboratory of Intelligent Information Processing; Peng Cheng Laboratory
Guoping Qiu, College of Electronic and Information Engineering, Shenzhen University; Guangdong-Hong Kong Joint Laboratory for Big Data Imaging and Communication; School of Computer Science, University of Nottingham
Submitted by:
lin zhi
Last updated:
Fri, 04/26/2024 - 09:58
DOI:
10.21227/th94-x785
License:

Abstract 

Visual saliency prediction has been extensively studied in the context of standard dynamic range (SDR) displays. Recently, high dynamic range (HDR) displays have become popular, since HDR videos can provide viewers with a more realistic visual experience than SDR ones. However, studies on the visual saliency of HDR videos, also called HDR saliency, remain scarce. Therefore, we establish an SDR-HDR Video pair Saliency Dataset (SDR-HDR-VSD) for saliency prediction on both SDR and HDR videos. The dataset is designed to support studies of whether the visual saliency of SDR and HDR videos differs, laying a foundation for future investigations of HDR video saliency. SDR-HDR-VSD contains 200 pairs of SDR and HDR videos at a resolution of 3840×2160, with durations ranging from 6 seconds to approximately 1 minute. The videos were selected to cover diverse scene content, including living creatures, scenery, and so on. For the fixation maps, a total of 64 participants with normal vision took part in our eye-movement experiment; they were divided into two groups of 32, one viewing the HDR videos and the other the SDR videos. The gaze points collected from all 32 participants in a group constitute the fixation map of each video frame.
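
For illustration, the following minimal Python sketch shows how the gaze points of one group's 32 participants could be accumulated into a binary fixation map at the stated 3840×2160 resolution. It is not the released fix2map.py; the function name and the (x, y) point layout are assumptions made for the example.

    import numpy as np

    HEIGHT, WIDTH = 2160, 3840  # video resolution stated in the abstract

    def build_fixation_map(gaze_points):
        """gaze_points: iterable of (x, y) pixel coordinates collected
        from the 32 viewers for one video frame."""
        fmap = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)
        for x, y in gaze_points:
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < WIDTH and 0 <= yi < HEIGHT:
                fmap[yi, xi] = 1  # mark each valid gaze location
        return fmap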

Instructions: 

The DATASET FILES consist of four parts: video, fixation, fix2map.py, and readme.

1-video: Contains 200 SDR (Standard Dynamic Range) videos and 200 HDR (High Dynamic Range) videos, in *.mp4 format.
2-fixation: Contains the eye-tracking fixation data corresponding to all SDR and HDR videos, in *.mat format.
3-fix2map.py: Code used to convert the eye-tracking fixation data into saliency maps (see the sketch after this list).
4-readme: Instructions on how to convert the videos into video frames so that they align with the eye-tracking fixation data.
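
The two processing steps above (items 3 and 4) can be pictured with the short Python sketch below. It is illustrative only; the released fix2map.py and readme define the authoritative procedure, and the file paths, .mat field name, and Gaussian sigma used here are assumptions.

    import cv2
    import numpy as np
    from scipy.io import loadmat
    from scipy.ndimage import gaussian_filter

    # 1) Dump video frames so frame indices line up with the fixation data (readme step).
    cap = cv2.VideoCapture("video/SDR/example.mp4")      # hypothetical file name
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite("frames/example/%05d.png" % idx, frame)
        idx += 1
    cap.release()

    # 2) Convert the binary fixation map of one frame into a continuous saliency map (fix2map step).
    fixations = loadmat("fixation/SDR/example.mat")["fixation"]  # hypothetical field name
    fmap = fixations[0].astype(np.float64)                       # fixation map of frame 0 (assumed layout)
    smap = gaussian_filter(fmap, sigma=30)                       # sigma chosen only for illustration
    smap /= smap.max() + 1e-12                                   # normalize to [0, 1]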

Documentation

Attachment: readme.txt (373 bytes)