The Subjective Tile-based Assessment for 360° Videos (STAV360)

- Citation Author(s):
  - Stamatia Rizou (Singular Logic)
  - Andreas Panayides (CYENS Centre of Excellence)
  - Nikolaos Kantartzis (Aristotle University of Thessaloniki)
  - George Karagiannidis (Aristotle University of Thessaloniki)
  - Pavlos Lazaridis (University of Huddersfield)
  - Zaharias Zaharis (Aristotle University of Thessaloniki)
- Submitted by:
- Moatasim Mahmoud
- DOI:
- 10.21227/bcpt-3904
Abstract
STAV360 is a subjective quality dataset for 360° video. It contains six source 360° videos, from which 72 test sequences are constructed using twelve tile encoding patterns. The dataset also includes the subjective ratings and head movement trajectories of 27 users, who watched the videos on a Meta Quest 3 VR head-mounted display (HMD). All participants volunteered to take part in the experiment and gave their consent after reading a description of the experiment and its purposes.

The 360° videos in the STAV360 dataset were captured with an Insta360 X4 action camera at 8K resolution (7680 × 4320) and 30 fps. The camera was configured to encode the videos with HEVC/H.265 at its highest available bitrate (200 Mbps). Because the captured videos are stored at high bitrates, they can reliably serve as reference sequences when assessing the quality of instances encoded at lower rates.
Instructions:
Users_Ratings: Ratings (1–5) of the stitched videos, as reported by the users following the absolute category rating (ACR) method.
Users_HM_Traces: The viewing traces for each user and each video instance, given as Timestamp, VideoTime, VideoFrame, HeadYaw, HeadPitch, HeadRoll, HeadQuatW, HeadQuatX, HeadQuatY, HeadQuatZ.
Tile_Encoding_Patterns: The tiling patterns (in .json format) used to stitch the constructed videos. Each file lists the pattern name and the tile qualities used in that pattern.
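As a starting point for working with the head-movement traces, the sketch below parses one trace row and converts the head-orientation quaternion to yaw/pitch/roll. The column names come from the description above; the delimiter, the absence of a header row, the angle units, and the rotation convention (standard Z-Y-X Tait-Bryan here) are assumptions to verify against the actual trace files.

```python
import csv
import io
import math

# Column layout as documented for Users_HM_Traces; delimiter and
# header-less layout are assumptions about the on-disk format.
FIELDS = ["Timestamp", "VideoTime", "VideoFrame",
          "HeadYaw", "HeadPitch", "HeadRoll",
          "HeadQuatW", "HeadQuatX", "HeadQuatY", "HeadQuatZ"]

def quat_to_yaw_pitch_roll(w, x, y, z):
    """Convert a unit quaternion to (yaw, pitch, roll) in degrees using
    the Z-Y-X intrinsic convention. The dataset's convention may differ
    (e.g. a left-handed engine coordinate system), so cross-check
    against the HeadYaw/HeadPitch/HeadRoll columns."""
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - x * z))))
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))

def load_trace(fileobj):
    """Parse one head-movement trace into a list of per-sample dicts."""
    reader = csv.DictReader(fileobj, fieldnames=FIELDS)
    return [{k: float(v) for k, v in row.items()} for row in reader]

# Usage with one inline sample row (identity quaternion, i.e. the user
# is looking straight ahead at the initial orientation):
sample = io.StringIO("0.033,0.033,1,0.0,0.0,0.0,1.0,0.0,0.0,0.0\n")
trace = load_trace(sample)
yaw, pitch, roll = quat_to_yaw_pitch_roll(
    trace[0]["HeadQuatW"], trace[0]["HeadQuatX"],
    trace[0]["HeadQuatY"], trace[0]["HeadQuatZ"])
# For the identity quaternion, all three angles are zero.
```

The quaternion columns are the safer source of orientation: unlike Euler angles, they do not suffer from gimbal lock or wrap-around discontinuities when interpolating between samples.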