Video Quality Assessment
Investigating how people perceive virtual reality videos in the wild (i.e., those captured by everyday users) is a crucial and challenging task in VR-related applications, owing to the complex, authentic distortions that are localized in space and time. Existing panoramic video databases only consider synthetic distortions, assume fixed viewing conditions, and are limited in size. To overcome these shortcomings, we construct the VR Video Quality in the Wild (VRVQW) database, one of the first of its kind, containing 502 user-generated videos with diverse content and distortion characteristics.
This is a dataset of 120 error-concealed video clips. The clips were generated from 6 CIF, 6 HD, and 6 Full-HD test video sequences. Each of these sequences was error-concealed with four Error Concealment (EC) techniques: Motion Copy, Motion Vector Extrapolation, Decoder Motion Vector Estimation (DMVE) combined with the Boundary Matching Algorithm (BMA), and Adaptive Error Concealment Order Determination (AECOD). The dataset also includes the original (loss-free) video clips, as well as the subjective rankings of the error-concealed videos.
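The dataset does not ship reference code, but to illustrate the simplest of the four techniques, the sketch below implements the Motion Copy idea in Python/NumPy under simplifying assumptions: for each lost macroblock, the motion vector of the co-located block in the previous frame is reused to motion-compensate from the last correctly decoded frame. The function and parameter names (motion_copy_conceal, prev_mvs, lost_blocks) are hypothetical, the motion-vector sign convention is simplified, and a real decoder would operate on the bitstream rather than on decoded pixel arrays.

```python
import numpy as np

def motion_copy_conceal(ref_frame, prev_mvs, lost_blocks, block=16):
    """Fill lost macroblocks by reusing ("copying") the motion vectors of the
    co-located blocks from the previous frame and motion-compensating from the
    last decoded frame. Hypothetical helper, not the dataset's tool chain.

    ref_frame   : (H, W) or (H, W, C) last correctly decoded frame
    prev_mvs    : (H//block, W//block, 2) per-block motion vectors (dy, dx)
                  taken from the previous frame
    lost_blocks : iterable of (block_row, block_col) indices of lost blocks
    """
    h, w = ref_frame.shape[:2]
    concealed = ref_frame.copy()  # simple fallback: start from a frame copy
    for by, bx in lost_blocks:
        dy, dx = prev_mvs[by, bx]          # co-located MV from previous frame
        y0, x0 = by * block, bx * block    # top-left corner of the lost block
        # Source position shifted by the copied MV, clamped to frame bounds.
        sy = int(np.clip(y0 + dy, 0, h - block))
        sx = int(np.clip(x0 + dx, 0, w - block))
        concealed[y0:y0 + block, x0:x0 + block] = \
            ref_frame[sy:sy + block, sx:sx + block]
    return concealed
```

The other techniques in the dataset differ mainly in how the replacement motion vector is obtained: Motion Vector Extrapolation and DMVE estimate it at the decoder, BMA selects the candidate that minimizes the mismatch along the block boundary, and AECOD additionally adapts the order in which lost blocks are concealed.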