IST - Subjective Evaluation of Viewport Rendering Projections
- Submitted by: FALAH RAHIM
- Last updated: Thu, 11/10/2022 - 11:36
- DOI: 10.21227/vb16-8h71
Abstract
A crowdsourced subjective evaluation of viewport images obtained with several sphere-to-plane projections was conducted. The viewport images were rendered from eight omnidirectional images in equirectangular format. The pairwise comparison (PC) method was chosen for the subjective evaluation of the projections. More details about the viewport images and the subjective evaluation procedure can be found in [1].
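PC votes are usually converted to a continuous quality scale with a preference model such as Bradley-Terry. The sketch below is a generic illustration of that step (the function name and the MM iteration are mine; it is not necessarily the analysis used in [1]):

```python
import numpy as np

def bradley_terry(wins, n_iter=100):
    """Estimate Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times stimulus i was preferred over stimulus j.
    Uses the standard MM (minorization-maximization) update; hypothetical
    helper for illustration only.
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()  # total wins of stimulus i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()  # normalize so strengths sum to 1
    return p
```

For two stimuli compared 4 times with a 3:1 preference, the normalized strengths converge to 0.75 and 0.25, i.e. the observed preference ratio.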
- The compared projections are: the Pannini projection (PP) with fixed parameters (d = 0.5, vc = 0); the general perspective projection (GPP) with fixed parameter d = 0.5; the optimized Pannini (OP) and multiple optimized Pannini (MOP) projections proposed in [2]; and the globally adapted Pannini projection (GA-PP) and the globally and locally adapted Pannini projection (GLA-PP) proposed in [1].
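For reference, the plain Pannini projection with vc = 0 maps a sphere direction (longitude φ, latitude θ, taken relative to the viewing direction) to image coordinates via S = (d + 1)/(d + cos φ), x = S sin φ, y = S tan θ. A minimal sketch, assuming this standard formulation (the helper name is illustrative, not from the dataset):

```python
import math

def pannini_project(lon, lat, d=0.5):
    """Map a sphere direction (lon, lat in radians, relative to the
    viewing direction) to Pannini image coordinates, with vc = 0 as in
    the fixed-parameter PP configuration described above.

    d = 0 reduces to the rectilinear (perspective) projection;
    d = 1 gives a cylindrical stereographic rendering.
    """
    s = (d + 1.0) / (d + math.cos(lon))
    x = s * math.sin(lon)
    y = s * math.tan(lat)
    return x, y
```

With d = 0 the mapping reduces to x = tan φ, the familiar rectilinear case, which is a quick sanity check on the formula.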
- To cover different image content characteristics, two groups of images, G1 and G2, were taken from two different datasets: G1 from [2] and G2 from [3]. For each image, one viewing direction was considered, and one viewport was rendered per compared projection (six viewports per image in G1, four in G2).
- G1: 360º images (Dance, Bedroom, Office 1, Office 2), Dataset ([2]), Projections (GPP, PP, OP, MOP, GA-PP, GLA-PP), Number of Comparisons = 60.
- G2: 360º images (Car repair, Conference, Dinner 2, Bus), Dataset ([3]), Projections (GPP, PP, GA-PP, GLA-PP), Number of Comparisons = 24.
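The comparison counts above follow from a full pairwise design: every pair of projections is compared once per image. A quick check (the helper name is illustrative):

```python
from math import comb

def n_comparisons(n_projections, n_images):
    """Comparisons in a full pairwise design: one vote per projection
    pair per image."""
    return comb(n_projections, 2) * n_images

# G1: 6 projections x 4 images -> 15 pairs x 4 = 60 comparisons
# G2: 4 projections x 4 images ->  6 pairs x 4 = 24 comparisons
```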
[1] F. Jabar, J. Ascenso, and M.P. Queluz, “Globally and Locally Optimized Pannini Projection for Viewport Rendering of 360 Images”, Submitted to J. Vis. Commun. Image Represent., Oct. 2022.
[2] Y. W. Kim, D. Jo, C. Lee, H. Choi, Y. H. Kwon, and K. Yoon, “Automatic Content-aware Projection for 360° Videos,” in 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, Dec. 2017.
[3] J. Gutiérrez, E. J. David, A. Coutrot, M. Silva, and P. Le Callet, “Introducing UN Salient360! Benchmark: A Platform for Evaluating Visual Attention Models for 360° Contents,” in International Conference on Quality of Multimedia Experience (QoMEX), Sardinia, Italy, May 2018.