HQA1K Hologram Perceptual Quality Assessment Dataset

Citation Author(s):
M. Hossein Eybposh, The University of North Carolina at Chapel Hill
Changjia Cai, The University of North Carolina at Chapel Hill
Aram Moossavi, The University of North Carolina at Chapel Hill
Jose Rodriguez-Romaguera, The University of North Carolina at Chapel Hill
Nicolas Pégard, The University of North Carolina at Chapel Hill
Submitted by:
Nicolas Pégard
Last updated:
Mon, 07/10/2023 - 13:04
DOI:
10.21227/91ka-4s51

Abstract 

The HQA1K dataset was developed for assessing the quality of Computer Generated Holography (CGH) image renderings based on direct human input.
HQA1K comprises 1,000 pairs of natural images matched to simulated CGH renderings of varying quality, yielding a diverse set of data for evaluating image quality algorithms and models.
The 1,000 reference images were sourced from KonIQ-10k [1]. Every image pair in the dataset was assessed by 13 individual human observers, who rated the quality of each match on a scale of 1 to 5, where a '5' corresponds to a perfect match and a '1' to a poor match. All observers rated all image pairs in the dataset in randomized order.
The ratings from the 13 observers were averaged to derive a Mean Opinion Score (MOS), providing a human-rated standard of comparison for each image pair. These MOS scores serve as a benchmark for the development and evaluation of Image Quality Assessment (IQA) algorithms.
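As a concrete illustration, the MOS for a pair is simply the arithmetic mean of its 13 individual ratings. The ratings below are made up for this sketch, not values from the dataset:

```python
import numpy as np

# Hypothetical ratings (1-5) from 13 observers for one image pair
ratings = np.array([4, 5, 4, 3, 4, 5, 4, 4, 3, 4, 5, 4, 4])

# The Mean Opinion Score is the average of the individual ratings
mos = ratings.mean()
print(round(mos, 2))  # 4.08
```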

 

HQA1K is freely available to the research community, just like its parent dataset KonIQ-10k. We encourage users to cite the associated publication [2].

 

The HQA1K dataset is stored in the HDF5 format, and includes three distinct datasets: "CGH", "Tar", and "MOS".

"CGH" contains the CGH simulated renderings.

"Tar" includes the original natural images.

"MOS" contains the Mean Opinion Scores corresponding to each image pair.

 

 

M. Hossein Eybposh, Changjia Cai, Aram Moossavi, Jose Rodriguez-Romaguera, and Nicolas C. Pégard, 2023, The University of North Carolina at Chapel Hill.

 

[1] V. Hosu, H. Lin, T. Sziranyi and D. Saupe, "KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment," in IEEE Transactions on Image Processing, vol. 29, pp. 4041-4056, 2020, doi: 10.1109/TIP.2020.2967829.

 

[2] M. Hossein Eybposh, Changjia Cai, Aram Moossavi, Jose Rodriguez-Romaguera, and Nicolas C. Pégard, “ConIQA and HQA1k: Method and Dataset for Perceptual Image Quality Assessment With Consistency Training”, manuscript in preparation.

Instructions: 

To access the HQA1K dataset, users can employ the following Python code after unzipping the downloaded file:

import numpy as np
import h5py as h5

# Path to the unzipped HDF5 file
file_path = 'HQA1K.h5'

# Load the three datasets into NumPy arrays
with h5.File(file_path, 'r') as f:
    targets = f['Tar'][:]      # original natural images
    renderings = f['CGH'][:]   # simulated CGH renderings
    mos = f['MOS'][:]          # Mean Opinion Scores
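Once the arrays are loaded, the MOS values can be used to select or rank image pairs by perceived quality. A minimal sketch with placeholder scores (the real MOS array holds 1,000 entries):

```python
import numpy as np

# Placeholder MOS values for illustration only
mos = np.array([4.1, 2.3, 3.7, 1.9, 4.8])

# Indices of image pairs, highest perceived quality first
order = np.argsort(mos)[::-1]
print(order)  # [4 0 2 1 3]
```

The same index array can then be applied to the `targets` and `renderings` arrays to retrieve the corresponding image pairs.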
