A Multispectral Light Field Dataset for Light Field Deep Learning

Citation Author(s):
Maximilian Schambach (Karlsruhe Institute of Technology)
Michael Heizmann (Karlsruhe Institute of Technology)
Submitted by:
Maximilian Schambach
Last updated:
Tue, 05/17/2022 - 22:21


Deep learning has undoubtedly had a huge impact on the computer vision community in recent years. In light field imaging, machine learning-based applications have significantly outperformed their conventional counterparts. Furthermore, multi- and hyperspectral light fields have shown promising results in light field-related applications such as disparity or shape estimation. Yet, a multispectral light field dataset enabling data-driven approaches has been missing. Therefore, we propose a new synthetic multispectral light field dataset with depth and disparity ground truth. The dataset consists of a training, validation, and test set containing light fields of randomly generated scenes, as well as a challenge set rendered from hand-crafted scenes, enabling detailed performance assessment.



When using this dataset, please cite our corresponding paper:

Maximilian Schambach and Michael Heizmann:

"A Multispectral Light Field Dataset and Framework for Light Field Deep Learning"

IEEE Access, vol. 8, pp. 193492-193502, 2020

DOI: 10.1109/ACCESS.2020.3033056



The dataset consists of 500 randomly generated scenes as well as 7 hand-crafted scenes for detailed performance evaluation. 

The scenes are rendered as multispectral light fields of shape (11, 11, 512, 512, 13) with depth and disparity ground truth for every subaperture view.

The light fields are provided with 16 bit unsigned integer precision, the depth and disparity maps with 32 bit floating point precision.
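Since the light fields are stored as 16 bit unsigned integers, a typical first preprocessing step is normalizing them to floating point. A minimal sketch using NumPy (the array here is synthetic with a reduced spatial resolution; the channel layout follows the shapes stated above):

```python
import numpy as np

# Synthetic stand-in for one light field of shape (u, v, s, t, channels).
# The real data is 16 bit unsigned integer, as described above; we use a
# reduced spatial resolution of 64x64 instead of 512x512 for brevity.
lf_uint16 = np.random.randint(0, 2**16, size=(11, 11, 64, 64, 13), dtype=np.uint16)

# Normalize to float32 in [0, 1] for further processing.
lf = lf_uint16.astype(np.float32) / np.iinfo(np.uint16).max

print(lf.shape, lf.dtype)
```

The depth and disparity maps are already 32 bit float and need no such conversion.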

The scenes are rendered in two different camera configurations: one corresponding to a light field camera in the unfocused design (plenoptic 1.0) with a focused main lens (annotated with "F"), and one where the main lens is focused at infinity (annotated with "INF"), which is equivalent to a camera array with parallel optical axes. In the "F" configuration, disparities range from approximately -2.5 px to 3 px, where a disparity of 0 px corresponds to the focal plane. In the "INF" configuration, the focus is set to infinity, hence all disparities are positive.

We provide the raw rendered data, the abstract scene source files (so that additional ground truth can be rendered), as well as multiple pre-patched and converted versions.


Dataset Content

We provide the following dataset files for downloading:



Contains the complete RAW rendered data (both the F and INF camera configurations), including the multispectral light fields in the ENVI format, as well as traced depth maps and converted disparity maps for every subaperture view in the PFM format.

The light fields are of shape (11, 11, 512, 512, 13), the depth and disparity of shape (11, 11, 512, 512, 1) and saved as 2D images in the Subaperture Image view. 
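If you prefer not to use a library for the PFM files, the format is simple enough to handle directly: an ASCII header ("Pf" for grayscale), a dimensions line, a scale line whose sign encodes the byte order, then raw float32 rows stored bottom-to-top. A minimal reader/writer sketch, verified by a round trip on a synthetic disparity map (the function names are our own, not part of the dataset tooling):

```python
import os
import tempfile
import numpy as np

def write_pfm(path, data):
    """Write a single-channel float32 array as a grayscale, little-endian PFM.
    PFM stores rows bottom-to-top; a negative scale marks little-endian data."""
    height, width = data.shape
    with open(path, "wb") as f:
        f.write(b"Pf\n")
        f.write(f"{width} {height}\n".encode("ascii"))
        f.write(b"-1.0\n")
        f.write(np.flipud(data.astype("<f4")).tobytes())

def read_pfm(path):
    """Read a grayscale PFM file into a float32 array with top-to-bottom rows."""
    with open(path, "rb") as f:
        if f.readline().strip() != b"Pf":
            raise ValueError("Not a grayscale PFM file")
        width, height = map(int, f.readline().split())
        scale = float(f.readline())
        dtype = "<f4" if scale < 0 else ">f4"
        data = np.frombuffer(f.read(), dtype=dtype).reshape(height, width)
        return np.flipud(data).astype(np.float32)

# Round trip on a synthetic disparity map within the documented "F" range.
disp = np.linspace(-2.5, 3.0, 16 * 16, dtype=np.float32).reshape(16, 16)
path = os.path.join(tempfile.mkdtemp(), "disp.pfm")
write_pfm(path, disp)
assert np.allclose(read_pfm(path), disp)
```

This only covers the grayscale ("Pf") variant used for single-channel depth and disparity maps, not the color ("PF") variant.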

To load the light fields and disparities, you may use our Python library plenpy.

To patch the RAW data into an .h5 dataset, see the Python scripts provided in SCRIPTS.zip.

We provide pre-patched versions of the dataset (see below). If the pre-patched versions do not fit your needs (e.g. you need a different spatial resolution), use the provided patch script.

The patched .h5 data can then directly be used with our deep learning framework LFCNN.
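The patched .h5 files can be inspected with standard tools such as h5py. A minimal sketch that creates and reads back a tiny stand-in file with a plausible patch layout (the key names "light_field" and "disparity" are assumptions for illustration only; check the actual files with `f.keys()` or the `h5ls` command line tool):

```python
import os
import tempfile
import numpy as np
import h5py

# Create a tiny stand-in for a patched dataset file. The key names are
# hypothetical; inspect the real files to find the actual dataset keys.
path = os.path.join(tempfile.mkdtemp(), "patches.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("light_field",
                     data=np.zeros((4, 9, 9, 36, 36, 13), dtype=np.uint16))
    f.create_dataset("disparity",
                     data=np.zeros((4, 36, 36), dtype=np.float32))

# Read it back and check the patch layout.
with h5py.File(path, "r") as f:
    keys = sorted(f.keys())
    lf_patches = f["light_field"][:]
    disp_patches = f["disparity"][:]

print(keys, lf_patches.shape, disp_patches.shape)
```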



Contains the hand-crafted dataset challenges. Includes the RAW rendered data, as well as conversions to Numpy's .npy format for the (11, 11) and (9, 9) angular resolutions.

Further contains composed .h5 files to be used directly with our deep learning framework LFCNN.



Identical to CHALLENGES_MULTISPECTRAL but converted to RGB.


Pre-patched data


Multispectral dataset in the "F" configuration, patched to (11, 11, 36, 36, 13) light field patches with the corresponding disparity map of the central view.


Multispectral dataset in the "F" configuration, patched to (9, 9, 36, 36, 13) light field patches with the corresponding disparity map of the central view.


Same as previous, but in the "INF" camera configuration.


Same as DATASET_MULTISPECTRAL_PATCHED_F_9x9_36x36.zip but converted to RGB.


Same as DATASET_MULTISPECTRAL_PATCHED_INF_9x9_36x36.zip but converted to RGB.


The SCRIPTS.zip file contains a script to convert a set of light fields to a patched set saved in the .h5 format. 

Use these scripts to patch the raw dataset into light field patches of a self-defined shape.

See the script for comments on usage.
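The core patching logic can be sketched as follows: non-overlapping spatial patches are cut from the full light field, and a matching patch is taken from the central-view disparity map. This is an illustration of the idea, not the exact behavior of the provided scripts, and the function name is our own:

```python
import numpy as np

def patch_light_field(lf, disp, patch_size=36):
    """Split a light field of shape (u, v, s, t, ch) and its central-view
    disparity map of shape (s, t) into non-overlapping spatial patches.
    Assumes the spatial resolution is divisible by patch_size."""
    u, v, s, t, ch = lf.shape
    ps = patch_size
    lf_patches, disp_patches = [], []
    for i in range(0, s - ps + 1, ps):
        for j in range(0, t - ps + 1, ps):
            lf_patches.append(lf[:, :, i:i + ps, j:j + ps, :])
            disp_patches.append(disp[i:i + ps, j:j + ps])
    return np.stack(lf_patches), np.stack(disp_patches)

# Example with a reduced 72x72 spatial resolution: yields four 36x36 patches.
lf = np.zeros((9, 9, 72, 72, 13), dtype=np.uint16)
disp = np.zeros((72, 72), dtype=np.float32)
lf_p, disp_p = patch_light_field(lf, disp)
print(lf_p.shape)    # (4, 9, 9, 36, 36, 13)
print(disp_p.shape)  # (4, 36, 36)
```

On the real data with 512x512 spatial resolution, a 36 px patch size does not divide evenly, so the actual scripts must handle the border region (e.g. by cropping or overlapping); see the script comments for how this is done.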