The dataset consists of images acquired with the Particle Image Velocimetry (PIV) and Planar Laser-Induced Fluorescence (PLIF) methods. These macro-scale experimental techniques allow fluid-dynamic knowledge to inform molecular communication performance and design. Fluid-dynamic experiments can capture latent features that allow the receiver to detect coherent signal structures and infer transmit parameters for optimal decoding.

Instructions: 

The data are provided in 22 zip files; the first two relate to the PIV test and the others to the PLIF test. The PIV files contain 654 and 1392 images for frame rates of 15 and 90 fps, respectively. The image format is .bmp, and the first image in each set is the calibration image. The 20 remaining zip files are named PLIF-1 to PLIF-20, and each contains approximately 150 images in .tif format. The first 25 images in each file are calibration images; the rest are images of the laser plane after injection of the fluorescent dye. All images were recorded at a frame rate of 2 fps, so the total recording time can be calculated from the number of images per file (see the sketch below). MATLAB code that can help with image processing is also provided in each file.
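As a worked example of the timing calculation above, here is a minimal Python sketch for one PLIF archive. The folder name "PLIF-1" is assumed to be the extracted zip file, and the plain file globbing is an illustrative assumption; the provided MATLAB code remains the reference for actual image processing.

    # Minimal sketch: separate calibration frames from measurement frames in one
    # extracted PLIF archive and compute the total recording time at 2 fps.
    import glob

    FRAME_RATE_HZ = 2      # PLIF images are recorded at 2 fps
    N_CALIBRATION = 25     # the first 25 images per PLIF file are calibration images

    frames = sorted(glob.glob("PLIF-1/*.tif"))   # sort by name to keep frame order

    calibration_frames = frames[:N_CALIBRATION]
    measurement_frames = frames[N_CALIBRATION:]

    # Total recording time follows from the number of images and the frame rate.
    recording_time_s = len(measurement_frames) / FRAME_RATE_HZ
    print(f"{len(measurement_frames)} measurement frames -> {recording_time_s:.1f} s recorded")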


[Now uploading... Total size is 300GB.]

Instructions: 

Please see the Information Hiding Criteria Ver. 6 document.

 

This dataset includes the IHC standard movies, which are high-quality raw movies available in 2K and 4K sizes.

The original movies are sampled at 16-bit depth.

 

 

  • 4K-size 16-bit raw movies:

 

 

  • 2K-size 16-bit raw movies:
    1. Basketball_00000001.WAV
    2. Lego_00000001.WAV
    3. Library_00000001.WAV
    4. Walk1_00000001.WAV
    5. Walk2_00000001.WAV

 

 

  • 2K-size 8-bit raw movies: [5.3GB each]

Each 16-bit raw movie is quantized to an 8-bit-depth uncompressed AVI file; a minimal sketch of this quantization follows the file list below.

  1.  Basketball.avi
  2.  Lego.avi
  3.  Library.avi
  4.  Walk1.avi
  5.  Walk2.avi
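As an illustration of the bit-depth reduction described above, the following Python sketch (assuming NumPy) quantizes a 16-bit frame to 8 bits by keeping the most significant byte. This is a plausible reading of the description, not necessarily the exact conversion used to produce the distributed AVI files.

    import numpy as np

    def quantize_16_to_8(frame_16bit: np.ndarray) -> np.ndarray:
        """Map a uint16 image to uint8 by dropping the 8 least significant bits."""
        return (frame_16bit >> 8).astype(np.uint8)

    # Example with a synthetic frame; real frames would be read from the raw movies.
    frame = np.random.randint(0, 2**16, size=(1080, 2048), dtype=np.uint16)
    print(quantize_16_to_8(frame).dtype, int(quantize_16_to_8(frame).max()))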

 

 

[ Acknowledgments ]

The 2K raw video clips were taken with a Canon Cinema EOS C500 system with support from Canon Inc. The IHC Committee would like to thank this company for its valuable contributions.

 

 


Endoscopy is a widely used clinical procedure for the early detection of cancers in hollow organs such as the oesophagus, stomach, and colon. Computer-assisted methods for accurate and temporally consistent localisation and segmentation of diseased regions of interest enable precise quantification and mapping of lesions from clinical endoscopy videos, which is critical for monitoring and surgical planning. Innovations have the potential to improve current medical practices and refine healthcare systems worldwide.


This dataset contains light-field microscopy images and converted sub-aperture images. 

 

The folder named "Light-fieldMicroscopeData" contains raw light-field data. The file LFM_Calibrated_frame0-9.tif contains 9 frames of raw light-field microscopy images that have been calibrated. Each frame corresponds to a specific depth; the 9 frames cover a depth range from 0 um to 32 um with a step size of 4 um. Files named LFM_Calibrated_frame?.png are PNG versions of the individual frames.
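A minimal Python sketch of how the frame index maps to imaging depth is given below; it assumes the tifffile package is available for reading the multi-frame TIFF.

    import tifffile

    DEPTH_STEP_UM = 4   # the 9 frames cover 0 um to 32 um in 4 um steps

    stack = tifffile.imread("Light-fieldMicroscopeData/LFM_Calibrated_frame0-9.tif")

    for index, frame in enumerate(stack):
        depth_um = index * DEPTH_STEP_UM
        print(f"frame {index}: depth {depth_um} um, shape {frame.shape}")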

 


Experimental results.


These three datasets cover Western, Chinese, and Japanese food and are used for evaluating food instance counting and segmentation.


Since no image-based personality dataset existed, we used the ChaLearn dataset to create a new dataset that meets the characteristics required for this work, i.e., selfie images in which only one person appears and their face is visible, labeled with the person's apparent personality in the photo.

Instructions: 

The Portrait Personality dataset is a collection of selfies based on the ChaLearn First Impressions dataset. It consists of 30,935 selfies labeled with apparent personality. Each selfie file is named with the prefix of the original video followed by the frame number. "bigfive_labels.csv" contains the labels for each trait of the Big Five model, keyed by the prefix (the name of the original video). Video frames and models are available at https://github.com/miguelmore/personality.
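A minimal Python sketch (assuming pandas) of how a selfie file name can be matched to its Big Five labels is shown below. The column name "prefix", the underscore separator, and the example file name are assumptions; the actual header is defined in bigfive_labels.csv.

    import pandas as pd

    labels = pd.read_csv("bigfive_labels.csv")

    # Selfie files are named <original video prefix>_<frame number>, so stripping
    # the trailing frame number recovers the prefix used in the label file.
    selfie_name = "example_video_0042.jpg"            # hypothetical file name
    video_prefix = selfie_name.rsplit("_", 1)[0]      # assumes an underscore separator

    row = labels[labels["prefix"] == video_prefix]    # column name "prefix" is an assumption
    print(row)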


We provide a dataset with synthetic white images for the Lytro Illum light field camera with precisely known microlens center coordinates.

The dataset consists of white images taken at different zoom settings as well as different microlens array offsets and rotations.

The white images have been ray-traced using a thin-lens camera model. The synthesized white images incorporate natural as well as mechanical vignetting effects.

Instructions: 

The white images are provided as 16-bit PNG files, and each white image is uniquely identified by an 8-digit hex code. For every white image, a .csv file is provided containing the microlens center positions (in camera coordinates, in meters), where z = 0 corresponds to the main lens plane. The sensor position (which is equal to the image distance of the camera configuration) can be calculated from the metadata provided in the .metadata file of each white image, which is in JSON format (a minimal loading sketch follows the list below). The metadata file provides:

  • the sensor size in px

  • the pixel pitch in m

  • the focus distance in m

  • the main lens radius in m

  • the microlens radius in m

  • the camera's f-number

  • the microlens array rotation angle in rad

  • the amount of grid noise
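A minimal Python sketch for loading one white image together with its microlens centers and metadata is given below; it assumes NumPy and imageio are available, and the hex code "0123abcd", the comma delimiter of the .csv file, and the exact file extensions are placeholders taken from the description above.

    import json
    import imageio.v2 as imageio
    import numpy as np

    white_image = imageio.imread("0123abcd.png")         # 16-bit PNG white image
    centers = np.loadtxt("0123abcd.csv", delimiter=",")  # microlens centers (camera coordinates, meters); comma delimiter assumed

    with open("0123abcd.metadata") as f:
        metadata = json.load(f)                          # the .metadata file is JSON-formatted

    print(white_image.dtype, centers.shape)
    print(sorted(metadata.keys()))                       # inspect the provided metadata fields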

 

Please have a look at the provided GitLab repository (https://gitlab.com/iiit-public/papers/ml-grid-estimation-lf-decoding-and...) for examples of usage.


Calibration datasets used in the article "Standard Plenoptic Cameras Mapping to Camera Arrays and Calibration based on DLT". These datasets were acquired with a Lytro Illum camera using two calibration grids of different sizes: an 8 × 6 grid of 211 × 159 mm (Big Pattern) with approximately 26.5 mm cells, and a 20 × 20 grid of 121.5 × 122 mm (Small Pattern) with approximately 6.1 mm cells. Each acquired dataset is composed of 66 fully observable poses of the calibration pattern.

Instructions: 

The dataset is divided into the following zip files:

  • GD44M00145_WhiteImages: White image database of the Lytro Illum camera used to acquire the datasets.

  • Big Pattern 2D - Full: Calibration dataset with 66 poses of the big calibration grid.

  • Big Pattern 2D - Sample: Calibration dataset with 10 poses of the big calibration grid.

  • Big Pattern 2D - Sample Reduced: Calibration dataset with 5 poses of the big calibration grid.

  • Small Pattern 2D - Full: Calibration dataset with 66 poses of the small calibration grid.

  • Small Pattern 2D - Sample: Calibration dataset with 10 poses of the small calibration grid.

  • Small Pattern 2D - Sample Reduced: Calibration dataset with 5 poses of the small calibration grid.

  • Object: Objects dataset with the same acquisition conditions as the calibration datasets.

  • PlenCalCVPR2013Datasets: Lytro images used in the article for Lytro 1st generation calibration.

 

In order to obtain the light field associated with each image, read the Lytro raw image files (.lfp) using Dansereau's calibration toolbox (https://github.com/doda42/LFToolbox) and the white images provided here. The calibration of these datasets can be performed using the calibration toolbox provided with the article (http://www.isr.tecnico.ulisboa.pt/~nmonteiro/articles/plenoptic/tcsvt2019).

