We provide two folders: 

(1) The shallow depth-of-field image dataset folder consists of 27 subfolders, numbered 1 to 27.

Each of folders 1-27 contains two test images and two Word files. Img1 is the shallow depth-of-field image in the best focusing state, taken with a 300 mm telephoto lens, and img2 is the overall blurred image.
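Given the folder layout above, the img1/img2 pairs can be gathered with a short sketch like the following. The root path and file extension are assumptions, not part of the dataset description:

```python
# Minimal sketch: iterate over folders 1-27 and collect each img1/img2 pair.
# "shallow_dof_dataset" and the .bmp extension are hypothetical.
from pathlib import Path

def collect_pairs(root="shallow_dof_dataset", ext="bmp"):
    pairs = []
    for i in range(1, 28):
        folder = Path(root) / str(i)
        img1 = folder / f"img1.{ext}"   # best-focus image
        img2 = folder / f"img2.{ext}"   # overall blurred image
        if img1.exists() and img2.exists():
            pairs.append((img1, img2))
    return pairs
```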

Instructions: 

The Readme contains a detailed description of the database and the experimental results.


Network attacks are increasing in both frequency and intensity with the rapid growth of Internet of Things (IoT) devices. Denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks have recently been reported as the most frequent attacks in IoT networks. Traditional security solutions such as firewalls and intrusion detection systems are unable to detect complex DoS and DDoS attacks, since most of them filter normal and attack traffic based on static, predefined rules.


Electric utilities collect imagery and video to inspect transmission and distribution infrastructure. Utilities use this information to identify infrastructure defects and prioritize maintenance decisions. The ability to collect these data is quickly outpacing the ability to analyze them. Today's data interpretation solutions rely on human-in-the-loop workflows, which are time-consuming and costly, and inspection quality can be subjective. It is likely that some of these inspection tasks can be automated by leveraging machine learning and artificial intelligence techniques.

Last Updated On: 
Tue, 09/21/2021 - 17:59
Citation Author(s): 
P. Kulkarni, D. Lewis, J. Renshaw

Rotten corn grain images

Instructions: 

Download dataset


For comparing the performance of IQA methods, a database of confocal endoscopy images obtained under practical imaging conditions is proposed. The database contains 642 grayscale images of 1024 × 1024 pixels with authentic distortions. Image quality was rated by 8 researchers experienced in the operation and image processing of confocal endoscopy, on a scale of 1-5, where 1 denotes the lowest quality and 5 the highest. Finally, the mean opinion score (MOS) of each image was computed by averaging the researchers' scores.
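The MOS computation described above is a per-image average over the eight raters. A minimal sketch, using a hypothetical ratings matrix (the actual scores ship with the database):

```python
import numpy as np

# Hypothetical ratings: 8 raters (rows) x 3 images (columns), scores in 1-5.
ratings = np.array([
    [4, 2, 5],
    [3, 2, 4],
    [4, 3, 5],
    [4, 2, 4],
    [3, 2, 5],
    [4, 1, 4],
    [5, 2, 5],
    [4, 2, 4],
])

# MOS per image: average the 8 raters' scores for each column.
mos = ratings.mean(axis=0)
print(mos)  # → [3.875 2.    4.5  ]
```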


Experimental data of the manuscript "CFAR algorithm based on different probability models for ocean target detection"

Instructions: 

Experimental data of the manuscript "CFAR algorithm based on different probability models for ocean target detection"


The SoftCast scheme has been proposed as a promising alternative to traditional video broadcasting systems in wireless environments. In its current form, SoftCast performs image decoding at the receiver side using a Linear Least Square Error (LLSE) estimator. This approach maximizes the reconstructed quality in terms of Peak Signal-to-Noise Ratio (PSNR). However, we show that the LLSE induces an annoying blur effect at low Channel Signal-to-Noise Ratio (CSNR). To cancel this artifact, we propose to replace the LLSE estimator with the Zero-Forcing (ZF) one.
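The LLSE/ZF trade-off can be illustrated on a toy scalar channel model (y = g·x + n); this is an illustrative sketch, not the exact SoftCast pipeline from the paper, and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SoftCast-like chunk transmission: y = g * x + n.
lam = 4.0          # chunk variance, assumed known at the receiver
g = 0.8            # power-allocation gain
sigma2 = 0.5       # channel noise variance
x = rng.normal(0.0, np.sqrt(lam), 10000)
y = g * x + rng.normal(0.0, np.sqrt(sigma2), x.size)

# LLSE estimator: minimizes MSE but attenuates the signal, which is
# the source of the blur effect at low CSNR.
x_llse = (lam * g / (g**2 * lam + sigma2)) * y

# Zero-Forcing estimator: simply inverts the channel gain,
# preserving signal scale at the cost of amplified noise.
x_zf = y / g

mse_llse = np.mean((x - x_llse)**2)
mse_zf = np.mean((x - x_zf)**2)
```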

Instructions: 

For more information, please refer to the following paper:

Anthony Trioux, Giuseppe Valenzise, Marco Cagnazzo, Michel Kieffer, François-Xavier Coudoux, et al., A Perceptual Study of the Decoding Process of the SoftCast Wireless Video Broadcast Scheme. 2021 IEEE Workshop on Multimedia Signal Processing (MMSP), Oct. 2021, Tampere, Finland.

The SoftCast Database consists of 8 RAW HD reference videos and 156 cropped videos transmitted and received through the SoftCast linear video coding and transmission scheme, using either the LLSE or the ZF estimator. Each video has a duration of 5 seconds. Note that only the luminance is considered in this database. Furthermore, the number of frames depends on the framerate of the video (125 frames at 25 fps, 150 frames at 30 fps).

The GoP size was set to 32 frames, and two compression ratios (CR) were considered: CR=1 (no compression applied) and CR=0.25 (75% of the DCT coefficients are discarded before transmission). The Channel Signal-to-Noise Ratios (CSNR) considered in this test vary from 0 to 27 dB in 3 dB steps. The database was evaluated by 30 participants (9 women and 21 men). In a forced-choice pairwise comparison (PWC) test, they were asked to select which of the two displayed versions of the reconstructed videos they preferred. A training session was organized prior to the test for each observer to familiarize them with the procedure.

Video files are named using the following structure:

Video_filename_y_only_GoP_32_CR_X_Y_ZdB_crop.yuv, where X equals either 1 or 0.25, Y refers to the estimator used (ZF or LLSE), and Z equals 0, 3, 6, 9, 12, 15, 18, 21, 24, or 27 dB.

The original video files are denoted: Video_filename_y_only_crop.yuv.
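The naming scheme above can be parsed mechanically, for example with a regular expression. A sketch under the stated convention; the video name "ParkJoy" used in the example is hypothetical:

```python
import re

# Matches Video_filename_y_only_GoP_32_CR_X_Y_ZdB_crop.yuv, where
# X is 1 or 0.25, Y is the estimator (ZF/LLSE), and Z is the CSNR in dB.
PATTERN = re.compile(
    r"(?P<name>.+)_y_only_GoP_32_CR_(?P<cr>1|0\.25)_"
    r"(?P<estimator>ZF|LLSE)_(?P<csnr>\d+)dB_crop\.yuv"
)

def parse_softcast_name(filename):
    m = PATTERN.fullmatch(filename)
    if m is None:
        return None
    return {
        "video": m.group("name"),
        "cr": float(m.group("cr")),
        "estimator": m.group("estimator"),
        "csnr_db": int(m.group("csnr")),
    }

info = parse_softcast_name("ParkJoy_y_only_GoP_32_CR_0.25_ZF_12dB_crop.yuv")
```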

Each video file is in *.yuv format (4:2:0), with the chrominance planes all set to 128. (This allows VMAF computation, since VMAF requires one of the yuv420p, yuv422p, yuv444p, yuv420p10le, yuv422p10le, or yuv444p10le formats.)
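Under these conventions, the luminance frames can be read directly from a yuv420p file by skipping the (constant) chroma planes. A minimal sketch; the frame dimensions passed in are assumptions:

```python
import numpy as np

def read_y_frames(path, width=1920, height=1080):
    """Read luminance frames from a yuv420p file, skipping the chroma
    planes (which are constant 128 in this database)."""
    y_size = width * height
    c_size = y_size // 4            # each chroma plane in 4:2:0
    frames = []
    with open(path, "rb") as f:
        while True:
            y = f.read(y_size)
            if len(y) < y_size:
                break
            f.seek(2 * c_size, 1)   # skip the U and V planes
            frames.append(np.frombuffer(y, np.uint8).reshape(height, width))
    return frames
```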

The preference scores for each of the stimuli are available in the PWC_scores.xls file.

The objective scores (frame by frame) for each video are available in the objective_scores_ZF_LLSE.zip file.


<p>Our dataset contains five subsets: Seadata, RCSdata, RD_SeaImage, BP_SeaImage, and SSHdata. Seadata contains the simulated sea data. RCSdata contains the sea surface backward scattering coefficient data. RD_SeaImage and BP_SeaImage contain simulated images of the sea surface. SSHdata contains the sea surface height data.</p>

Instructions: 

<p>The datasets were used for our manuscript “Sea Surface Imaging Simulation for 3D Interferometric Imaging Radar Altimeter” (DOI: 10.1109/JSTARS.2020.3033164). This readme file details the specific meaning of the data contained in the five subsets of this dataset. All data are stored in .mat files and can be opened and used with MATLAB; MATLAB 2016 or later is recommended.</p>
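Besides MATLAB itself, .mat files of this vintage can also be read from Python with scipy.io.loadmat. A small sketch; the helper name is our own and no variable names from the dataset are assumed:

```python
from scipy.io import loadmat

def load_subset(path):
    """Load a .mat file and return only its data variables,
    dropping metadata keys such as '__header__' and '__version__'."""
    mat = loadmat(path)
    return {k: v for k, v in mat.items() if not k.startswith("__")}
```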


This document describes the details of the BON egocentric vision dataset. BON denotes the initials of the locations where the dataset was collected: Barcelona (Spain), Oxford (UK), and Nairobi (Kenya). BON comprises first-person video recorded while subjects were performing common office activities. The preceding version of this dataset, the FPV-O dataset, has fewer subjects and covers only a single location (Barcelona). To develop a location-agnostic framework, data from multiple locations and/or office settings is essential.

Instructions: 

Instructions are available in the attached document.


The University of Turin (UniTO) released the open-access dataset Stroke, collected for the homonymous Use Case 3 in the DeepHealth project (https://deephealth-project.eu/). UniToBrain is a dataset of Computed Tomography Perfusion (CTP) images.

Instructions: 

Visit https://github.com/EIDOSlab/UC3-UNITOBrain for full companion code in which a U-Net model is trained on the dataset.

