Image Processing

This is a dataset of 120 error-concealed video clips. The clips were generated from 6 CIF, 6 HD and 6 Full-HD test video sequences. Each of those sequences was error concealed with 4 Error Concealment (EC) techniques: Motion Copy, Motion Vector Extrapolation, Decoder Motion Vector Estimation (DMVE) + Boundary Matching Algorithm (BMA), and Adaptive Error Concealment Order Determination (AECOD). The dataset also includes the original (loss-free) video clips, as well as the subjective ranking of the error-concealed videos.


The original dataset, SECOM, is obtained from the UC Irvine Machine Learning Repository. Each sample is then transformed into an image, with each pixel representing a feature. Therefore, image-processing mechanisms such as convolutional neural networks can be utilized for classification.
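The feature-to-pixel transformation described above can be sketched as follows. This is an illustrative helper, not the authors' exact preprocessing: the imputation (zeros for missing values), the zero-padding to a square, and the min-max scaling are all assumptions.

```python
import numpy as np

def sample_to_image(features, side=None):
    """Reshape a 1-D feature vector into a square grayscale image.

    Hypothetical sketch of the idea above: each feature of a sample
    becomes one pixel. Missing values are imputed with 0 and the
    vector is zero-padded so it fits an exact square (assumptions).
    """
    x = np.nan_to_num(np.asarray(features, dtype=np.float32))
    if side is None:
        side = int(np.ceil(np.sqrt(x.size)))
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: x.size] = x
    # Scale to [0, 255] so the result can be stored as an 8-bit image.
    lo, hi = padded.min(), padded.max()
    if hi > lo:
        padded = (padded - lo) / (hi - lo) * 255.0
    return padded.reshape(side, side).astype(np.uint8)

# SECOM samples have 590 features; 590 pads up to a 25x25 image.
img = sample_to_image(np.random.rand(590))
print(img.shape)  # (25, 25)
```

Once each sample is an image, a standard 2-D convolutional classifier can be trained on the resulting stack.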


Subpixel classification (SPC) extracts meaningful information on land-cover classes from mixed pixels. However, the major challenges for SPC are to obtain reliable soft reference data (RD), use apt input data, and achieve maximum accuracy. This article addresses these issues and applies the support vector machine (SVM) to retrieve the subpixel estimates of glacier facies (GF) using high radiometric-resolution Advanced Wide Field Sensor (AWiFS) data. Precise quantification of GF is of fundamental importance in glaciological research.
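One common way to obtain subpixel estimates from an SVM is to use its per-class membership probabilities as soft class fractions. The sketch below illustrates this on synthetic data; the band count, class labels, and kernel choice are assumptions for illustration, not the article's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for multispectral pixels: 4 bands per pixel and three
# hypothetical facies labels for the pure (training) pixels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 4)) + np.repeat(np.arange(3), 40)[:, None]
y_train = np.repeat(np.arange(3), 40)

# probability=True makes the SVM emit per-class membership estimates
# (via Platt scaling), which can be read as subpixel class fractions.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)

mixed_pixels = rng.normal(size=(5, 4)) + 1.0
fractions = svm.predict_proba(mixed_pixels)
print(fractions.shape)  # (5, 3): one fraction per class, per mixed pixel
```

Each row of `fractions` sums to 1, so it can be interpreted as the proportional cover of each class within that pixel.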


Pressing workload demands, along with social-media interaction, lead to diminished alertness during work hours. Researchers have attempted to measure alertness levels from various cues such as EEG, EOG, and video-based eye-movement analysis. Among these, video-based eyelid and iris motion tracking has gained much attention in recent years. However, most of these implementations are tested on video data of subjects without spectacles; such videos do not pose a challenge for eye detection and tracking.


CUPSNBOTTLES is an object data set, recorded by a mobile service robot. There are 10 object classes, each with a varying number of samples. Additionally, there is a clutter class, containing samples where the object detector failed.


BraTS has always focused on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2019 utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS'19 also focuses on the prediction of patient overall survival, via integrative analyses of radiomic features and machine learning algorithms.


Along with the increasing use of unmanned aerial vehicles (UAVs), large volumes of aerial videos have been produced. It is unrealistic for humans to screen such big data and understand their contents. Hence, methodological research on the automatic understanding of UAV videos is of paramount importance.


This dataset contains paired thermal-visual images collected over 1.5 years from different locations in Chitrakoot and Prayagraj, India. The images can be broadly classified into greenery, urban, historical-building, and crowd data.

The crowd data was collected from the Maha Kumbh Mela 2019, Prayagraj, which is the largest religious fair in the world and is held every 6 years.



A double-identity fingerprint is a fake fingerprint created by aligning two fingerprints for maximum ridge similarity and then joining them along an estimated cutline such that relevant features of both fingerprints are present on either side of the cutline. A fake fingerprint containing the features of a criminal and an innocuous accomplice can be enrolled with an electronic machine-readable travel document and later used to cross automated border control.
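The joining step described above can be illustrated with a minimal sketch. This is not the actual double-identity construction: real attacks first align the prints for maximum ridge similarity and estimate the cutline from local ridge agreement, whereas here both inputs are assumed pre-aligned and the cutline is given.

```python
import numpy as np

def join_along_cutline(fp_a, fp_b, cutline_cols):
    """Compose two aligned fingerprint images along a per-row cutline.

    Illustrative sketch only: cutline_cols[r] gives, for each row r,
    the column where image A ends and image B begins (an assumed,
    precomputed cutline rather than an estimated one).
    """
    assert fp_a.shape == fp_b.shape
    rows, cols = fp_a.shape
    out = np.empty_like(fp_a)
    for r in range(rows):
        c = cutline_cols[r]
        out[r, :c] = fp_a[r, :c]   # left of the cutline: first identity
        out[r, c:] = fp_b[r, c:]   # right of the cutline: second identity
    return out

# Tiny synthetic example: a black image joined with a white image.
a = np.zeros((4, 6), dtype=np.uint8)
b = np.full((4, 6), 255, dtype=np.uint8)
merged = join_along_cutline(a, b, cutline_cols=[3, 3, 4, 4])
print(merged[0])  # [  0   0   0 255 255 255]
```

The resulting composite carries regions from both source images, which is why such a print can match against two different enrolled identities.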


This dataset contains the images used in the paper "Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time". 
M. E. Morocho Cayamcela and W. Lim, "Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time," 2019 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 2019, pp. 100-104.