Results (including reported and extra results) for LSstab. Please refer to our paper "Efficient real-time video stabilization with a novel least squares formulation and parallel AC-RANSAC".
Stabilization results include:
(1) stabilized videos reported in the paper
(2) extra stabilized videos
(3) challenging videos that LSstab fails to stabilize
This is a diabetic foot dataset. We are preparing to publish it.
Deep learning has undoubtedly had a huge impact on the computer vision community in recent years. In light field imaging, machine learning-based applications have significantly outperformed their conventional counterparts. Furthermore, multi- and hyperspectral light fields have shown promising results in light field-related applications such as disparity or shape estimation. Yet, a multispectral light field dataset, enabling data-driven approaches, is missing. Therefore, we propose a new synthetic multispectral light field dataset with depth and disparity ground truth.
When using this dataset, please cite our corresponding paper:
Maximilian Schambach and Michael Heizmann:
"A Multispectral Light Field Dataset and Framework for Light Field Deep Learning"
IEEE Access, vol. 8, pp. 193492-193502, 2020
The dataset consists of 500 randomly generated scenes as well as 7 hand-crafted scenes for detailed performance evaluation.
The scenes are rendered as multispectral light fields of shape (11, 11, 512, 512, 13) with depth and disparity ground truth for every subaperture view.
The light fields are provided with 16-bit unsigned integer precision, the depth and disparity maps with 32-bit float precision.
The scenes are rendered in two different camera configurations: one corresponding to a light field camera in the unfocused design (plenoptic 1.0) with a focused main lens (annotated with "F"), and one where the main lens is focused at infinity (annotated with "INF"), which is equivalent to a camera array with parallel optical axes. In the "F" configuration, disparities range from ca. -2.5 px to 3 px, where a disparity of 0 px corresponds to the focal plane. In the "INF" configuration, the focus is set to infinity, hence all disparities are positive.
We provide the raw rendered data, the abstract source files (so rendering of additional ground truth is possible), as well as multiple pre-patched and converted versions.
We provide the following dataset files for downloading:
Contains the complete RAW rendered data (both the F and INF camera configuration), including the multispectral light fields in the ENVI format, traced depth maps and converted disparity maps for every subaperture view in the PFM format.
The light fields are of shape (11, 11, 512, 512, 13); the depth and disparity are of shape (11, 11, 512, 512, 1) and saved as 2D images in the subaperture image view.
To load the light fields and disparities, you may use our Python library plenpy.
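If you prefer not to depend on plenpy, the single-channel PFM depth and disparity maps can also be read with plain Python and NumPy. Below is a minimal sketch based on the standard PFM format; the file name is a hypothetical placeholder:

    import numpy as np

    def read_pfm(path):
        # Minimal reader for PFM files ("Pf" grayscale, "PF" color).
        # PFM stores rows bottom-to-top; a negative scale marks little-endian data.
        with open(path, "rb") as f:
            header = f.readline().decode("ascii").strip()
            if header not in ("PF", "Pf"):
                raise ValueError("Not a PFM file: %r" % header)
            channels = 3 if header == "PF" else 1
            width, height = map(int, f.readline().decode("ascii").split())
            scale = float(f.readline().decode("ascii").strip())
            endian = "<" if scale < 0 else ">"
            data = np.fromfile(f, dtype=endian + "f4", count=width * height * channels)
        shape = (height, width, channels) if channels == 3 else (height, width)
        return np.flipud(data.reshape(shape)).copy()

    # Hypothetical file name for the disparity map of one subaperture view.
    disp = read_pfm("scene_0001/disp_05_05.pfm")
    print(disp.shape, disp.dtype)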
To patch the RAW data into a .h5 dataset, see the provided Python scripts contained in SCRIPTS.zip.
We provide pre-patched versions of the dataset (see below). If the pre-patched versions do not fit your needs (e.g. you need a different spatial resolution), use our provided patch script.
The patched .h5 data can then directly be used with our deep learning framework LFCNN.
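For orientation, the following sketch shows the kind of patching the script performs: cutting one full light field into non-overlapping 36 x 36 spatial patches together with the matching central-view disparity patches, written to an .h5 file via h5py. The dataset names and the synthetic input arrays are illustrative only; the official layout is defined by the script in SCRIPTS.zip:

    import numpy as np
    import h5py

    def patch_and_save(lf, disp_center, out_path, patch=36):
        # lf: full light field (u, v, s, t, ch); disp_center: central-view disparity (s, t).
        u, v, s, t, ch = lf.shape
        steps_s, steps_t = s // patch, t // patch
        n = steps_s * steps_t
        with h5py.File(out_path, "w") as f:
            # Dataset names below are illustrative, not the official layout.
            d_lf = f.create_dataset("light_fields", (n, u, v, patch, patch, ch), dtype=lf.dtype)
            d_dp = f.create_dataset("disparity", (n, patch, patch), dtype=disp_center.dtype)
            k = 0
            for i in range(steps_s):
                for j in range(steps_t):
                    si, tj = i * patch, j * patch
                    d_lf[k] = lf[:, :, si:si + patch, tj:tj + patch, :]
                    d_dp[k] = disp_center[si:si + patch, tj:tj + patch]
                    k += 1

    # Synthetic stand-ins for one rendered scene; real data comes from the RAW files.
    lf = np.zeros((11, 11, 512, 512, 13), dtype=np.uint16)
    disp = np.zeros((512, 512), dtype=np.float32)
    patch_and_save(lf, disp, "scene_patched.h5")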
Contains the hand-crafted dataset challenges. Includes the RAW rendered data, as well as conversions to NumPy's .npy format for the (11, 11) and (9, 9) angular resolutions.
Further contains composed .h5 files to be used directly with our deep learning framework LFCNN.
Identical to CHALLENGES_MULTISPECTRAL but converted to RGB.
Multispectral dataset in the "F" configuration, patched to (11, 11, 36, 36, 13) light field patches with the corresponding disparity map of the central view.
Multispectral dataset in the "F" configuration, patched to (9, 9, 36, 36, 13) light field patches with the corresponding disparity map of the central view.
Same as previous, but in the "INF" camera configuration.
Same as DATASET_MULTISPECTRAL_PATCHED_F_9x9_36x36.zip but converted to RGB.
Same as DATASET_MULTISPECTRAL_PATCHED_INF_9x9_36x36.zip but converted to RGB.
The SCRIPTS.zip file contains a script to convert a set of light fields to a patched set saved in the .h5 format.
Use these scripts to patch the raw dataset into light field patches of a self-defined shape.
See the script for comments on usage.
ALL-IDB (Acute Lymphoblastic Leukemia) Image Database for Image Processing
The ALL-IDB dataset comprises two subsets. The first contains 260 segmented lymphocytes, of which 130 belong to the leukaemia class and the remaining 130 to the non-leukaemia class; this subset requires only classification. The second contains 108 non-segmented blood images belonging to the leukaemia and non-leukaemia groups, and thus requires both segmentation and classification.
An Optical Character Recognition (OCR) system is used to convert document images, either printed or handwritten, into their electronic counterparts. Dealing with handwritten text is much more challenging than printed text due to individuals' erratic writing styles. The problem becomes more severe when the input image is a doctor's prescription. Before feeding such an image to the OCR engine, classifying printed versus handwritten text is necessary, as a doctor's prescription contains both, and they must be processed separately.
Annotated image dataset of household objects from the RoboFEI@Home team
This data set contains two sets of pictures of household objects, created by the RoboFEI@Home team to develop object detection systems for a domestic robot.
The first data set was created with objects from a local supermarket. Product brands are typical of Brazil. The second data set is composed of objects from the RoboCup@Home 2018 OPL competition.
This data set contains two separate sets of annotated images. Common features of the image sets:
- Images are saved in JPG format
- Annotations are made with labelImg (see the parsing sketch below)
- Both sets contain videos in MP4 format to test trained detection models
166 annotated images with 1028 objects of the following 13 classes:
There are also 28 videos for testing, shot with multiple smartphones.
388 annotated images with 1737 objects of the following 20 classes:
There is also a single long video and 398 unannotated images for testing.
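Since the annotations are made with labelImg, which writes Pascal VOC XML files by default, they can be parsed with Python's standard library. A minimal sketch, assuming the default VOC export (the file name is a hypothetical placeholder):

    import xml.etree.ElementTree as ET

    def load_voc_annotation(xml_path):
        # Parse one labelImg annotation into (class_name, xmin, ymin, xmax, ymax) boxes.
        root = ET.parse(xml_path).getroot()
        boxes = []
        for obj in root.iter("object"):
            name = obj.findtext("name")
            bb = obj.find("bndbox")
            coords = tuple(int(float(bb.findtext(k))) for k in ("xmin", "ymin", "xmax", "ymax"))
            boxes.append((name,) + coords)
        return boxes

    # One XML file accompanies each annotated JPG image.
    for cls, xmin, ymin, xmax, ymax in load_voc_annotation("image_0001.xml"):
        print(cls, xmin, ymin, xmax, ymax)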
The data uploaded here shall support the paper
Decision Tree Analysis of ...
which has been submitted to IEEE Transactions on Medical Imaging (2020, September 25) by the authors
Julian Mattes, Wolfgang Fenz, Stefan Thumfart, Gerhard Haitchi, Pierre Schmit, Franz A. Fellner
During review, the data shall only be visible to the reviewers of this paper. Afterwards, this abstract will be modified and complemented, and a dataset image will be uploaded.
Dataset with images of soccer ball acquired by a humanoid robot competing in the RoboCup Humanoid Kidsize League.
Files are in JPEG format.
GitHub source code is also available at:
In recent decades, Earth Observation has brought a wealth of new perspectives, from geosciences to human activity monitoring. As more data became available, artificial intelligence techniques have led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can be used for problems that cannot be tackled through optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with large cloud cover.
The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, Middle-East, and Australia. Each time series is located in a given folder named with the sequence ID (0001... 0336).
Two JSON files, S1list.json and S2list.json, are provided to describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE: boolean) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated.
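A minimal sketch of reading this metadata with Python's json module is shown below. The field names follow the description above, but the exact split between sequence-level and per-image keys is an assumption:

    import json
    from pathlib import Path

    seq_dir = Path("0001")  # one of the 336 sequence folders
    with open(seq_dir / "S1list.json") as f:
        s1 = json.load(f)

    # Per-image entries are assumed to be dicts; other keys hold
    # sequence-level metadata (image count, folder name, geography).
    for key, entry in s1.items():
        if isinstance(entry, dict):
            print(key, entry.get("FLOODING"), entry.get("FULL-DATA-COVERAGE"))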
The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task and are provided with Level 2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.
The Sentinel-1 images were added to the dataset. They are provided with radiometric calibration and Range Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarimetry channels.
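The two polarimetry channels can be stacked into a single array, for example with rasterio. The file names below are hypothetical placeholders; the actual names follow the file prefix fields in S1list.json:

    import numpy as np
    import rasterio

    def load_s1_pair(vv_path, vh_path):
        # Stack the VV and VH rasters into one (H, W, 2) array.
        with rasterio.open(vv_path) as vv, rasterio.open(vh_path) as vh:
            return np.stack([vv.read(1), vh.read(1)], axis=-1)

    sar = load_s1_pair("0001/S1_img_VV.tif", "0001/S1_img_VH.tif")
    print(sar.shape)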
The original dataset was split into 269 training sequences and 68 test sequences. Here, all sequences are in the same folder.
To use this dataset please cite the following papers:
Flood Detection in Time Series of Optical and SAR Images, C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, 1343-1346
The Multimedia Satellite Task at MediaEval 2019, Bischke, B., Helber, P., Schulze, C., Srinivasan, V., Dengel, A., Borth, D., 2019, In Proc. of the MediaEval 2019 Workshop
This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.
Our complex street scene (CSS) dataset, containing strong light and heavy shadow scenes, mainly comes from the KITTI dataset. The data were captured by driving around the mid-size city of Karlsruhe, in rural areas, and on highways, using a standard station wagon equipped with two high-resolution color and grayscale video cameras. Up to 15 cars and 30 pedestrians are visible per image. We aim to verify the performance of the algorithm in specific and complex street scenes.