Over the last decades, Earth Observation has brought a wealth of new perspectives, from geosciences to human activity monitoring. As more data became available, artificial intelligence techniques led to very successful results in understanding remote sensing data. Moreover, acquisition techniques such as Synthetic Aperture Radar (SAR) can address problems that cannot be tackled with optical images alone. This is the case for weather-related disasters such as floods or hurricanes, which are generally associated with large cloud cover.
The dataset is composed of 336 sequences corresponding to areas in West and South-East Africa, the Middle East, and Australia. Each time series is located in a folder named with the sequence ID (0001... 0336).
Two JSON files, S1list.json and S2list.json, describe the Sentinel-1 and Sentinel-2 images, respectively. The keys are the total number of images in the sequence, the folder name, the geography of the observed area, and the description of each image in the series. The SAR image descriptions also contain the URLs to download the images. Each image is described by its acquisition date, its label (FLOODING: boolean), a boolean (FULL-DATA-COVERAGE) indicating whether the area is fully or partially imaged, and the file prefix. For SAR images, the orbit (ASCENDING or DESCENDING) is also indicated.
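As a minimal sketch, the per-image metadata described above can be filtered with a few lines of Python. The key names ("FLOODING", "FULL-DATA-COVERAGE", "date", "orbit") follow the description above, but their exact casing and the surrounding JSON layout are assumptions.

```python
import json

def flooded_acquisitions(s1list):
    """Return (date, orbit) for fully covered, flooded SAR acquisitions."""
    out = []
    for entry in s1list.values():
        if not isinstance(entry, dict):
            continue  # skip scalar metadata such as the image count
        if entry.get("FLOODING") and entry.get("FULL-DATA-COVERAGE"):
            out.append((entry.get("date"), entry.get("orbit")))
    return out

# Inline example standing in for a real 0001/S1list.json file:
example = json.loads("""{
  "count": 2,
  "1": {"date": "2019-01-05", "FLOODING": true,
        "FULL-DATA-COVERAGE": true, "orbit": "ASCENDING"},
  "2": {"date": "2019-01-17", "FLOODING": false,
        "FULL-DATA-COVERAGE": true, "orbit": "DESCENDING"}
}""")
print(flooded_acquisitions(example))  # -> [('2019-01-05', 'ASCENDING')]
```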
The Sentinel-2 images were obtained from the MediaEval 2019 Multimedia Satellite Task and are provided with Level-2A atmospheric correction. For each acquisition, 12 single-channel raster images are provided, corresponding to the different spectral bands.
The Sentinel-1 images were added to the dataset. They are provided with radiometric calibration and Range-Doppler terrain correction based on the SRTM digital elevation model. For each acquisition, two raster images are available, corresponding to the VV and VH polarization channels.
The original dataset was split into 267 sequences for training and 67 sequences for testing. Here, all sequences are in the same folder.
To use this dataset please cite the following papers:
C. Rambour, N. Audebert, E. Koeniguer, B. Le Saux, and M. Datcu, "Flood Detection in Time Series of Optical and SAR Images", ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020, pp. 1343-1346
B. Bischke, P. Helber, C. Schulze, V. Srinivasan, A. Dengel, and D. Borth, "The Multimedia Satellite Task at MediaEval 2019", in Proc. of the MediaEval 2019 Workshop, 2019
This dataset contains modified Copernicus Sentinel data [2018-2019], processed by ESA.
Three groups of defective sample images and their ground-truth images were artificially generated by algorithms, covering five types of defects: broken end, hole, netting multiple, thin bar, and thick bar.
The color fractal images with correlated RGB color components were generated using the midpoint displacement algorithm, with vectorial increments in the RGB color space drawn from a multivariate Gaussian distribution specified by the variance-covariance matrix. This data set contains two sets of 25 color fractal images with two correlated color components, of varying complexity expressed as the color fractal dimension, as a function of (i) the Hurst coefficient, varied from 0.1 to 0.9 in steps of 0.2, and (ii) the correlation coefficient between the red and green color channels.
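To illustrate the role of the Hurst coefficient, here is a minimal one-dimensional midpoint-displacement sketch. The dataset itself uses the two-dimensional, vector-valued RGB version with a full variance-covariance matrix; this simplified scalar version only shows how H controls roughness (lower H gives rougher curves).

```python
import random

def midpoint_displacement(hurst, levels, seed=0):
    """1-D midpoint displacement: repeatedly insert midpoints with
    Gaussian offsets whose scale shrinks by 2**(-hurst) per level."""
    rng = random.Random(seed)
    points = [0.0, 0.0]
    scale = 1.0
    for _ in range(levels):
        scale *= 2 ** (-hurst)  # displacement magnitude decays with H
        refined = []
        for a, b in zip(points, points[1:]):
            refined.append(a)
            refined.append((a + b) / 2 + rng.gauss(0.0, scale))
        refined.append(points[-1])
        points = refined
    return points

curve = midpoint_displacement(hurst=0.5, levels=8)
print(len(curve))  # -> 257 samples (2**8 + 1)
```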
This data set is for research purposes only. Please consider citing the paper entitled "Fractal Dimension of Color Fractal Images with Correlated Color Components", IEEE Transactions on Image Processing, 2020: https://doi.org/10.1109/TIP.2020.3011283
Microscopic image-based analysis plays an important role in histopathological computer-based diagnostics. Identification of childhood medulloblastoma (CMB) and its proper subtype from biopsy tissue specimens of childhood tumors is an integral part of prognosis. The dataset consists of CMB biopsy samples. The images are of 10x and 100x microscopic magnifications, uploaded in separate folders. The images consist of normal brain tissue cell samples and CMB cell samples of different WHO-defined subtypes. An Excel sheet is also uploaded for ease of data description.
The dataset contains two folders of different magnification images, i.e., 10x and 100x. The type of each image is described in the provided Excel file. Each slide has a unique number, and a number in brackets denotes that the corresponding image is from the same slide.
The dataset contains 2,400 vehicle images for license plate detection purposes. Images were taken from actively operating commercial cameras installed on a highway and at the entrance of a shopping mall. Images generally contain one vehicle, but can sometimes contain two or more. For each image in the pixel domain, there exist two different images generated from the encoded High Efficiency Video Coding (HEVC) stream using our methods.
3 SUBSETS
• 2,400 Pixel Domain Images
• 2,400 HEVC Domain Images Generated from Our Block Partition Method
• 2,400 HEVC Domain Images Generated from Our Prediction-Based Method
• Each train set contains 1,800 images.
• Each test set contains 600 images.
Images are given numeric names from 100,001 to 102,400 for each method. The same numbers are used to match HEVC domain representations with their pixel domain images.
For each image, there exists an accompanying file containing the plate annotation in YOLO format.
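As a reminder of what YOLO-format annotations look like, each line stores `class_id x_center y_center width height`, with all coordinates normalized to [0, 1]. The sketch below converts one such line to a pixel bounding box; the example resolution is an assumption for illustration.

```python
def yolo_to_pixel_box(line, img_w, img_h):
    """Convert one YOLO annotation line to (class_id, (l, t, r, b)) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    left = int((xc - w / 2) * img_w)
    top = int((yc - h / 2) * img_h)
    right = int((xc + w / 2) * img_w)
    bottom = int((yc + h / 2) * img_h)
    return int(cls), (left, top, right, bottom)

# A plate centered in a hypothetical 1280x720 frame:
print(yolo_to_pixel_box("0 0.5 0.5 0.25 0.1", 1280, 720))
# -> (0, (480, 324, 800, 396))
```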
Each subset folder is organized into Train and Test subfolders.
We introduce HUMAN4D, a large and multimodal 4D dataset that contains a variety of human activities simultaneously captured by a professional marker-based MoCap system, a volumetric capture system, and an audio recording system. By capturing 2 female and 2 male professional actors performing various full-body movements and expressions, HUMAN4D provides a diverse set of motions and poses encountered in single- and multi-person daily, physical, and social activities (jumping, dancing, etc.), along with multi-RGBD (mRGBD), volumetric, and audio data.
* At this moment, the paper for this dataset is under review. The dataset will be fully published along with the publication of the paper; in the meantime, more parts of the dataset will be uploaded.
The dataset includes multi-view RGBD, 3D/2D pose, volumetric (mesh/point-cloud/3D character) and audio data along with metadata for spatiotemporal alignment.
The full dataset is split per subject, per activity, and per modality.
There are also two benchmarking subsets, H4D1 for single-person and H4D2 for two-person sequences, respectively.
The formats are:
- mRGBD: *.png
- 3D/2D poses: *.npy
- volumetric (mesh/point-cloud): *.ply
- 3D character: *.fbx
- metadata: *.txt, *.json
This dataset is for light field image augmentation. It contains 100 pairs of light field images, each consisting of an "original" and a "modified" image. The "original" is a light field image with only the background; the "modified" is a light field image with exactly the same background and an object on it.
The first bit of light is the gesture of being, a small point of existence on a massive screen of black panorama. The universal appeal of gesture reaches far beyond the barriers of languages and planets: gestures are transactions of symbols and patterns that carry traces of the common ancestors of many civilizations. Gesture recognition is important for communication between computer systems and humans, and in the present era many studies are ongoing on gesture recognition systems.
This is an eye tracking dataset of 84 computer game players who played the side-scrolling cloud game Somi. The game was streamed in the form of video from the cloud to the player. The dataset consists of 135 raw videos (YUV) at 720p and 30 fps with eye tracking data for both eyes (left and right). Male and female players were asked to play the game in front of a remote eye-tracking device. For each player, we recorded gaze points, video frames of the gameplay, and mouse and keyboard commands.
- The AVI offset represents the frame at which data gathering started.
- The 1st frame of each YUV file is the 901st frame of its corresponding AVI file.
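Given the note above, aligning YUV frames with AVI frames is a fixed-offset mapping. The sketch below converts a 0-based YUV frame index to the corresponding 1-based AVI frame number; the function name is illustrative.

```python
# AVI frames preceding the first YUV frame: YUV frame 1 == AVI frame 901.
YUV_TO_AVI_OFFSET = 900

def avi_frame_for_yuv(yuv_index):
    """Map a 0-based YUV frame index to its 1-based AVI frame number."""
    return yuv_index + 1 + YUV_TO_AVI_OFFSET

print(avi_frame_for_yuv(0))   # -> 901 (first YUV frame)
print(avi_frame_for_yuv(29))  # -> 930 (one second later at 30 fps)
```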
- For detailed info and instructions, please see:
Hamed Ahmadi, Saman Zad Tootaghaj, Sajad Mowlaei, Mahmoud Reza Hashemi, and Shervin Shirmohammadi, “GSET Somi: A Game-Specific Eye Tracking Dataset for Somi”, Proc. ACM Multimedia Systems, Klagenfurt am Wörthersee, Austria, May 10-13 2016, 6 pages. DOI: 10.1145/2910017.2910616