Image Processing

This is a dataset for indoor depth estimation that contains 1803 synchronized image triples (left color image, right color image, and depth map) from 6 different scenes: a library, some bookshelves, a conference room, a cafe, a study area, and a hallway. Among these images, 1740 are marked as high-quality imagery. The left view and the depth map are aligned and synchronized and can be used to evaluate monocular depth estimation models. Standard training/testing splits are provided.
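Because the left view and the depth map are aligned, a monocular model can be evaluated by comparing its predicted depth against the ground-truth map. Below is a minimal sketch of that comparison; the file paths, the loader details, and the my_model call are assumptions for illustration, not part of the dataset.

    # Minimal sketch: evaluate a monocular depth model on an aligned
    # left-image / depth-map pair. Paths and the model call are hypothetical.
    import numpy as np
    from PIL import Image

    def load_pair(left_path, depth_path):
        """Load the RGB left view and its aligned depth map as float arrays."""
        left = np.asarray(Image.open(left_path).convert("RGB"), dtype=np.float32)
        depth = np.asarray(Image.open(depth_path), dtype=np.float32)
        return left, depth

    def rmse(pred, gt):
        """Root-mean-square error over valid (non-zero) depth pixels."""
        mask = gt > 0
        return float(np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2)))

    # Usage (placeholder paths):
    # left, gt_depth = load_pair("scene01/left/0001.png", "scene01/depth/0001.png")
    # pred_depth = my_model(left)   # hypothetical model call
    # print("RMSE:", rmse(pred_depth, gt_depth))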
The dataset contains high-resolution microscopy images and confocal spectra of semiconducting single-wall carbon nanotubes. Carbon nanotubes allow electronic components to be scaled down to the nanoscale. There is initial evidence from Monte Carlo simulations that microscopy images with high digital resolution encode energy information in the Bessel wave pattern visible in these images. In this dataset, images from silicon and InGaAs cameras, as well as spectra, give valuable insights into the spectroscopic properties of these single-photon emitters.
The dataset consists of 60285 character image files, randomly divided into a training set of 54239 images (90%) and a test set of 6046 images (10%). The data samples were collected in two phases. In the first phase, a tabular form was distributed and people were asked to write each character five times; filled-in forms were collected from around 200 individuals in the 12-23 year age group. The second phase was the collection of handwritten sheets, such as answer sheets and classroom notes, from students in the same age group.
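A 90/10 random partition like the one described above can be sketched as follows; the directory layout and file extension are assumptions, and this sketch will not reproduce the published split exactly.

    # Sketch of an approximate 90/10 random train/test split over image files.
    # "characters/" and the *.png extension are placeholders.
    import random
    from pathlib import Path

    def split_dataset(image_dir, train_ratio=0.9, seed=0):
        """Shuffle all image files with a fixed seed and split by ratio."""
        files = sorted(Path(image_dir).glob("*.png"))
        random.Random(seed).shuffle(files)
        n_train = int(len(files) * train_ratio)
        return files[:n_train], files[n_train:]

    # train_files, test_files = split_dataset("characters/")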
As one of the research directions at OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To achieve this goal, we introduced a large-scale (~1M images) object recognition dataset (CURE-OR), which is among the most comprehensive datasets with controlled synthetic challenging conditions. In CURE
As one of the research directions at OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed. To achieve this goal, we introduced a large-scale (~1.72M frames) traffic sign detection video dataset (CURE-TSD), which is among the most comprehensive datasets with controlled synthetic challenging conditions. The video sequences in the
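For both CURE datasets, "controlled synthetic challenging conditions" refers to corruptions applied at graded severity levels. The sketch below illustrates the idea with generic Gaussian blur and additive noise; the actual challenge types and level definitions used in CURE-OR and CURE-TSD may differ, so treat this as a conceptual example only.

    # Conceptual sketch: apply synthetic challenge conditions at controlled
    # severity levels. Blur radius and noise std are illustrative choices.
    import numpy as np
    from PIL import Image, ImageFilter

    def gaussian_blur(img, level):
        """Blur with a radius that grows with the challenge level (1-5)."""
        return img.filter(ImageFilter.GaussianBlur(radius=level))

    def additive_noise(img, level, seed=0):
        """Add zero-mean Gaussian noise whose std grows with the level."""
        rng = np.random.default_rng(seed)
        arr = np.asarray(img, dtype=np.float32)
        noisy = arr + rng.normal(0.0, 5.0 * level, size=arr.shape)
        return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

    # img = Image.open("frame_0001.png").convert("RGB")   # placeholder path
    # for level in range(1, 6):
    #     gaussian_blur(img, level).save(f"blur_{level}.png")
    #     additive_noise(img, level).save(f"noise_{level}.png")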

The modified CASIA dataset was created for research topics such as perceptual image hashing, image tampering detection, and user-device physical unclonable functions, among others.
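As an illustration of the perceptual-image-hash use case, a minimal average-hash (aHash) sketch is shown below; this is a generic technique, not a method provided with the dataset, and the image paths are placeholders.

    # Minimal average-hash (aHash) sketch for comparing images perceptually.
    import numpy as np
    from PIL import Image

    def average_hash(path, hash_size=8):
        """Downscale to hash_size x hash_size grayscale and threshold at the mean."""
        img = Image.open(path).convert("L").resize((hash_size, hash_size))
        pixels = np.asarray(img, dtype=np.float32)
        return (pixels > pixels.mean()).flatten()

    def hamming_distance(h1, h2):
        """Number of differing bits; small distances suggest similar images."""
        return int(np.count_nonzero(h1 != h2))

    # d = hamming_distance(average_hash("original.jpg"), average_hash("tampered.jpg"))
    # A large distance can flag tampering; thresholds are application-specific.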
As one of the research directions at OLIVES Lab @ Georgia Tech, we focus on the robustness of data-driven algorithms under the diverse challenging conditions in which trained models may be deployed.
This dataset includes all letters of the Turkish alphabet in two parts. In the first part, the dataset is categorized by letter; in the second part, it is categorized by font. Both parts of the dataset include the features listed below.
- 72, 20 and 8 point letters
- Upper and lower cases
All 29 letters of the Turkish alphabet are included (a, b, c, ç, d, e, f, g, ğ, h, ı, i, j, k, l, m, n, o, ö, p, r, s, ş, t, u, ü, v, y, z).
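A small sketch of turning the 29 letters above into class labels is shown below; folder names, file layout, and the label ordering are assumptions for illustration.

    # Sketch: map the 29 Turkish letters to integer class labels.
    TURKISH_LETTERS = [
        "a", "b", "c", "ç", "d", "e", "f", "g", "ğ", "h", "ı", "i", "j", "k",
        "l", "m", "n", "o", "ö", "p", "r", "s", "ş", "t", "u", "ü", "v", "y", "z",
    ]

    # Integer label per letter, e.g. for training a character classifier.
    LETTER_TO_LABEL = {letter: idx for idx, letter in enumerate(TURKISH_LETTERS)}

    # Note: in Turkish the lowercase of "I" is the dotless "ı" and the
    # uppercase of "i" is "İ", so case folding must be locale-aware when
    # matching folder or file names.
    # print(LETTER_TO_LABEL["ş"])  # -> 22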