Computer Vision
This dataset is used for arbitrary-orientation scene text detection, recognition and spotting.
About
Dataset described in:
Daudt, R.C., Le Saux, B., Boulch, A. and Gousseau, Y., 2019. Multitask learning for large-scale semantic change detection. Computer Vision and Image Understanding, 187, p.102783.
This dataset contains 291 coregistered pairs of RGB aerial images from IGN's BD ORTHO database. Pixel-level change and land cover annotations are provided, generated by rasterizing Urban Atlas 2006, Urban Atlas 2012, and Urban Atlas Change 2006-2012 maps.
The dataset is split into five parts:
- 2006 images
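As a minimal sketch of how such a coregistered image pair and its rasterized change annotation might be read, assuming GeoTIFF rasters and hypothetical file names (the dataset's actual directory layout may differ), one could use rasterio:

```python
# Minimal sketch: read a coregistered aerial image pair and its rasterized
# change map. The paths below are hypothetical placeholders, not the
# dataset's actual file layout.
import rasterio

def load_pair(img_2006_path, img_2012_path, change_map_path):
    """Read two RGB aerial images and the corresponding change raster."""
    with rasterio.open(img_2006_path) as src:
        img_2006 = src.read()          # shape: (bands, height, width)
    with rasterio.open(img_2012_path) as src:
        img_2012 = src.read()
    with rasterio.open(change_map_path) as src:
        change = src.read(1)           # single-band change annotation

    # Coregistered pairs should share the same spatial dimensions.
    assert img_2006.shape[1:] == img_2012.shape[1:] == change.shape
    return img_2006, img_2012, change

# Example usage (hypothetical filenames):
# pair = load_pair("images_2006/0001.tif",
#                  "images_2012/0001.tif",
#                  "change/0001.tif")
```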
Master data has played a significant role in improving operational efficiency and has attracted the attention of many large businesses over the past decade. Recent professional surveys have also shown significant growth in the practice and research of managing these master data assets.
The pressing demands of heavy workloads, along with social media interaction, lead to diminished alertness during work hours. Researchers have attempted to measure alertness levels from various cues such as EEG, EOG, and video-based eye movement analysis. Among these, video-based eyelid and iris motion tracking has gained much attention in recent years. However, most of these implementations are tested on video data of subjects without spectacles; such videos do not pose a challenge for eye detection and tracking.
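As an illustration of the general video-based approach referred to above (not the implementation evaluated on this dataset), eye regions can be located frame by frame with OpenCV's bundled Haar cascades; spectacle reflections complicate exactly this detection step:

```python
# Simplified sketch of per-frame eye-region detection using OpenCV's
# bundled Haar cascades. This only illustrates the general technique.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(frame):
    """Return bounding boxes of detected eye regions in a BGR video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # Detect eyes inside each face region; glare from spectacles is a
        # common failure mode at this step.
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            eyes.append((x + ex, y + ey, ew, eh))
    return eyes

# Usage (hypothetical video file):
# cap = cv2.VideoCapture("subject01.mp4")
# ok, frame = cap.read()
# if ok:
#     print(detect_eyes(frame))
```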
Four fully annotated marine image datasets. The annotations are given as train and test splits that can be used to evaluate machine learning methods.
With the increasing use of unmanned aerial vehicles (UAVs), large volumes of aerial video have been produced. It is unrealistic for humans to screen such large volumes of data and understand their contents. Hence, methodological research on the automatic understanding of UAV videos is of paramount importance.
This dataset contains paired thermal-visual images collected over 1.5 years from different locations in Chitrakoot and Prayagraj, India. The images can be broadly classified into greenery, urban scenes, historical buildings, and crowd data.
The crowd data was collected at the Maha Kumbh Mela 2019 in Prayagraj, which is the largest religious fair in the world and is held every 6 years.
As developers create or analyze an application, they often want to visualize the code through some graphical notation that aids their understanding of the code's structure or behavior. To support this, we developed an integrated debugger. The debugger first records a walkthrough of the application as assembly instructions using dynamic analysis. A compression mapping block then transforms this recording into a three-dimensional linked-list structure, which is in turn converted into a tree structure by the improved suffix tree algorithm.
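To make the final step more concrete, the sketch below builds a plain (uncompressed) suffix trie over a short, hypothetical instruction trace; it stands in for the improved suffix tree algorithm mentioned above, whose details the description does not give:

```python
# Simplified illustration: turn a recorded instruction sequence into a
# suffix trie. This is an uncompressed trie standing in for the "improved
# suffix tree algorithm" referenced above, whose details are not described.

def build_suffix_trie(instructions):
    """Build a nested-dict suffix trie over a list of instruction mnemonics."""
    root = {}
    for start in range(len(instructions)):
        node = root
        for instr in instructions[start:]:
            node = node.setdefault(instr, {})
        node["$"] = {}  # mark the end of a suffix
    return root

# Hypothetical recorded walkthrough of a small loop:
trace = ["mov", "cmp", "jle", "add", "cmp", "jle"]
trie = build_suffix_trie(trace)

# Repeated patterns (e.g. "cmp", "jle") share paths in the trie, which is
# what makes a tree view of the trace useful for spotting structure.
print(len(trie))  # number of distinct first instructions across suffixes
```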