Computer Vision
Understanding causes and effects in mechanical systems is an essential component of reasoning in the physical world. This work poses the new problem of counterfactual learning of object mechanics from visual input. We develop the COPHY benchmark to assess the capacity of state-of-the-art models for causal physical reasoning in a synthetic 3D environment, and we propose a model for learning physical dynamics in a counterfactual setting.
Pedestrian detection has never been an easy task for the computer vision and automotive industries. Systems such as advanced driver assistance systems (ADAS) rely heavily on far infrared (FIR) data to detect pedestrians at night. Recent deep learning-based detectors have achieved excellent pedestrian detection results under ideal weather conditions. However, their performance in adverse weather conditions remains unknown.
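To illustrate the detection pipeline such benchmarks evaluate, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision on a single image and keeps only "person" detections. This is a generic stand-in, not the detectors evaluated on this dataset: the model expects RGB input and is not trained on FIR imagery, and the file name is a placeholder.

```python
# Minimal sketch: running a pretrained object detector on one image.
# COCO-pretrained Faster R-CNN is a stand-in; it expects RGB input and is
# not trained on FIR imagery, so this only illustrates the pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Placeholder file name; any 3-channel image works.
image = read_image("night_scene.jpg").float() / 255.0

with torch.no_grad():
    predictions = model([image])[0]

# Keep detections of the COCO "person" class (label 1) above a score threshold.
keep = (predictions["labels"] == 1) & (predictions["scores"] > 0.5)
print(predictions["boxes"][keep])
```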
ADAM is organized as a half-day challenge, a satellite event of the ISBI 2020 conference in Iowa City, Iowa, USA.
The dataset includes three sub-datasets: the DAGM2007 dataset, a ground crack dataset, and the Yibao bottle cap defect dataset. Each is divided into a training set and a test set, and the positive and negative samples are imbalanced.
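Because the positive and negative samples are imbalanced, a common mitigation when training on such data is class-weighted sampling. Below is a minimal sketch using PyTorch's WeightedRandomSampler; the folder-per-class directory layout and the "DAGM2007/train" path are assumptions for illustration, not part of the released data.

```python
# Minimal sketch: class-weighted sampling for an imbalanced defect dataset.
# The directory layout (one folder per class) is assumed, not specified by the dataset.
from collections import Counter

from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical path; replace with the actual location of a sub-dataset's training split.
train_set = datasets.ImageFolder("DAGM2007/train", transform=transform)

# Weight each sample inversely to its class frequency so the rare
# (defective) class is drawn as often as the common (normal) class.
class_counts = Counter(train_set.targets)
sample_weights = [1.0 / class_counts[label] for label in train_set.targets]

sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
train_loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```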
The Nextmed project is a software platform for the segmentation and visualization of medical images. It consists of a series of automatic segmentation algorithms for different anatomical structures and a platform for visualizing the results as 3D models.
This dataset contains the .obj and .nrrd files that correspond to the results of applying our automatic lung segmentation algorithm to the LIDC-IDRI dataset.
This dataset relates to 718 of the 1012 LIDC-IDRI scans.
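The .nrrd volumes and .obj meshes can be inspected with standard Python libraries. The sketch below assumes the pynrrd and trimesh packages and uses illustrative file names; neither the packages nor the names are prescribed by the dataset.

```python
# Minimal sketch for inspecting one segmentation result.
# File names are illustrative; pynrrd and trimesh are assumed to be installed.
import nrrd      # pip install pynrrd
import trimesh   # pip install trimesh

# Load the voxel-wise segmentation mask (.nrrd).
mask, header = nrrd.read("LIDC-IDRI-0001_lung_mask.nrrd")
print("mask shape:", mask.shape)
print("voxel spacing:", header.get("space directions"))

# Load the corresponding 3D surface model (.obj).
mesh = trimesh.load("LIDC-IDRI-0001_lung.obj")
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
```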
Egocentric vision is important for environment-adaptive control and navigation of humans and robots. Here we developed ExoNet, the largest open-source dataset of wearable camera images of real-world walking environments. The dataset contains over 5.6 million RGB images of indoor and outdoor environments, which were collected during summer, fall, and winter. 923,000 of the images were human-annotated using a 12-class hierarchical labelling architecture.
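One way to consume a hierarchically labelled image set like this is a standard folder-per-class loader plus a mapping from fine labels to their parent categories. The sketch below is a generic pattern: the directory layout, class names, and fine-to-coarse mapping are placeholders and do not reproduce ExoNet's actual 12-class hierarchy.

```python
# Minimal sketch: loading a hierarchically labelled image dataset with torchvision.
# The directory layout and the fine-to-coarse label mapping are placeholders;
# they do not reproduce ExoNet's actual class codes.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

# One folder per fine-grained class (placeholder path and names).
dataset = datasets.ImageFolder("exonet/labelled", transform=transform)

# Example fine-to-coarse mapping for a two-level hierarchy (hypothetical names).
fine_to_coarse = {
    "level_ground": "walking",
    "incline_stairs": "stairs",
    "decline_stairs": "stairs",
}

loader = DataLoader(dataset, batch_size=64, shuffle=True)
for images, fine_targets in loader:
    # Map each fine label index back to its class name, then to its coarse parent.
    fine_names = [dataset.classes[t] for t in fine_targets]
    coarse_names = [fine_to_coarse.get(name, "other") for name in fine_names]
    break  # single batch shown for illustration
```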