Image Fusion

Multi-modality image fusion aims to combine diverse source images into a single image that preserves important targets and intricate texture details while supporting downstream visual tasks. Existing fusion methods often overlook the complementarity of different modalities, treating source images uniformly and extracting similar features from each. This study introduces a distributed optimization model that leverages a collection of images and their significant features held in distributed storage. To solve this model, we employ the distributed Alternating Direction Method of Multipliers (ADMM) algorithm.
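The abstract does not give the model's exact formulation, but the general shape of a distributed ADMM scheme for fusion can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' method): each "node" holds one source image `y_i` with a simple quadratic data term, and all nodes iterate toward a shared consensus variable `z`, the fused image. The function name, the choice of data term, and the parameters `rho` and `n_iters` are assumptions for illustration.

```python
import numpy as np

def consensus_admm_fuse(images, rho=1.0, n_iters=50):
    """Hypothetical consensus-ADMM sketch for image fusion.

    Each node i solves a local subproblem with data term
    f_i(x_i) = 0.5 * ||x_i - y_i||^2, subject to the consensus
    constraint x_i = z for all i.
    """
    n = len(images)
    z = np.zeros_like(images[0], dtype=float)   # consensus (fused) image
    u = [np.zeros_like(z) for _ in range(n)]    # scaled dual variables
    for _ in range(n_iters):
        # Local x-updates: closed form for the quadratic data term.
        x = [(y + rho * (z - ui)) / (1.0 + rho)
             for y, ui in zip(images, u)]
        # Global consensus update: average of local estimates.
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        # Dual ascent on the consensus constraints.
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

With purely quadratic data terms this converges to the pixel-wise mean of the inputs; a real fusion model would replace `f_i` with modality-specific terms (e.g. feature-preserving or sparsity-inducing penalties), which is exactly where the modality complementarity discussed above would enter.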


Buildings are essential components of urban areas. While the extraction and 3D reconstruction of buildings are widely researched, information on fine-grained building roof types is usually ignored. This limits the potential for further analysis, e.g., in urban planning applications. Fine-grained classification of building roof types from satellite images is a highly challenging task due to ambiguous visual features within the imagery.

Last Updated On: Tue, 03/07/2023 - 11:58