Multi-modality image fusion aims to combine diverse source images into a single image that preserves important targets and intricate texture details while supporting advanced visual tasks. Existing fusion methods often overlook the complementarity of different modalities, treating source images uniformly and extracting similar features from each. This study introduces a distributed optimization model that leverages a collection of images and their significant features stored in distributed storage. To solve this model, we employ the distributed Alternating Direction Method of Multipliers (ADMM) algorithm.
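To make the optimization scheme concrete, the following is a minimal sketch of consensus ADMM in NumPy, the standard distributed form of the algorithm. It is not the paper's model: the per-image objectives, the weights, and the function name `consensus_admm_fuse` are illustrative assumptions, using simple quadratic data-fidelity terms so each local update has a closed form. Each "node" holds one source image, local estimates are reconciled through a shared consensus variable (the fused image), and scaled dual variables enforce agreement.

```python
import numpy as np

def consensus_admm_fuse(images, weights=None, rho=1.0, n_iters=50):
    """Illustrative consensus ADMM: minimize sum_i (w_i/2)||x_i - y_i||^2
    subject to x_i = z, where z is the fused image.

    This is a generic sketch, not the paper's specific fusion model."""
    ys = [np.asarray(im, dtype=float) for im in images]
    if weights is None:
        weights = [1.0] * len(ys)
    z = np.mean(ys, axis=0)                # global consensus variable (fused image)
    us = [np.zeros_like(z) for _ in ys]    # scaled dual variables, one per node
    for _ in range(n_iters):
        # Local x-updates: closed form for the quadratic data terms;
        # in a distributed setting each node computes this independently.
        xs = [(w * y + rho * (z - u)) / (w + rho)
              for y, u, w in zip(ys, us, weights)]
        # Global averaging step: gather local estimates into the consensus.
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        # Dual ascent: penalize disagreement with the consensus.
        us = [u + x - z for x, u in zip(xs, us)]
    return z
```

With equal weights and quadratic terms this converges to the pixel-wise weighted mean of the inputs; a real fusion model would replace the local objectives with modality-specific feature terms while keeping the same update structure.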