Standard Dataset
- Citation Author(s):
- Submitted by: jiao li
- Last updated: Fri, 01/03/2025 - 09:55
- DOI: 10.21227/7zvx-s344
- Data Format:
- License:
- Categories:
- Keywords:
Abstract
Existing RGB-T tracking research generally suffers from a shortage of training data. Basic data enhancement methods (e.g., rotation and translation) have limited effectiveness, and although approaches such as Generative Adversarial Networks (GANs) have been proposed, they still suffer from semantic distortion and information loss arising from cross-modal disparities, as well as high computational cost and time overhead. Additionally, as the Transformer gains popularity in the vision field, a growing proportion of work extracts single-modal features with the self-attention mechanism and inter-modal information with the cross-modal cross-attention mechanism. However, the intractable noise within a single modality remains an open problem: during information fusion it is transferred to the other modality, degrading the accuracy and robustness of target tracking. To overcome these limitations, we propose a gradient-based data enhancement method (G-MDE) that iteratively generates RGB and TIR data with perturbation information and retrains the model to enhance its robustness. Furthermore, we design a multi-modal shared information interaction module (SIM) that selects critical information within each modality and minimizes the interference of futile information. Extensive experiments on three popular RGB-T tracking benchmarks demonstrate that our method achieves new state-of-the-art performance.
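The page does not include code, so as a rough illustration of what "iteratively generates RGB and TIR data with perturbation information" could look like, here is a minimal PGD-style PyTorch sketch that nudges a paired RGB/TIR sample along the tracking-loss gradient. The function name, the `epsilon`/`alpha`/`steps` values, and the `model(rgb, tir)` interface are hypothetical stand-ins, not the authors' G-MDE implementation.

```python
import torch

def generate_perturbed_pair(model, rgb, tir, target, loss_fn,
                            epsilon=8 / 255, alpha=2 / 255, steps=3):
    """Hypothetical PGD-style sketch: perturb an RGB/TIR pair along the
    tracking-loss gradient to synthesize harder training samples."""
    rgb_adv = rgb.clone().detach().requires_grad_(True)
    tir_adv = tir.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(rgb_adv, tir_adv), target)
        g_rgb, g_tir = torch.autograd.grad(loss, (rgb_adv, tir_adv))
        with torch.no_grad():
            # ascend the loss, then project back into an epsilon-ball
            # around the clean images and the valid pixel range
            rgb_adv = (rgb + (rgb_adv + alpha * g_rgb.sign() - rgb)
                       .clamp(-epsilon, epsilon)).clamp(0, 1)
            tir_adv = (tir + (tir_adv + alpha * g_tir.sign() - tir)
                       .clamp(-epsilon, epsilon)).clamp(0, 1)
        rgb_adv.requires_grad_(True)
        tir_adv.requires_grad_(True)
    return rgb_adv.detach(), tir_adv.detach()
```

The perturbed pairs would then be mixed into the training set and the tracker retrained, matching the generate-then-retrain loop the abstract describes.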
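Similarly, the shared information interaction module (SIM) is only named, not specified. The sketch below shows one plausible reading: mutual cross-attention between modality tokens with a learned gate that limits how much cross-modal (possibly noisy) evidence is admitted before fusion. It assumes (B, N, dim)-shaped token features; the class name, gating scheme, and dimensions are illustrative, not the published SIM design.

```python
import torch
import torch.nn as nn

class SharedInteractionModule(nn.Module):
    """Hypothetical sketch: each modality queries the other via
    cross-attention; a learned gate suppresses futile information."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_tir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, f_rgb, f_tir):  # token features of shape (B, N, dim)
        # each modality attends to the other to collect shared cues
        rgb2tir, _ = self.cross_rgb(f_rgb, f_tir, f_tir)
        tir2rgb, _ = self.cross_tir(f_tir, f_rgb, f_rgb)
        # per-token gate decides how much cross-modal evidence to admit
        g = self.gate(torch.cat([f_rgb, f_tir], dim=-1))
        fused_rgb = f_rgb + g * rgb2tir
        fused_tir = f_tir + (1 - g) * tir2rgb
        return fused_rgb, fused_tir
```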
Documentation
| Attachment | Size |
| --- | --- |
| Introduction.md | 9.37 KB |