Irrelevance Robust Visual Question Answering (IR-VQA)

- Citation Author(s):
- Jinhui Yang, Ming Jiang, Qi Zhao
- Submitted by:
- Jinhui Yang
- Last updated:
- DOI:
- 10.21227/ecrz-1412
- Categories:
- Keywords:
Abstract
Large Vision-Language Models (LVLMs) struggle with distractions, particularly in the presence of irrelevant visual or textual inputs. This paper introduces the Irrelevance Robust Visual Question Answering (IR-VQA) benchmark to systematically evaluate and mitigate this "multimodal distractibility". IR-VQA targets three key paradigms: irrelevant visual contexts in image-independent questions, irrelevant textual contexts in image-dependent questions, and text-only distractions. Our experiments reveal that even state-of-the-art models like GPT-4o exhibit significant drops in accuracy and reasoning due to distraction-induced inconsistencies. To address this challenge, we present a novel methodology with the following components. First, we introduce new evaluation metrics, Positive Consistency (PC) and Negative Consistency (NC), to better assess model robustness under distractions. Next, we show that finetuning on our dataset yields significant performance improvements on both traditional benchmarks and IR-VQA, highlighting the value of our dataset in enhancing model reliability and revealing deeper insights into multimodal interactions. This work paves the way for the development of more robust LVLMs for real-world applications.
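
To make the idea of consistency-style metrics concrete, the minimal Python sketch below pairs each question's clean result with its distractor-augmented result and reports how often the outcome is preserved. The exact PC and NC definitions used by the benchmark may differ; the record structure, field names, and the formulas shown here are illustrative assumptions, not the official formulation.

```python
# Hypothetical sketch of consistency-style metrics for distractor robustness.
# Field names and the exact PC/NC definitions are illustrative assumptions,
# not the benchmark's official formulation.

def consistency_metrics(records):
    """records: list of dicts with keys
    'correct_clean'      -- bool, answer correct on the original (clean) input
    'correct_distracted' -- bool, answer correct with irrelevant context added
    """
    clean_right = [r for r in records if r["correct_clean"]]
    clean_wrong = [r for r in records if not r["correct_clean"]]

    # Positive Consistency (assumed): of the questions answered correctly
    # without distraction, the fraction still answered correctly with it.
    pc = (sum(r["correct_distracted"] for r in clean_right) / len(clean_right)
          if clean_right else 0.0)

    # Negative Consistency (assumed): of the questions answered incorrectly
    # without distraction, the fraction whose outcome the distractor does
    # not flip, i.e., the answer stays incorrect in the same way.
    nc = (sum(not r["correct_distracted"] for r in clean_wrong) / len(clean_wrong)
          if clean_wrong else 0.0)

    return {"PC": pc, "NC": nc}


if __name__ == "__main__":
    demo = [
        {"correct_clean": True,  "correct_distracted": True},
        {"correct_clean": True,  "correct_distracted": False},
        {"correct_clean": False, "correct_distracted": False},
    ]
    print(consistency_metrics(demo))  # {'PC': 0.5, 'NC': 1.0}
```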