MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities

Citation Author(s):
Weihao Yu
Zhengyuan Yang
Lingfeng Ren
Linjie Li
Jianfeng Wang
Kevin Lin
Chung-Ching Lin
Zicheng Liu
Lijuan Wang
Xinchao Wang
Submitted by:
Weihao Yu
Last updated:
Sat, 12/21/2024 - 08:04
DOI:
10.21227/pvmd-s489

Abstract 

We propose MM-Vet v2, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) how to systematically structure and evaluate the complicated multimodal tasks; (2) how to design evaluation metrics that work well across question and answer types; and (3) how to give model insights beyond a simple performance ranking. To this end, we present MM-Vet v2, designed based on the insight that the intriguing ability to solve complicated tasks often stems from a generalist model being able to integrate different core vision-language (VL) capabilities. MM-Vet v2 defines 7 core VL capabilities and examines the 39 integrations of interest derived from their combinations. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet v2, finding that Claude 3.5 Sonnet is the best model with a score of 71.8, slightly outperforming GPT-4o, which scored 71.0. Among open-weight models, InternVL2-Llama3-76B leads with a score of 68.4. Code and data are available at https://github.com/yuweihao/MM-Vet, and the online evaluator at https://huggingface.co/spaces/whyu/MM-Vet-v2_Evaluator.

Instructions: 

The images are in the "images" folder and the metadata is saved in "mm-vet-v2.json". An example of the metadata is

```
    "v2_497": {
        "question": "Which drink has fewer total calories, the first or the second?<IMG>v2_497_0.jpg<IMG>v2_497_1.jpg",
        "answer": "second",
        "capability": [
            "rec",
            "ocr",
            "seq"
        ],
        "added_in": "v2"
    }
```

where the <IMG> tag acts as a delimiter, separating the textual content from the corresponding image paths in the question.
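
For reference, below is a minimal Python sketch of how the metadata can be loaded and how a question string can be split on the <IMG> delimiter into interleaved text and image segments. It assumes the layout described above ("mm-vet-v2.json" next to an "images" folder) and the field names shown in the example; the helper name parse_question and the extension check are illustrative choices, not part of the official evaluation code.

```
import json
import os

DATA_ROOT = "."  # assumed location of mm-vet-v2.json and the images/ folder

# Load the full metadata dictionary, keyed by sample id (e.g. "v2_497").
with open(os.path.join(DATA_ROOT, "mm-vet-v2.json"), "r", encoding="utf-8") as f:
    samples = json.load(f)

def parse_question(question):
    """Split a question on <IMG> into (kind, content) segments.

    Segments that look like image filenames (e.g. "v2_497_0.jpg") are resolved
    against the images/ folder; everything else is kept as plain text.
    """
    parts = []
    for segment in question.split("<IMG>"):
        if not segment:
            continue
        if segment.lower().endswith((".jpg", ".jpeg", ".png")):
            parts.append(("image", os.path.join(DATA_ROOT, "images", segment)))
        else:
            parts.append(("text", segment))
    return parts

# Example: inspect the sample shown above.
sample = samples["v2_497"]
print(parse_question(sample["question"]))
print("Ground-truth answer:", sample["answer"])
print("Capabilities tested:", sample["capability"])
```

The parsed segments can then be passed to an LMM in order, so that questions interleaving text with one or more images (such as the two-image example above) are presented to the model in the intended sequence.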