VLBiasBench
- Submitted by: Sibo Wang
- Last updated: Fri, 11/29/2024 - 02:49
- DOI: 10.21227/wq9g-xt25
Abstract
The emergence of Large Vision-Language Models (LVLMs) marks significant strides towards achieving general artificial intelligence.
However, these advancements are accompanied by concerns about biased outputs, a challenge that has yet to be thoroughly explored.
Existing benchmarks are not sufficiently comprehensive in evaluating biases due to their limited data scale, single question format, and narrow sources of bias.
To address this problem, we introduce VLBiasBench, a comprehensive benchmark designed to evaluate biases in LVLMs.
VLBiasBench features a dataset covering nine distinct categories of social bias: age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status, along with two intersectional bias categories, race × gender and race × socioeconomic status.
To build a large-scale dataset, we use the Stable Diffusion XL model to generate 46,848 high-quality images, which are combined with various questions to create 128,342 samples.
These questions are divided into open-ended and close-ended types, ensuring thorough consideration of bias sources and a comprehensive evaluation of LVLM biases from multiple perspectives.
We conduct extensive evaluations on 15 open-source models as well as two advanced closed-source models, yielding new insights into the biases present in these models.
The dataset consists of two parts: close-ended evaluation data and open-ended evaluation data. To use it, feed each image together with its question from the annotation files into the vision-language model under test, then score the responses with the assessment methods we provide.
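As a rough illustration of that loop, the Python sketch below loads an annotation file, runs each image/question pair through a model, and gathers the responses for scoring. The paths, field names (`image`, `question`), and the `model.generate` interface are assumptions for illustration, not the dataset's actual schema; adapt them to the files and evaluation scripts shipped with VLBiasBench.

```python
import json
from pathlib import Path

# Hypothetical paths and field names -- consult the dataset's own
# annotation files for the actual layout.
DATA_ROOT = Path("VLBiasBench")
ANNOTATIONS = DATA_ROOT / "close_ended" / "annotations.json"

def load_samples(path: Path) -> list[dict]:
    """Load annotation records, each pairing an image with a question."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

def collect_responses(model, samples: list[dict]) -> list[dict]:
    """Run every image/question pair through the model under test.

    `model` is any wrapper exposing generate(image_path, prompt) -> str;
    substitute the inference call of the LVLM you are evaluating.
    """
    results = []
    for sample in samples:
        image_path = DATA_ROOT / sample["image"]
        response = model.generate(image_path, sample["question"])
        # Keep the original annotation fields alongside the model output
        # so the provided assessment scripts can score the responses.
        results.append({**sample, "response": response})
    return results

# responses = collect_responses(my_lvlm, load_samples(ANNOTATIONS))
# Score `responses` with the assessment methods provided with the dataset.
```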