---
language:
- en
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
- image-text-to-text
pretty_name: VisualOverload
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question_id
    dtype: string
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: options
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  - name: default_prompt
    dtype: string
  splits:
  - name: test
    num_bytes: 9393666010.68
    num_examples: 2720
  download_size: 630547630
  dataset_size: 9393666010.68
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
arxiv: 2509.25339
tags:
- art
---

# VisualOverload
<p align="center">
<img src="https://github.com/paulgavrikov/visualoverload/blob/main/assets/logo.jpg?raw=true" width="400">
</p>

<p align="center">
[<a href="http://arxiv.org/abs/2509.25339">📚 Paper</a>] 
[<a href="https://github.com/paulgavrikov/visualoverload">💻 Code</a>]
[<a href="https://paulgavrikov.github.io/visualoverload/">🌐 Project Page</a>]
[<a href="https://huggingface.co/spaces/paulgavrikov/visualoverload-submit">🏆 Leaderboard</a>]
[<a href="https://huggingface.co/spaces/paulgavrikov/visualoverload-submit">🎯 Online Evaluator</a>]
</p>

Is basic visual understanding really solved in state-of-the-art VLMs? We present VisualOverload, a slightly different visual question answering (VQA) benchmark comprising 2,720 question–answer pairs with privately held ground-truth responses. Unlike prior VQA datasets that typically focus on near-global image understanding, VisualOverload challenges models to perform simple, knowledge-free vision tasks in densely populated (or, *overloaded*) scenes. Our dataset consists of high-resolution scans of public-domain paintings that are populated with multiple figures, actions, and unfolding subplots set against elaborately detailed backdrops. We manually annotated these images with questions across six task categories to probe for a thorough understanding of the scene. We hypothesize that current benchmarks overestimate the performance of VLMs and that encoding and reasoning over details remains challenging for them, especially when they are confronted with densely populated scenes. Indeed, we observe that even the best of the 37 tested models (o3) achieves only 19.6% accuracy on our hardest test split and 69.5% accuracy over all questions. Beyond a thorough evaluation, we complement our benchmark with an error analysis that reveals multiple failure modes, including a lack of counting skills, failures in OCR, and striking logical inconsistencies on complex tasks. Altogether, VisualOverload exposes a critical gap in current vision models and offers a crucial resource for the community to develop better models.


## 📂 Load the dataset

The easiest way to load the dataset is to use HuggingFace's `datasets` library.

```python
from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")
```

Each sample contains the following fields:

- `question_id`: Unique identifier of each question. 
- `image`: The scene image as a PIL object (JPEG). Most images match the total pixel count of 4K (3840×2160 px) in varying aspect ratios. 
- `question`: A question about the image.
- `question_type`: Type of question; one of `choice` (response expected to be "A", "B", "C", or "D"), `counting` (freeform), or `ocr` (freeform). You can use this information to request a suitable output format. 
- `options`: The list of options when `question_type=choice`, and empty otherwise. Please treat the options as answer choices `A, B, C, D` (4 options) or `A, B` (2 options).
- `difficulty`: Meta-data about the difficulty of the question. One of `easy`, `medium`, or `hard`.
- `category`:  Meta-data about the question task. One of `activity`, `attributes`, `counting`, `ocr`, `reasoning`, or `scene`.
- `default_prompt`: You can use this prompt to stay compliant with our results. It is a simple combination of the question and answer options, with some additional output-format constraints. This should work well for most models.
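
For a quick look at the schema, the snippet below (a minimal sketch building on the loading code above) inspects a single test sample and the fields described in this list:

```python
# Inspect one sample to see the fields described above
sample = vol_dataset["test"][0]

print(sample["question_id"], sample["question_type"], sample["difficulty"], sample["category"])
print(sample["question"])
print(sample["options"])         # empty unless question_type == "choice"
print(sample["default_prompt"])  # ready-to-use prompt string
print(sample["image"].size)      # PIL image, roughly 4K total pixel count
```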

## 🎯 Evaluate your model

Please see [GitHub](https://github.com/paulgavrikov/visualoverload/) for an example evaluation script that generates a correct submission JSON.

All of our ground truth labels are private. The only way to score your submission is to use the [evaluation server](https://huggingface.co/spaces/paulgavrikov/visualoverload-submit). You will need to sign in with a HuggingFace account.  

Your predictions should be a list of dictionaries, each containing a `question_id` field and a `response` field. For multiple-choice questions, the `response` field should contain the option letter (A-D). For open-ended questions, it should contain the predicted answer as free-form text. We will apply simple heuristics to clean the responses, but please ensure they are as accurate as possible.

Example: 
```json
[
    {"question_id": "28deb79e", "response": "A"}, 
    {"question_id": "73cbabd7", "response": "C"}, 
    ...
]
```
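
A minimal sketch of how such a file could be assembled is shown below; the field names follow the dataset, and `query_model` is a hypothetical placeholder for your own inference call:

```python
import json

from datasets import load_dataset

vol_dataset = load_dataset("paulgavrikov/visualoverload")

predictions = []
for sample in vol_dataset["test"]:
    # query_model is a stand-in for your model's inference API
    response = query_model(image=sample["image"], prompt=sample["default_prompt"])
    predictions.append({"question_id": sample["question_id"], "response": response})

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```
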
## 🏆 Submit to the leaderboard
We welcome all submissions of models *or* methods (including prompting-based approaches) on our dataset. Please create a [GitHub issue](https://github.com/paulgavrikov/visualoverload/issues) following the template and include your predictions as JSON. 


## 📝 License

Our dataset is licensed under CC BY-SA 4.0. All images are based on artwork that is royalty-free public domain (CC0).

## 📚 Citation

```bibtex
@misc{gavrikov2025visualoverload,
      title={VisualOverload: Probing Visual Understanding of VLMs in Really Dense Scenes}, 
      author={Paul Gavrikov and Wei Lin and M. Jehanzeb Mirza and Soumya Jahagirdar and Muhammad Huzaifa and Sivan Doveh and Serena Yeung-Levy and James Glass and Hilde Kuehne},
      year={2025},
      eprint={2509.25339},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.25339}, 
}
```