Improve dataset card: Add metadata, links, abstract, sample usage, and citation
This PR significantly enhances the dataset card for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" by providing comprehensive information for better discoverability and usability.
Key changes include:
- **Metadata**: Added `task_categories: ['image-text-to-text']`, `language: ['en']`, and relevant `tags` (`vlm`, `spatial-reasoning`, `attention`).
- **Paper & Code Links**: Included direct links to the paper ([https://huggingface.co/papers/2503.01773](https://huggingface.co/papers/2503.01773)) and the GitHub repository ([https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis)).
- **Abstract**: The full paper abstract has been added to provide immediate context for the dataset.
- **Visual**: Embedded the main image from the GitHub repository.
- **Sample Usage**: Detailed instructions from the GitHub README are provided, including environment setup, data downloading, and running experiments with code snippets and an argument table. A `load_dataset` example using the `datasets` library is also included for easy integration with Hugging Face workflows.
- **Dataset Structure**: Moved the existing `dataset_info` and `configs` from the top-level YAML into a new "Dataset Structure" section in the Markdown body, separating general metadata from the dataset schema and clarifying the structure of the different configurations.
- **Citation**: The BibTeX citation from the paper has been added.
These updates make the dataset card much more informative and accessible for researchers on the Hugging Face Hub.
---
task_categories:
- image-text-to-text
language:
- en
tags:
- vlm
- spatial-reasoning
- attention
---

# Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas

This repository provides the datasets associated with the paper [Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas](https://huggingface.co/papers/2503.01773).

Code: [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis)

## Abstract

Large Vision Language Models (VLMs) have long struggled with spatial reasoning tasks. Surprisingly, even simple spatial reasoning tasks, such as recognizing "under" or "behind" relationships between only two objects, pose significant challenges for current VLMs. In this work, we study the spatial reasoning challenge from the lens of mechanistic interpretability, diving into the model's internal states to examine the interactions between image and text tokens. By tracing the attention distribution over the image throughout intermediate layers, we observe that successful spatial reasoning correlates strongly with the model's ability to align its attention distribution with actual object locations, particularly differing between familiar and unfamiliar spatial relationships. Motivated by these findings, we propose ADAPTVIS, which uses inference-time confidence scores to sharpen the attention on highly relevant regions when the model is confident, while smoothing and broadening the attention window to consider a wider context when confidence is lower. This training-free decoding method shows significant improvement (e.g., up to a 50-point absolute improvement) on spatial reasoning benchmarks such as WhatsUp and VSR with negligible cost. We make code and data publicly available for research purposes at [https://github.com/shiqichen17/AdaptVis](https://github.com/shiqichen17/AdaptVis).

<p align="center">
<img src="https://github.com/shiqichen17/AdaptVis/blob/main/figures/main.png" width="800">
</p>

## Datasets

This repository provides the datasets used in the paper. The code to load and evaluate each dataset is available in `dataset_zoo/aro_datasets.py` in the GitHub repository. The question-answering data is located in `prompt/`.

The datasets evaluate VLMs' performance on spatial reasoning tasks and include the following configurations (a snippet that iterates over all of them follows the list):

* `COCO_one_obj`
* `COCO_two_obj`
* `Controlled_A`
* `Controlled_B`
* `VG_one_obj`
* `VG_two_obj`
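
For instance, the short sketch below iterates over every configuration of this repository and reports the size of each test split. It assumes the repo id `AdaptVis/all_datasets` used in the Sample Usage section below; adjust it if you work with a mirror.

```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "AdaptVis/all_datasets"  # repo id as used in the Sample Usage section

# Discover all configurations (COCO_one_obj, VG_two_obj, ...) and load each test split.
for config in get_dataset_config_names(repo_id):
    split = load_dataset(repo_id, config, split="test")
    print(f"{config}: {len(split)} test examples")
```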

## Sample Usage

### Load with Hugging Face `datasets`

You can load any configuration of this dataset with the `datasets` library:

```python
from datasets import load_dataset

# Load a specific configuration, e.g., 'COCO_one_obj'
dataset = load_dataset("AdaptVis/all_datasets", "COCO_one_obj")

# Access the test split
test_data = dataset["test"]

# Print an example
print(test_data[0])
```

### Running Experiments with the Codebase

To set up the environment and run experiments with the `scaling_vis` and `adapt_vis` methods from the original repository, follow these steps.

**Setting up the environment**

```bash
git clone https://github.com/shiqichen17/AdaptVis.git
cd AdaptVis
mkdir data
mkdir output
pip install -r requirements.txt
```

**Downloading the data**

The data can be downloaded automatically when running experiments by setting `--download=True` (when running `python main_aro.py` or when instantiating the dataset directly). Alternatively, you can download it manually from the Hugging Face Hub (this repository) or from the [Google Drive link](https://drive.google.com/drive/u/3/folders/164q6X9hrvP-QYpi3ioSnfMuyHpG5oRkZ) provided in the GitHub README.
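
If you prefer to fetch the files from the Hub programmatically, a minimal sketch using `huggingface_hub` is shown below. The repo id `AdaptVis/all_datasets` is taken from the loading example above, and the assumption that the codebase expects the files under `data/` should be checked against the GitHub README.

```python
# Sketch: mirror this dataset repository into the local `data/` directory.
# The target layout under data/ is an assumption; verify it against the
# AdaptVis GitHub README before running experiments.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="AdaptVis/all_datasets",
    repo_type="dataset",
    local_dir="data",
)
```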

**Running an example experiment**

You can quickly run an example experiment using the provided `run.sh` script:

```bash
bash run.sh
```

**Arguments**

The `run.sh` script accepts several arguments that control the dataset, model, and method:

| Argument | Example | Description |
|---|---|---|
| `dataset` | `Controlled_Images_A` | The dataset to evaluate, e.g. `Controlled_Images_A`, `Controlled_Images_B`, etc. |
| `model` | `llava1.5` | The model to use. |
| `method` | `scaling_vis` | The evaluation method: `scaling_vis` or `adapt_vis`. |
| `weight` | `1.2` | Coefficient for `scaling_vis`; choose from `[0, 0.5, 0.8, 1.2, 1.5, 2.0]`. |
| `weight1` | `0.5` | Coefficient for AdaptVis; choose from `[0.5, 0.8]`. |
| `weight2` | `1.2` | Coefficient for AdaptVis; choose from `[1.2, 1.5, 2.0]`. |
| `threshold` | `0.3` | Threshold for AdaptVis. |
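
To make the roles of `weight1`, `weight2`, and `threshold` concrete, here is a minimal, self-contained sketch of the confidence-gated attention re-weighting idea described in the abstract. It is an illustration only, assuming a simple power-style re-scaling of image-token attention; the actual formula and hook points live in the AdaptVis codebase and may differ.

```python
import numpy as np

def reweight_image_attention(attn, confidence, threshold=0.3,
                             weight1=0.5, weight2=1.2):
    """Illustrative confidence-gated re-weighting of image-token attention.

    attn:       non-negative attention weights over image tokens (sums to 1).
    confidence: an inference-time confidence score for the current prediction.
    weight1/weight2/threshold mirror the run.sh arguments above, but the exact
    re-scaling used by AdaptVis may differ from this sketch.
    """
    # Confident: exponent > 1 sharpens attention on the dominant regions.
    # Not confident: exponent < 1 flattens and broadens the attention window.
    alpha = weight2 if confidence >= threshold else weight1
    scaled = np.power(attn, alpha)
    return scaled / scaled.sum()

attn = np.array([0.60, 0.20, 0.10, 0.10])
print(reweight_image_attention(attn, confidence=0.9))  # sharper distribution
print(reweight_image_attention(attn, confidence=0.1))  # flatter distribution
```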

## Dataset Structure

The dataset contains multiple configurations, each with `id`, `question`, `answer`, and an image identifier (`image_id` or `image_path`). Below is an abridged view of the `dataset_info` and `configs` describing the splits and features:

```yaml
dataset_info:
- config_name: COCO_one_obj
  features:
  # ... (feature definitions; one block per configuration)
configs:
# ... (one entry per configuration)
- config_name: VG_two_obj
  data_files:
  - split: test
    path: VG_two_obj/test-*
```
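
As a quick check of this schema, the snippet below loads one configuration and prints its features and a question/answer pair. The field names follow the structure described above; note that the image column may be `image_id` or `image_path` depending on the configuration.

```python
from datasets import load_dataset

# Inspect the schema of one configuration; field names follow the structure above.
ds = load_dataset("AdaptVis/all_datasets", "VG_two_obj", split="test")
print(ds.features)  # column names and types for this configuration

example = ds[0]
print(example["id"], example["question"], "->", example["answer"])
```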

## Citation

If you use this code or data, please consider citing our paper:

```bibtex
@misc{chen2025spatialreasoninghardvlms,
      title={Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas},
      author={Shiqi Chen and Tongyao Zhu and Ruochen Zhou and Jinghan Zhang and Siyang Gao and Juan Carlos Niebles and Mor Geva and Junxian He and Jiajun Wu and Manling Li},
      year={2025},
      eprint={2503.01773},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.01773},
}
```