<p align="left">
  <a href="https://github.com/fudan-zvg/spar.git">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-spar-black?&logo=github&logoColor=white" />
  </a>
  <a href="https://arxiv.org/abs/xxx">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-spar-red?logo=arxiv" />
  </a>
  <a href="https://fudan-zvg.github.io/spar">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-spar-blue" />
  </a>
</p>

# 🎯 SPAR-Bench-Tiny

> A lightweight subset of SPAR-Bench for **fast evaluation** of spatial reasoning in vision-language models (VLMs).

**SPAR-Bench-Tiny** contains **1,000 manually verified QA pairs** (50 samples per task across **20 spatial tasks**), covering both single-view and multi-view inputs.

This dataset mirrors the structure and annotation of the full [SPAR-Bench](https://huggingface.co/datasets/jasonzhango/SPAR-Bench), but is **10× smaller**, making it ideal for fast, low-cost evaluation.

## 📥 Load with `datasets`

```python
from datasets import load_dataset

spar_tiny = load_dataset("jasonzhango/SPAR-Bench-Tiny")
```
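
Before wiring anything up, it can help to see what a sample actually looks like. The snippet below is a minimal inspection sketch: the split and field names are not documented on this page, so it looks them up at runtime instead of assuming them.

```python
from datasets import load_dataset

# Load the benchmark and inspect its structure before evaluating anything.
spar_tiny = load_dataset("jasonzhango/SPAR-Bench-Tiny")

print(spar_tiny)                      # available splits and their sizes
split = next(iter(spar_tiny))         # take whichever split comes first
print(spar_tiny[split].column_names)  # field names of each QA sample
print(spar_tiny[split][0])            # one full sample, to see the schema
```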

## 🕹️ Evaluation

SPAR-Bench-Tiny uses the **same evaluation protocol and metrics** as the full [SPAR-Bench](https://huggingface.co/datasets/jasonzhango/SPAR-Bench).

We provide an **evaluation pipeline** in our [GitHub repository](https://github.com/hutchinsonian/spar), built on top of [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).
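
For reported numbers, please use that pipeline. If you only want to sanity-check a model locally, the sketch below shows a toy exact-match loop over the dataset. It is not the official protocol or metric, and the `answer` field and the `ask_model` helper are placeholders (assumptions) to be replaced with the real schema and your own VLM call.

```python
from datasets import load_dataset

def ask_model(sample) -> str:
    """Placeholder: query your VLM with the sample's images and question."""
    return ""  # a real implementation returns the model's answer as text

# Toy exact-match loop -- NOT the official SPAR-Bench protocol or metric.
# The "answer" field name is an assumption; check the actual schema first
# (see the inspection snippet above).
spar_tiny = load_dataset("jasonzhango/SPAR-Bench-Tiny")
split = next(iter(spar_tiny))

correct = 0
for sample in spar_tiny[split]:
    prediction = ask_model(sample)
    if prediction.strip().lower() == str(sample["answer"]).strip().lower():
        correct += 1

print(f"toy accuracy on {split}: {correct / len(spar_tiny[split]):.3f}")
```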

## 📚 Bibtex

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:xx},
}
```