Commit b0b281f (verified) · Parent: e97ce00 · committed by nielsr (HF Staff)

Improve dataset card: Add task category, paper, code, project links, and usage


This PR significantly improves the dataset card for SGP-GenBench. It adds the `text-to-image` task category and links to the paper, the GitHub repository, and the project page.

It also includes a detailed introduction, information about the dataset structure, a sample usage section demonstrating how to download the data and evaluate a model, and the academic citation.

Files changed (1): README.md (+91, −3)
---
license: cc-by-nc-4.0
task_categories:
- text-to-image
tags:
- graphics-programming
- svg
- benchmark
- llm-evaluation
---

# SGP-GenBench: Symbolic Graphics Programming Benchmark

This repository contains the dataset for **SGP-GenBench**, a comprehensive benchmark introduced in the paper [Symbolic Graphics Programming with Large Language Models](https://huggingface.co/papers/2509.05208).

* **Paper**: [https://huggingface.co/papers/2509.05208](https://huggingface.co/papers/2509.05208)
* **Code**: [https://github.com/Sphere-AI-Lab/SGP-RL](https://github.com/Sphere-AI-Lab/SGP-RL)
* **Project Page**: [https://spherelab.ai/SGP-Gen/](https://spherelab.ai/SGP-Gen/)

## Introduction

Large language models (LLMs) excel at program synthesis, yet their ability to produce symbolic graphics programs (SGPs) that render into precise visual content remains underexplored. This work studies symbolic graphics programming, where the goal is to generate an SGP from a natural-language description. The task also serves as a lens into how LLMs understand the visual world, since the SGPs they generate can be rendered into images. Among the various kinds of SGPs, the paper focuses on scalable vector graphics (SVG).

SGP-GenBench evaluates LLMs on symbolic graphics programming across object fidelity, scene fidelity, and compositionality (attribute binding, spatial relations, and numeracy). The benchmark reveals notable shortcomings in current models and serves as a tool for improving LLMs' ability to generate SGPs, particularly through reinforcement learning with verifiable rewards.
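Before any scoring, a generated SGP must at least parse as valid SVG markup. As a minimal, illustrative sketch (not part of the benchmark's own scoring pipeline), a candidate program can be checked for well-formedness with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A toy SVG such as a model might emit for "a red circle on a white
# background". Illustrative only; not an actual benchmark sample.
svg_source = """<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">
  <rect width="64" height="64" fill="white"/>
  <circle cx="32" cy="32" r="16" fill="red"/>
</svg>"""

def is_well_formed_svg(source: str) -> bool:
    """Check that a candidate SGP parses as XML with an <svg> root."""
    try:
        root = ET.fromstring(source)
    except ET.ParseError:
        return False
    # ElementTree expands the namespace, e.g. '{http://www.w3.org/2000/svg}svg'.
    return root.tag.split("}")[-1] == "svg"

print(is_well_formed_svg(svg_source))       # True
print(is_well_formed_svg("<svg><circle>"))  # False: unclosed tags
```

A check like this only guards against syntactically broken programs; the benchmark's actual metrics compare the rendered image against the description.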
## Dataset Structure

SGP-GenBench comprises several datasets for training and evaluation:

* **Training datasets**:
  * **COCO 2017**: provides images and captions.
  * **`svg-gen-70k.jsonl`**: a specialized SVG training dataset.
* **Evaluation datasets**:
  * **SGP-Object**: evaluates object-level generation.
  * **SGP-CompBench**: evaluates compositional capabilities.
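The `svg-gen-70k.jsonl` file is in JSON Lines format: one JSON record per line. Here is a minimal reading sketch using hypothetical field names (`prompt`, `svg`) for illustration; inspect the downloaded file for the actual schema:

```python
import json

# Hypothetical records in the style of svg-gen-70k.jsonl; the real field
# names may differ -- check the file itself after downloading.
jsonl_text = """\
{"prompt": "a blue square", "svg": "<svg>...</svg>"}
{"prompt": "two green triangles", "svg": "<svg>...</svg>"}
"""

# JSON Lines: parse each non-empty line as an independent JSON object.
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
print(len(records))          # 2
print(records[0]["prompt"])  # a blue square
```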
## Sample Usage

To get started with SGP-GenBench, follow these steps.

### 1. Installation

First, set up the environment and install the necessary packages as described in the GitHub repository:

```bash
conda env create -n sgp_gen -f environment.yml
conda activate sgp_gen
pip install vllm==0.7.2 && pip install oat-llm==0.0.9
git clone git@github.com:Sphere-AI-Lab/SGP-RL.git
cd SGP-RL
pip install -e .
pip install cairosvg openai-clip lxml
```

### 2. Prepare Datasets

All required training and evaluation datasets can be downloaded automatically with the provided script:

```bash
bash prepare_data.sh
```

Alternatively, you can manually download `svg-gen-70k.jsonl` and `SGP-Object.json` from the Hugging Face Hub links in the GitHub README or in this dataset card's files section.

### 3. Evaluation on SGP-GenBench

After preparing the data and, optionally, training a model (see the GitHub repository for RL training instructions), evaluate a model on SGP-GenBench:

```bash
# Point YOUR_MODEL_PATH at your trained model checkpoint
python evaluate_svg_model.py --model_path YOUR_MODEL_PATH
```

The sampled responses are stored in `./evaluation_results`. To print key metrics such as DINO score, CLIP score, and diversity, run:

```bash
bash print_results.sh
```
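CLIP score is conventionally the cosine similarity between the embedding of the rendered image and the embedding of its caption. The following sketch shows the underlying computation with toy low-dimensional vectors; real CLIP embeddings are high-dimensional and produced by the CLIP model itself, and the benchmark's exact scoring code lives in the GitHub repository:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for a CLIP image embedding (of the rendered SVG) and a
# CLIP text embedding (of the caption). Both are unit vectors here.
image_emb = [1.0, 0.0]
text_emb = [0.6, 0.8]
print(round(cosine_similarity(image_emb, text_emb), 3))  # 0.6
```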
## Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{chen2025symbolic,
  title={Symbolic Graphics Programming with Large Language Models},
  author={Yamei Chen and Haoquan Zhang and Yangyi Huang and Zeju Qiu and Kaipeng Zhang and Yandong Wen and Weiyang Liu},
  journal={arXiv preprint arXiv:2509.05208},
  year={2025}
}
```