Improve model card for UniTok with metadata and usage

#1 by nielsr (HF Staff)

Files changed (1): README.md (+386, -3). The previous front matter contained only `license: apache-2.0`; the updated model card follows.
---
license: mit
pipeline_tag: any-to-any
library_name: transformers
---

<div align="center">
<h1>UniTok: A Unified Tokenizer <br> for Visual Generation and Understanding</h1>

[**Chuofan Ma**](https://machuofan.github.io/)<sup>1,2</sup> · [**Yi Jiang**](https://enjoyyi.github.io/)<sup>2&dagger;</sup> · [**Junfeng Wu**](https://wjf5203.github.io/)<sup>2,3</sup> · [**Jihan Yang**](https://jihanyang.github.io/)<sup>1</sup>
<br>
[**Xin Yu**](https://xinyu-andy.github.io/)<sup>1</sup> · [**Zehuan Yuan**](https://shallowyuan.github.io/)<sup>2*</sup> · [**Bingyue Peng**](https://openreview.net/profile?id=~BINGYUE_PENG1)<sup>2</sup> · [**Xiaojuan Qi**](https://xjqi.github.io/)<sup>1&dagger;*</sup>

<sup>1</sup>HKU&emsp;&emsp;&emsp;<sup>2</sup>ByteDance&emsp;&emsp;&emsp;<sup>3</sup>HUST
<br>
&dagger;project lead&emsp;&emsp;&emsp;*corresponding author

<a href="https://huggingface.co/papers/2502.20321"><img src='https://img.shields.io/badge/Paper-UniTok-red' alt='Paper PDF'></a>
<a href="https://foundationvision.github.io/UniTok/"><img src='https://img.shields.io/badge/Project_Page-UniTok-green' alt='Project Page'></a>
<a href="https://github.com/foundationvision/unitok"><img src='https://img.shields.io/badge/GitHub-Code-blue'></a>
<a href="https://huggingface.co/FoundationVision/unitok_tokenizer"><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a>
<a href="https://huggingface.co/spaces/FoundationVision/UniTok"><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-yellow'></a>
</div>

This repository implements UniTok, a unified visual tokenizer well suited for both generation and understanding tasks.
It is compatible with autoregressive generative models (e.g., LlamaGen),
multimodal understanding models (e.g., LLaVA), and unified MLLMs (e.g., Chameleon and Liquid).

![teaser](https://github.com/FoundationVision/UniTok/raw/main/assets/teaser.png)

Built upon UniTok, we construct an MLLM capable of both multimodal generation and understanding
with the [Liquid](https://github.com/FoundationVision/Liquid/) framework,
which sets a new state of the art among unified autoregressive MLLMs.

![samples](https://github.com/FoundationVision/UniTok/raw/main/assets/samples.png)

## Abstract
Visual generative and understanding models typically rely on distinct tokenizers to process images, presenting a key challenge for unifying them within a single framework. Recent studies attempt to address this by connecting the training of VQVAE (for autoregressive generation) and CLIP (for understanding) to build a unified tokenizer. However, directly combining these training objectives has been observed to cause severe loss conflicts. In this paper, we show that reconstruction and semantic supervision do not inherently conflict. Instead, the underlying bottleneck stems from the limited representational capacity of the discrete token space. Building on these insights, we introduce UniTok, a unified tokenizer featuring a novel multi-codebook quantization mechanism that effectively scales up the vocabulary size and bottleneck dimension. In terms of final performance, UniTok sets a new record of 0.38 rFID and 78.6% zero-shot accuracy on ImageNet. Moreover, UniTok can be seamlessly integrated into MLLMs to unlock native visual generation capability without compromising understanding performance. Additionally, we show that UniTok favors cfg-free generation, reducing gFID from 14.6 to 2.5 on the ImageNet 256$\times$256 benchmark. Code: https://github.com/FoundationVision/UniTok
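
To make the multi-codebook idea concrete, below is a minimal, illustrative sketch of multi-codebook quantization: the latent vector is split into chunks, and each chunk is quantized against its own small codebook, so the effective vocabulary grows multiplicatively without a single huge codebook. All sizes and names here are illustrative assumptions, not UniTok's actual implementation.

```python
# Illustrative multi-codebook quantization (not UniTok's actual code).
import torch

def multi_codebook_quantize(z, codebooks):
    """z: (N, D) latents; codebooks: K tensors, each of shape (V, D // K)."""
    chunks = z.chunk(len(codebooks), dim=-1)         # split latent into K parts
    quantized, codes = [], []
    for chunk, cb in zip(chunks, codebooks):
        idx = torch.cdist(chunk, cb).argmin(dim=-1)  # nearest code per chunk
        quantized.append(cb[idx])
        codes.append(idx)
    return torch.cat(quantized, dim=-1), torch.stack(codes, dim=-1)

# 8 codebooks of 4096 entries give 4096**8 effective code combinations,
# while each nearest-neighbor lookup stays over just 4096 entries.
z = torch.randn(256, 64)                             # 256 tokens, latent dim 64
codebooks = [torch.randn(4096, 8) for _ in range(8)]
z_q, codes = multi_codebook_quantize(z, codebooks)
print(z_q.shape, codes.shape)                        # (256, 64), (256, 8)
```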

## News
**2025-09-18:** UniTok is accepted at NeurIPS 2025 as a spotlight.

**2025-05-19:** We find that UniTok favors generation **without classifier-free guidance (CFG)**:
it reduces gFID (without CFG) from 14.6 to 2.51 on ImageNet 256x256 with LlamaGen-XXL as the generator.
Please refer to the updated [EVAL.md](https://github.com/FoundationVision/UniTok/blob/main/eval/EVAL.md) for more details.
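
For context, classifier-free guidance extrapolates from the unconditional toward the conditional prediction at sampling time; "without CFG" means sampling directly from the conditional logits. A generic sketch of the guidance step (the standard formulation, not UniTok-specific code):

```python
import torch

def apply_cfg(cond_logits, uncond_logits, scale):
    # Standard classifier-free guidance. scale == 1.0 recovers the plain
    # conditional logits, i.e. the cfg-free setting referred to above.
    return uncond_logits + scale * (cond_logits - uncond_logits)

cond, uncond = torch.randn(1, 4096), torch.randn(1, 4096)
assert torch.allclose(apply_cfg(cond, uncond, 1.0), cond)  # cfg-free
guided = apply_cfg(cond, uncond, 4.0)                      # guided sampling
```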

**2025-04-15:** The [gradio demo](https://huggingface.co/spaces/FoundationVision/UniTok) of the UniTok MLLM is now available on Hugging Face!

**2025-04-02:** A new [checkpoint](https://huggingface.co/FoundationVision/unitok_tokenizer/tree/main)
of UniTok is released, which achieves better downstream task performance
by replacing the causal attention projection layer with full attention.
The [model weights](https://huggingface.co/FoundationVision/unitok_mllm)
of our unified MLLM are also available on Hugging Face!

**2025-02-28:** Paper, code, model, and [project page](https://foundationvision.github.io/UniTok/) for UniTok are all released.

## Performance

<table>
<thead>
<tr>
<th>Method</th>
<th>#Tokens</th>
<th>rFID &darr;</th>
<th>Zero-shot Accuracy &uarr;</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="4"><i>VQVAE Model</i></td>
</tr>
<tr align="center">
<td>VQ-GAN</td>
<td>256</td>
<td>4.98</td>
<td>--</td>
</tr>
<tr align="center">
<td>RQ-VAE</td>
<td>256</td>
<td>1.30</td>
<td>--</td>
</tr>
<tr align="center">
<td>VAR</td>
<td>680</td>
<td>0.90</td>
<td>--</td>
</tr>
<tr>
<td colspan="4"><i>CLIP Model</i></td>
</tr>
<tr align="center">
<td>CLIP</td>
<td>256</td>
<td>--</td>
<td>76.2</td>
</tr>
<tr align="center">
<td>SigLIP</td>
<td>256</td>
<td>--</td>
<td>80.5</td>
</tr>
<tr align="center">
<td>ViTamin</td>
<td>256</td>
<td>--</td>
<td>81.2</td>
</tr>
<tr>
<td colspan="4"><i>Unified Model</i></td>
</tr>
<tr align="center">
<td>TokenFlow &dagger;</td>
<td>680</td>
<td>1.37</td>
<td>--</td>
</tr>
<tr align="center">
<td>VILA-U &dagger;</td>
<td>256</td>
<td>1.80</td>
<td>73.3</td>
</tr>
<tr align="center">
<td>UniTok</td>
<td>256</td>
<td>0.41</td>
<td>70.8</td>
</tr>
<tr align="center">
<td>UniTok &dagger;</td>
<td>256</td>
<td>0.38</td>
<td>78.6</td>
</tr>
</tbody>
</table>


&dagger; indicates that the model uses pretrained CLIP weights for initialization. Although CLIP weight initialization boosts ImageNet zero-shot accuracy,
we observe that random initialization leads to better downstream understanding performance.
We therefore release the UniTok checkpoint that is trained from scratch.

## Model Weights

| Model | Res. | #Token | Code Shape | rFID | Checkpoint |
|:------------:|:----:|:------:|:-------------------------:|:----:|:------------:|
| UniTok-Large | 256 | 256 | 16 $\times$ 16 $\times$ 8 | 0.41 | [Download](https://huggingface.co/FoundationVision/unitok_tokenizer/blob/main/unitok_tokenizer.pth) |
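
As a quick sanity check on the table above, the 16 $\times$ 16 $\times$ 8 code shape means each 256$\times$256 image is tokenized into a 16$\times$16 grid, with 8 sub-codes per position from the multi-codebook quantizer:

```python
# Token bookkeeping implied by the "Code Shape" column (arithmetic only).
h, w, k = 16, 16, 8              # spatial grid and sub-codes per position
num_tokens = h * w               # 256, matching the "#Token" column
num_subcodes = num_tokens * k    # 2048 discrete indices stored per image
print(num_tokens, num_subcodes)  # 256 2048
```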

## Usage

### Requirements
- Python ≥ 3.10
- PyTorch ≥ 2.3.1

### Installation

```bash
git clone https://github.com/FoundationVision/UniTok.git
cd UniTok
pip install -r requirements.txt
```

### Inference

Please download the [checkpoint](https://huggingface.co/FoundationVision/unitok_tokenizer) and fill in `ckpt_path`.
```bash
python inference.py \
    --ckpt_path /path/to/unitok_tokenizer.pth \
    --src_img /path/to/test_img --rec_img /path/to/rec_img
```
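
If you want to inspect the downloaded checkpoint programmatically before running the script, here is a minimal sketch using plain PyTorch (the key names depend on the release, so print them rather than assuming a layout):

```python
import torch

# Load the tokenizer weights on CPU just to inspect their structure.
ckpt = torch.load("/path/to/unitok_tokenizer.pth", map_location="cpu")
if isinstance(ckpt, dict):
    for name in list(ckpt)[:10]:  # peek at the first few top-level keys
        print(name)
```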

### Unified MLLM Inference

Minimal example code for Lumina-mGPT inference:

```python
from inference_solver import FlexARInferenceSolver
from PIL import Image

# ******************** Image Generation ********************
inference_solver = FlexARInferenceSolver(
    model_path="Alpha-VLLM/Lumina-mGPT-7B-768",
    precision="bf16",
    target_size=768,
)

q1 = f"Generate an image of 768x768 according to the following prompt:\n" \
     f"Image of a dog playing in water, and a waterfall is in the background."

# generated: tuple of (generated response, list of generated images)
generated = inference_solver.generate(
    images=[],
    qas=[[q1, None]],
    max_gen_len=8192,
    temperature=1.0,
    logits_processor=inference_solver.create_logits_processor(cfg=4.0, image_top_k=2000),
)

a1, new_image = generated[0], generated[1][0]


# ******************* Image Understanding ******************
inference_solver = FlexARInferenceSolver(
    model_path="Alpha-VLLM/Lumina-mGPT-7B-512",
    precision="bf16",
    target_size=512,
)

# The "<|image|>" placeholder will be replaced with a sequence of image tokens before being fed to the LLM
q1 = "Describe the image in detail. <|image|>"

images = [Image.open("image.png")]
qas = [[q1, None]]

# `len(images)` should equal the number of occurrences of "<|image|>" in qas
generated = inference_solver.generate(
    images=images,
    qas=qas,
    max_gen_len=8192,
    temperature=1.0,
    logits_processor=inference_solver.create_logits_processor(cfg=4.0, image_top_k=2000),
)

a1 = generated[0]
# generated[1], namely the list of newly generated images, should typically be empty in this case.


# ********************* Omni-Potent *********************
inference_solver = FlexARInferenceSolver(
    model_path="Alpha-VLLM/Lumina-mGPT-7B-768-Omni",
    precision="bf16",
    target_size=768,
)

# Example: Depth Estimation
# For more instructions, see demos/demo_image2image.py
q1 = "Depth estimation. <|image|>"
images = [Image.open("image.png")]
qas = [[q1, None]]

generated = inference_solver.generate(
    images=images,
    qas=qas,
    max_gen_len=8192,
    temperature=1.0,
    logits_processor=inference_solver.create_logits_processor(cfg=1.0, image_top_k=200),
)

a1 = generated[0]
new_image = generated[1][0]
```

### Training

- We train UniTok on [DataComp-1B](https://github.com/mlfoundations/datacomp).
  Please follow the [instructions](https://github.com/mlfoundations/datacomp?tab=readme-ov-file#downloading-datacomp-1b) to download and prepare the data.

- Download the [models](https://huggingface.co/FoundationVision/unitok_external) used for loss calculation and put them under `./external`.

- Download the [ImageNet validation set](https://www.image-net.org/) for zero-shot accuracy evaluation.

- Download the ImageNet 256$\times$256 [reference batch](https://huggingface.co/datasets/FoundationVision/imagenet_reference_batch) for FID evaluation.

Then configure `nnodes, nproc_per_node, node_rank, master_addr, master_port` in `launch.sh` and run:

```bash
bash launch.sh \
    --output_dir '/path/to/save/checkpoints/' \
    --train_data '/path/to/datacomp/shards/{00000000..00140146}.tar' \
    --imagenet_val '/path/to/imagenet_val/' \
    --fid_eval_src '/path/to/imagenet_reference_batch' \
    --fid_eval_dst '/path/to/save/imagenet_reconstructed_batch'
```
**Note:** For more hyper-parameter configurations, please check `utils/config.py`.
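
The `{00000000..00140146}.tar` argument is a brace-expansion pattern over WebDataset shards. If you want to enumerate or subset the shards yourself, one option is the `braceexpand` package (an assumption on our part; it is a common companion to WebDataset, not a stated dependency of this repo):

```python
# Expand a WebDataset-style shard pattern into explicit file paths.
# Requires: pip install braceexpand
from braceexpand import braceexpand

pattern = "/path/to/datacomp/shards/{00000000..00000009}.tar"  # small subset
shards = list(braceexpand(pattern))
print(len(shards), shards[0], shards[-1])  # 10 shards, first and last paths
```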

### Unified MLLM
We show that UniTok significantly boosts the performance of unified MLLMs.

Visual understanding performance on VQA benchmarks:

| Method | LLM | Res. | VQAv2 | GQA | TextVQA | POPE | MME | MM-Vet |
|:----------:|:--------------:|:-------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| Show-o | Phi-1.5-1.3B | 256 | 59.3 | 48.7 | - | 73.8 | 948 | - |
| Liquid | Gemma-7B | 512 | 71.3 | 58.4 | 42.4 | 81.1 | 1119 | - |
| VILA-U | Llama-2-7B | 256 | 75.3 | 58.3 | 48.3 | 83.9 | 1336 | 27.7 |
| **UniTok** | **Llama-2-7B** | **256** | **76.8** | **61.1** | **51.6** | **83.2** | **1448** | **33.9** |

Visual generation performance on GenAI-Bench:

<table>
<thead>
<tr>
<th rowspan="2">Method</th>
<th rowspan="2">Type</th>
<th rowspan="2">Count</th>
<th rowspan="2">Differ</th>
<th rowspan="2">Compare</th>
<th colspan="2">Logical</th>
<th rowspan="2">Overall</th>
</tr>
<tr>
<th>Negate</th>
<th>Universal</th>
</tr>
</thead>
<tbody>
<tr align="center">
<td>Show-o</td>
<td>Discrete Diff.</td>
<td>0.70</td>
<td>0.62</td>
<td>0.71</td>
<td>0.51</td>
<td>0.65</td>
<td>0.60</td>
</tr>
<tr align="center">
<td>VILA-U</td>
<td>Autoregressive</td>
<td>0.70</td>
<td>0.71</td>
<td>0.74</td>
<td>0.53</td>
<td>0.66</td>
<td>0.64</td>
</tr>
<tr align="center">
<td>Liquid</td>
<td>Autoregressive</td>
<td>0.76</td>
<td>0.73</td>
<td>0.74</td>
<td>0.46</td>
<td>0.74</td>
<td>0.65</td>
</tr>
<tr align="center">
<th>UniTok</th>
<th>Autoregressive</th>
<th>0.76</th>
<th>0.79</th>
<th>0.74</th>
<th>0.46</th>
<th>0.73</th>
<th>0.67</th>
</tr>
</tbody>
</table>

Please refer to [EVAL.md](https://github.com/FoundationVision/UniTok/blob/main/eval/EVAL.md) for more details.

### Evaluation

We also benchmark UniTok on both understanding performance, using the [LLaVA](https://github.com/haotian-liu/LLaVA) framework,
and generation performance, using the [LlamaGen](https://github.com/FoundationVision/LlamaGen) framework.
Please refer to [EVAL.md](https://github.com/FoundationVision/UniTok/blob/main/eval/EVAL.md) for more details.

## Acknowledgement
UniTok is built upon the awesome works
[VAR](https://github.com/FoundationVision/VAR),
[DataComp](https://github.com/mlfoundations/datacomp),
[Liquid](https://github.com/FoundationVision/Liquid/),
[LLaVA](https://github.com/haotian-liu/LLaVA/),
[LlamaGen](https://github.com/FoundationVision/LlamaGen/),
and [ViTamin](https://github.com/Beckschen/ViTamin).

## License

This project is licensed under the MIT License. See the [LICENSE](https://github.com/FoundationVision/UniTok/blob/main/LICENSE) file for details.

## Citation

If you find this project useful, please consider citing:

```bibtex
@article{unitok,
  title={UniTok: A Unified Tokenizer for Visual Generation and Understanding},
  author={Ma, Chuofan and Jiang, Yi and Wu, Junfeng and Yang, Jihan and Yu, Xin and Yuan, Zehuan and Peng, Bingyue and Qi, Xiaojuan},
  journal={arXiv preprint arXiv:2502.20321},
  year={2025}
}
```