---
license: mit
language:
- zh
- en
base_model:
- zai-org/GLM-4.1V-9B-Base
pipeline_tag: image-text-to-text
library_name: transformers
---
<h1>UI2Code^N: A Visual Language Model for Test-Time Scalable Interactive UI-to-Code Generation</h1>
- **Repository:** https://github.com/zai-org/UI2Code_N
- **Paper:** https://arxiv.org/abs/2511.08195
<p align="center">
<img src="https://raw.githubusercontent.com/zheny2751-dotcom/UI2Code-N/main/assets/fig1.png" alt="abs" style="width:90%;" />
</p>
**UI2Code^N** is a visual language foundation model trained in stages through **pretraining**, **fine-tuning**, and **reinforcement learning** to achieve foundational improvements in multimodal coding. It unifies three key capabilities: **UI-to-code generation**, **UI editing**, and **UI polishing**.

Instead of relying on single-turn paradigms that make little use of iterative visual feedback, UI2Code^N introduces an interactive UI-to-code framework that more closely reflects real-world workflows and raises the upper bound of achievable performance.
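To make the interactive paradigm concrete, here is a minimal sketch of how a second, polishing round could be structured as a chat message list (the same format used by the quick-inference example below). The turn structure, placeholder paths, and prompt wording are illustrative assumptions on our part, not the canonical prompts; see the repository for the exact format.

```python
# Hedged sketch of an interactive polishing round. The turn structure and
# prompt wording are assumptions for illustration; see the GitHub repo for
# the canonical prompts.
messages = [
    # Round 1: the target screenshot and the initial generation request.
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://raw.githubusercontent.com/zheny2751-dotcom/UI2Code-N/main/assets/example.png"},
            {"type": "text", "text": "Please generate the corresponding html code for the given UI screenshot."},
        ],
    },
    # The model's first attempt goes back in as assistant context
    # (placeholder below; substitute the actual first-round output).
    {
        "role": "assistant",
        "content": [{"type": "text", "text": "<html>...first-round code...</html>"}],
    },
    # Round 2: a screenshot of the rendered first attempt plus a polishing
    # instruction, supplying the visual feedback the interactive framework uses.
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "/path/to/render_of_first_attempt.png"},
            {"type": "text", "text": "This is a rendering of your code. Please polish it to better match the original screenshot."},
        ],
    },
]
```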
### Backbone Model
Our model is built on [GLM-4.1V-9B-Base](https://huggingface.co/zai-org/GLM-4.1V-9B-Base).
### Quick Inference
This is a simple example of running single-image inference using the `transformers` library.
First, install the `transformers` library:
```bash
pip install "transformers>=4.57.1"
```
Then, run the following code:
```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

# Single-image UI-to-code request: one user turn with a screenshot and an instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://raw.githubusercontent.com/zheny2751-dotcom/UI2Code-N/main/assets/example.png"
            },
            {
                "type": "text",
                "text": "Please generate the corresponding html code for the given UI screenshot."
            }
        ],
    }
]

processor = AutoProcessor.from_pretrained("zai-org/UI2Code_N")
model = AutoModelForImageTextToText.from_pretrained(
    pretrained_model_name_or_path="zai-org/UI2Code_N",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Render the chat template, tokenize, and move the tensors to the model's device.
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

# Generous token budget: a complete HTML page can be long.
generated_ids = model.generate(**inputs, max_new_tokens=16384)

# Decode only the newly generated tokens, skipping the prompt.
output_text = processor.decode(generated_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=False)
print(output_text)
```
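The decoded text contains the model's full response (including special tokens, since `skip_special_tokens=False`). As a convenience, here is a small, hedged helper for extracting the HTML and writing it to disk; the assumption that the model wraps its answer in a triple-backtick `html` fence is ours, so adjust the pattern to the actual output format you observe.

```python
import re

def extract_html(output_text: str) -> str:
    """Pull the HTML document out of the model's decoded output.

    Assumption (ours, not stated by the model card): the answer is wrapped
    in a triple-backtick html fence; if not, fall back to the raw text.
    """
    match = re.search(r"`{3}html\s*(.*?)`{3}", output_text, flags=re.DOTALL)
    return match.group(1).strip() if match else output_text.strip()

# Save the extracted page so it can be opened or re-rendered in a browser.
with open("generated_ui.html", "w", encoding="utf-8") as f:
    f.write(extract_html(output_text))
```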
See our [GitHub repository](https://github.com/zai-org/UI2Code_N) for more detailed usage.
## Citation
If you find our model useful in your work, please cite it with:
```bibtex
@article{ui2coden2025,
title = {UI2Code$^{N}$: A Visual Language Model for Test-Time Scalable Interactive UI-to-Code Generation},
author = {Yang, Zhen and Hong, Wenyi and Xu, Mingde and Fan, Xinyue and Wang, Weihan and Gu, Xiaotao and Tang, Jie},
journal = {arXiv preprint arXiv:2511.08195},
year = {2025}
}
```