Improve model card: Add license and expand description

#2 · opened by nielsr (HF Staff)
Files changed (1): README.md (+11, -4)
```diff
@@ -1,12 +1,19 @@
 ---
-library_name: transformers
-pipeline_tag: image-text-to-text
 base_model:
 - lmms-lab/llava-onevision-qwen2-7b-ov
+library_name: transformers
+pipeline_tag: image-text-to-text
+license: cc-by-nc-4.0
 ---
 
-This is the output reward model (ORM) used in [T2I-R1](https://github.com/CaraJ7/T2I-R1).
+This is the **Output Reward Model (ORM)** used in the paper [T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT](https://arxiv.org/pdf/2505.00703).
+
+T2I-R1 is a reasoning-enhanced text-to-image generation model trained with Reinforcement Learning (RL) and a bi-level Chain-of-Thought (CoT) process. This ORM evaluates generated images as part of the reward ensemble used to optimize the two levels of CoT:
+1. **Semantic-level CoT**: high-level planning of the prompt before generation.
+2. **Token-level CoT**: low-level, patch-by-patch reasoning during image token generation.
+
+The paper introduces BiCoT-GRPO with an ensemble of generation rewards, which optimizes both CoTs within the same training step. Applied to the baseline model Janus-Pro, T2I-R1 achieves a 13% improvement on T2I-CompBench and a 19% improvement on the WISE benchmark, surpassing the state-of-the-art model FLUX.1.
 
 This model is fine-tuned from [lmms-lab/llava-onevision-qwen2-7b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).
 
-Please check our paper: "[T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT](https://arxiv.org/pdf/2505.00703)" and [GitHub](https://github.com/CaraJ7/T2I-R1) for more information.
+For more details, please refer to the [official paper](https://arxiv.org/pdf/2505.00703) and the [GitHub repository](https://github.com/CaraJ7/T2I-R1).
```
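
Since the card now sets `library_name: transformers` and `pipeline_tag: image-text-to-text`, a short loading sketch could round out the description. Below is a minimal sketch, assuming the checkpoint is in the Hugging Face LLaVA-OneVision format implied by that metadata (lmms-lab-style checkpoints may instead require the original LLaVA codebase); the repo id and prompt are placeholders, not taken from the card.

```python
# Minimal usage sketch for the ORM via transformers, per the card's
# library_name/pipeline_tag metadata. The repo id below is a
# hypothetical placeholder; substitute this model's actual Hub id.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "CaraJ/T2I-R1-ORM"  # hypothetical repo id

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Ask the ORM to judge a generated image against its prompt
# (image-text-to-text); a reward can be derived from the answer.
image = Image.open("generated.png")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text",
             "text": "Does this image match the prompt 'a red cube on a "
                     "blue sphere'? Answer Yes or No."},
        ],
    }
]
text = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=8)
print(processor.decode(output[0], skip_special_tokens=True))
```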