Scone: Bridging Composition and Distinction in Subject-Driven Image Generation
via Unified Understanding-Generation Modeling
Yuran Wang1,2* Bohan Zeng1,2* Chengzhuo Tong1,2 Wenxuan Liu1 Yang Shi1,2
Xiaochen Ma1 Hao Liang1 Yuanxing Zhang2 Wentao Zhang1✉
1Peking University 2Kling Team, Kuaishou Technology
* Equal contribution, ✉ Corresponding author
📢 News
- 2025.12.16: The paper, training code, inference and evaluation code, model weights, training data, and the SconeEval benchmark are now released.
📖 Introduction
Subject-driven image generation has recently gained significant attention, with the focus evolving from single-subject to multi-subject generation that incorporates more input images. Existing methods can process two or more input images and combine subjects based on instructions, showing potential for more complex composition tasks.
However, existing works primarily focus on expanding subject combinations while neglecting the ability to distinguish target subjects in complex contexts. As shown in Figure 1(a), although current models can combine multiple subjects, they may fail to identify and generate the correct target subject when a reference image contains multiple candidates, leading to problems such as subject omission (none of the candidate subjects appears) or subject error (misidentification of the target subject). Real-world images often involve interference and intricate details, which further limit practical performance. We therefore emphasize examining the input subjects themselves, focusing on the model's ability to distinguish the target subject within complex contexts and to leverage this information for generation.
- We propose Scone (Subject-driven composition and distinction enhancement), a model that supports multi-subject composition and excels at subject distinction in complex contexts. Experiments show that Scone ranks first among open-source models on the OmniContext benchmark.
- We introduce the understanding bridge strategy, which turns the understanding expert into a semantic bridge, enabling early multimodal alignment and attention-based semantic filtering to guide the generation expert. This improves subject distinction and semantic fidelity without adding extra parameters (a minimal illustrative sketch follows this list).
- We develop SconeEval, a challenging benchmark with three difficulty levels, to evaluate subject-driven image generation from both composition and distinction perspectives.
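The exact bridge design is specified in the paper; purely as an illustration of the attention-based semantic filtering idea, the sketch below (our assumption, not the released implementation; all names are hypothetical) aggregates instruction-to-image attention from the understanding expert and keeps only the most relevant reference-image tokens to guide the generation expert.

```python
# Illustrative sketch only -- not the official Scone implementation.
import torch

def semantic_mask_from_attention(attn: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """attn: [num_instruction_tokens, num_image_tokens] attention weights from the
    understanding expert. Returns a boolean mask over reference-image tokens."""
    relevance = attn.mean(dim=0)                     # how strongly the instruction attends to each image token
    k = max(1, int(keep_ratio * relevance.numel()))  # number of image tokens to keep
    mask = torch.zeros_like(relevance, dtype=torch.bool)
    mask[torch.topk(relevance, k).indices] = True    # kept tokens are allowed to guide generation
    return mask

# Toy usage: 12 instruction tokens attending over 256 reference-image tokens.
attn = torch.rand(12, 256).softmax(dim=-1)
mask = semantic_mask_from_attention(attn)
print(int(mask.sum()), "of 256 image tokens kept for the generation expert")
```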
🔧 Environment setup
git clone https://github.com/Ryann-Ran/Scone.git
cd Scone
conda create -n scone python=3.10 -y
conda activate scone
pip install -r requirements.txt
pip install flash_attn==2.5.8 --no-build-isolation
🔥 Train
Data and base model preparation
Download our 22K refined single-candidate data and 35K multi-candidate data from Scone-S2I-57K. The 70K base single-candidate data are sampled from open-source datasets such as X2I, MUSAR-Gen, UNO-1M, and Echo-4o-Image. Please refer to the dataset links for more details.
cd Scone
# pip install -U huggingface_hub
hf download Ryann829/Scone-S2I-57K --repo-type=dataset --local-dir ./datasets/Scone-S2I-57K
Organize the data hierarchy as follows:
Scone-S2I-57K
├── parquet_data
│   ├── scone_single_candidate_base/
│   ├── scone_single_candidate_refined/
│   └── scone_multi_candidate/
└── parquet_info
    ├── scone_single_candidate_base.json
    ├── scone_single_candidate_refined.json
    └── scone_multi_candidate.json
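After downloading, a quick sanity check of one shard can confirm the data is readable. The snippet below is only a generic inspection sketch; the shard file names and column schema are assumptions, not documented here.

```python
# Sanity-check sketch: read the first parquet shard of one split and print its shape.
# Paths follow the directory tree above; column names depend on the actual dataset.
import glob
import pandas as pd

shards = sorted(glob.glob("./datasets/Scone-S2I-57K/parquet_data/scone_multi_candidate/*.parquet"))
df = pd.read_parquet(shards[0])
print(df.shape)
print(df.columns.tolist())
```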
Replace each `your_data_path` placeholder with your actual absolute path in:
- Parquet information files: `./datasets/Scone-S2I-57K/parquet_info/*.json`
- Dataset information file: `./data/dataset_info.py`
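If you prefer to do the substitution in bulk, a small helper like the one below works, assuming the placeholder appears as the literal string your_data_path in these files.

```python
# Bulk-replace the your_data_path placeholder with your absolute data path.
import glob
from pathlib import Path

DATA_ROOT = "/absolute/path/to/Scone/datasets"  # <-- change to your actual absolute path

targets = glob.glob("./datasets/Scone-S2I-57K/parquet_info/*.json") + ["./data/dataset_info.py"]
for fp in targets:
    path = Path(fp)
    path.write_text(path.read_text().replace("your_data_path", DATA_ROOT))
```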
Download the checkpoint of our base model BAGEL from HuggingFace:
cd Scone
# pip install -U huggingface_hub
hf download ByteDance-Seed/BAGEL-7B-MoT --local-dir ./ckpts/BAGEL-7B-MoT
- Note: To avoid out-of-memory (OOM) issues, we disable the EMA update strategy originally used in BAGEL. All our training runs are conducted on 8 NVIDIA A800 GPUs.
- The use of the semantic mask in the understanding bridge strategy is controlled by the training argument `--use_semantic_mask`.
Stage I: Composition training
For Step 1, please use base single-candidate data for 1 epoch (~30 hours):
bash scripts/train_stage1_step1.sh # 🔥 Und., Gen.
For Step 2, please use the refined single-candidate data for 1 epoch (~15 hours) and replace `model_path` in the script with your Step 1 checkpoint:
bash scripts/train_stage1_step2.sh # 🔥 Und., Gen.
Stage II: Distinction training with understanding bridge strategy
For Step 1, please use the refined single-candidate data and multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage 1 Step 2 checkpoint:
bash scripts/train_stage2_step1.sh # 🔥 Und. ❄️ Gen.
For Step 2, please use the refined single-candidate data and multi-candidate data for 1k steps (~5 hours) and replace `model_path` in the script with your Stage 2 Step 1 checkpoint:
bash scripts/train_stage2_step2.sh # 🔥 Und., Gen.
🚀 Inference and Evaluation
Scone model preparation
Download the Scone model checkpoint from HuggingFace:
# pip install -U huggingface_hub
hf download Ryann829/Scone --local-dir ./ckpts/Scone
Single case inference
Run the inference script:
bash scripts/inference_single_case.sh
Example output (images sampled at 1024x1024 resolution with seed 1234, except for the GPT-4o and Gemini-2.5-Flash-Image APIs). The comparison grid (images omitted here) shows Ref. 1 and Ref. 2 together with the instruction "The man from image 2 holds the object which has a blue-and-red top in image 1 in a coffee shop.", followed by the outputs of Scone (Ours), GPT-4o, Gemini-2.5-Flash-Image, UNO, Qwen-Image-Edit-2509, BAGEL, OmniGen2, and Echo-4o.
📊 Performance
OmniContext benchmark
| Method | Single: Character ↑ | Single: Object ↑ | Multiple: Character ↑ | Multiple: Object ↑ | Multiple: Char. + Obj. ↑ | Scene: Character ↑ | Scene: Object ↑ | Scene: Char. + Obj. ↑ | Average ↑ |
|---|---|---|---|---|---|---|---|---|---|
| **Closed-Source Model** | | | | | | | | | |
| Gemini-2.5-Flash-Image | 8.79 | 9.12 | 8.27 | 8.60 | 7.71 | 7.63 | 7.65 | 6.81 | 8.07 |
| GPT-4o* | 8.96 | 8.91 | 8.90 | 8.95 | 8.81 | 8.92 | 8.40 | 8.44 | 8.78 |
| **Generation Model** | | | | | | | | | |
| FLUX.1 Kontext [dev] | 8.07 | 7.97 | - | - | - | - | - | - | - |
| UNO | 7.15 | 6.72 | 3.56 | 6.46 | 4.90 | 2.72 | 4.89 | 4.76 | 5.14 |
| USO | 8.03 | 7.55 | 3.32 | 6.10 | 4.56 | 2.77 | 5.38 | 5.09 | 5.35 |
| UniWorld-V2 | 8.45 | 8.44 | 7.87 | 8.22 | 7.95 | 5.36 | 7.47 | 6.98 | 7.59 |
| Qwen-Image-Edit-2509 | 8.56 | 8.41 | 7.92 | 8.37 | 7.79 | 5.23 | 7.70 | 6.86 | 7.60 |
| **Unified Model** | | | | | | | | | |
| BAGEL | 7.00 | 7.04 | 5.32 | 6.69 | 6.74 | 3.94 | 5.77 | 5.73 | 6.03 |
| OmniGen2 | 8.17 | 7.63 | 7.26 | 7.03 | 7.56 | 7.02 | 6.90 | 6.64 | 7.28 |
| Echo-4o | 8.34 | 8.27 | 8.13 | 8.14 | 8.11 | 7.07 | 7.73 | 7.77 | 7.95 |
| Scone (Ours) | 8.34 | 8.52 | 8.24 | 8.14 | 8.30 | 7.06 | 7.88 | 7.63 | 8.01 |
- *: GPT-4o responded to 365~370 test cases out of the total 409 cases due to OpenAI safety restrictions.
- To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.
SconeEval benchmark
(Comp. = Composition, Dist. = Distinction, D&C = Distinction & Composition; Cross/Intra = cross-/intra-category cases.)

| Method | Comp. Single COM ↑ | Comp. Multi COM ↑ | Dist. Cross COM ↑ | Dist. Cross DIS ↑ | Dist. Intra COM ↑ | Dist. Intra DIS ↑ | D&C Cross COM ↑ | D&C Cross DIS ↑ | D&C Intra COM ↑ | D&C Intra DIS ↑ | Avg. COM ↑ | Avg. DIS ↑ | Overall ↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **Closed-Source Model** | | | | | | | | | | | | | |
| Gemini-2.5-Flash-Image | 8.87 | 7.94 | 9.12 | 9.15 | 9.00 | 8.50 | 8.27 | 8.87 | 8.17 | 8.85 | 8.56 | 8.84 | 8.70 |
| GPT-4o* | 8.92 | 8.51 | 9.18 | 8.55 | 9.45 | 9.01 | 8.83 | 8.49 | 8.99 | 9.56 | 8.98 | 8.90 | 8.94 |
| **Generation Model** | | | | | | | | | | | | | |
| FLUX.1 Kontext [dev] | 7.92 | - | 7.93 | 8.45 | 6.20 | 6.11 | - | - | - | - | - | - | - |
| USO | 8.03 | 5.19 | 7.96 | 8.50 | 7.14 | 6.51 | 5.10 | 6.25 | 5.07 | 5.57 | 6.41 | 6.71 | 6.56 |
| UNO | 7.53 | 5.38 | 7.27 | 7.90 | 6.76 | 6.53 | 5.27 | 7.02 | 5.61 | 6.27 | 6.31 | 6.93 | 6.62 |
| UniWorld-V2 (Edit-R1-Qwen-Image-Edit-2509) | 8.41 | 7.16 | 8.63 | 8.24 | 7.44 | 6.77 | 7.52 | 8.03 | 7.70 | 7.24 | 7.81 | 7.57 | 7.69 |
| Qwen-Image-Edit-2509 | 8.54 | 6.85 | 8.85 | 8.57 | 7.32 | 6.86 | 7.53 | 8.13 | 7.49 | 7.02 | 7.76 | 7.65 | 7.70 |
| **Unified Model** | | | | | | | | | | | | | |
| BAGEL | 7.14 | 5.55 | 7.49 | 7.95 | 6.93 | 6.21 | 6.44 | 7.38 | 6.87 | 7.27 | 6.74 | 7.20 | 6.97 |
| OmniGen2 | 8.00 | 6.59 | 8.31 | 8.99 | 6.99 | 6.80 | 7.28 | 8.30 | 7.14 | 7.13 | 7.39 | 7.81 | 7.60 |
| Echo-4o | 8.58 | 7.73 | 8.36 | 8.33 | 7.74 | 7.18 | 7.87 | 8.72 | 8.01 | 8.33 | 8.05 | 8.14 | 8.09 |
| Scone (Ours) | 8.52 | 7.40 | 8.98 | 9.73 | 7.97 | 7.74 | 8.20 | 9.25 | 8.21 | 8.44 | 8.21 | 8.79 | 8.50 |
- *: GPT-4o responded to 365~370 test cases out of the total 409 cases due to OpenAI safety restrictions.
- To mitigate randomness, we perform 3 rounds of sampling at 1024x1024 resolution, scoring 3 times per round, yielding 9 group results. The final score is the average of these results.
SconeEval benchmark
To evaluate a model's ability to distinguish and generate the referred subject in complex visual contexts, we introduce a new benchmark, SconeEval. It contains 409 test cases covering character, object, and scene combinations as well as subject distinction, with 19 case types (Figure 2(a)) and 6 subtasks (Figure 2(b)), providing a comprehensive evaluation of a model's ability to distinguish and utilize subject features.
Unlike traditional benchmarks that emphasize visual fidelity or text alignment, SconeEval focuses on cross-modal reasoning over complex contexts involving reference images and instructions, which requires deciding which subject to generate when multiple candidates appear within or across images.
SconeEval includes three progressively challenging tasks, as shown in Figure 2(c): composition, distinction, and distinction & composition. In the composition task, each reference image contains a single subject, and one or more reference images are used for single- or multi-subject generation. In the distinction task, each reference image contains multiple subjects, and the model must generate only the target subject. The distinction & composition task integrates both settings: each reference image contains multiple subjects, and multiple images are used for multi-subject generation. Tasks involving distinction include cross-category and intra-category cases, indicating whether the candidate subjects in a reference image belong to different categories or the same category.
Inference
Download the data:
# pip install -U huggingface_hub
hf download Ryann829/SconeEval --repo-type=dataset --local-dir ../SconeEval
Run the script:
bash scripts/inference_sconeeval.sh
Evaluation
Use GPT-4.1 to evaluate the quality of the generated images and calculate the final score. Please ensure your API key is configured before running the script.
bash eval/s2i/sconeeval/eval.sh
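How the key is supplied depends on eval/s2i/sconeeval/eval.sh; one assumed setup is to export it as an environment variable before launching the script, as sketched below (the variable name OPENAI_API_KEY is an assumption; check the script for the exact name it expects).

```python
# Assumed setup: expose the API key through an environment variable, then run the
# evaluation script. Verify the variable name against eval/s2i/sconeeval/eval.sh.
import os
import subprocess

os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # placeholder; never commit a real key
subprocess.run(["bash", "eval/s2i/sconeeval/eval.sh"], check=True)
```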
📌 Updates
- Release paper
- Release training code
- Release inference and evaluation code
- Release model weights
- Release training data
- Release SconeEval benchmark
📰 Citation
If you find Scone helpful, please consider giving the repo a star ⭐.
If you find this project useful for your research, please consider citing our paper:
@misc{wang2025sconebridgingcompositiondistinction,
title={Scone: Bridging Composition and Distinction in Subject-Driven Image Generation via Unified Understanding-Generation Modeling},
author={Yuran Wang and Bohan Zeng and Chengzhuo Tong and Wenxuan Liu and Yang Shi and Xiaochen Ma and Hao Liang and Yuanxing Zhang and Wentao Zhang},
year={2025},
eprint={2512.12675},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2512.12675},
}
💪 Acknowledgements
This project builds upon several open-source repositories and datasets. Special thanks to these original projects and open-source datasets for their valuable contributions.