---
license: other
license_name: multiid-2m
license_link: LICENSE.md
language:
- en
task_categories:
- text-to-image
tags:
- face-generation
- identity-consistent
- multi-person
- benchmark
dataset_info:
  features:
  - name: ID
    dtype: string
  - name: GT
    dtype: image
  - name: input_images
    sequence: image
  - name: prompt
    dtype: string
  - name: task_type
    dtype: string
  - name: bboxes
    dtype: string
  - name: subset
    dtype: string
  - name: num_persons
    dtype: int32
  splits:
  - name: train
    num_bytes: 194376894.0
    num_examples: 433
  download_size: 183022320
  dataset_size: 194376894.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# MultiID-Bench in WithAnyone

[![arXiv](https://img.shields.io/badge/arXiv-2510.14975-b31b1b.svg)](https://arxiv.org/abs/2510.14975)
[![Project Page](https://img.shields.io/badge/Project-Page-blue.svg)](https://doby-xu.github.io/WithAnyone/)
[![Code](https://img.shields.io/badge/GitHub-Code-blue.svg)](https://github.com/Doby-Xu/WithAnyone)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-Model-yellow.svg)](https://huggingface.co/WithAnyone/WithAnyone)
[![MultiID-Bench](https://img.shields.io/badge/MultiID-Bench-Green.svg)](https://huggingface.co/datasets/WithAnyone/MultiID-Bench)
[![MultiID-2M](https://img.shields.io/badge/MultiID_2M-Dataset-Green.svg)](https://huggingface.co/datasets/WithAnyone/MultiID-2M)
[![Demo](https://img.shields.io/badge/HuggingFace-Demo-Yellow.svg)](https://huggingface.co/spaces/WithAnyone/WithAnyone_demo)

MultiID-Bench is a benchmark introduced in the paper [WithAnyone: Towards Controllable and ID Consistent Image Generation](https://huggingface.co/papers/2510.14975). Tailored to multi-person text-to-image scenarios, it provides diverse reference images for each identity. The benchmark is designed to quantify "copy-paste" artifacts and to evaluate the trade-off between identity fidelity and variation, enabling models such as WithAnyone to achieve controllable, identity-consistent image generation.

## Links

*   **Paper:** [WithAnyone: Towards Controllable and ID Consistent Image Generation](https://huggingface.co/papers/2510.14975)
*   **Project Page:** https://doby-xu.github.io/WithAnyone/
*   **GitHub Repository:** https://github.com/Doby-Xu/WithAnyone
*   **WithAnyone Model:** https://huggingface.co/WithAnyone/WithAnyone
*   **WithAnyone Demo:** https://huggingface.co/spaces/WithAnyone/WithAnyone_demo

## Sample Usage

This section provides instructions for downloading the MultiID-Bench dataset and preparing it for evaluation.

### Download the Dataset

You can download the MultiID-Bench dataset using the Hugging Face CLI:

```bash
huggingface-cli download WithAnyone/MultiID-Bench --repo-type dataset --local-dir <path to MultiID-Bench directory>
```
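
Alternatively, you can load the benchmark directly with the 🤗 `datasets` library. The sketch below follows the schema declared in this card (`ID`, `GT`, `input_images`, `prompt`, `task_type`, `bboxes`, `subset`, `num_persons`); treat the field access as illustrative rather than authoritative:

```python
from datasets import load_dataset

# Single "train" split with 433 examples, per the dataset card above
ds = load_dataset("WithAnyone/MultiID-Bench", split="train")

example = ds[0]
print(example["ID"], example["task_type"], example["num_persons"])
print(example["prompt"])

# "GT" decodes to a PIL image; "input_images" is a sequence of reference images
example["GT"].save("gt.jpg")
for i, ref in enumerate(example["input_images"], start=1):
    ref.save(f"ref_{i}.jpg")
```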

### Prepare Data for Evaluation

After downloading, you can convert the `parquet` files into a structured directory of images and JSON metadata using the `parquet2bench.py` script provided in the GitHub repository.

First, ensure you have cloned the GitHub repository:
```bash
git clone https://github.com/Doby-Xu/WithAnyone
cd WithAnyone
```

Then, convert the downloaded parquet file:
```bash
python MultiID_Bench/parquet2bench.py --parquet <path to downloaded parquet file> --output_dir <root directory to save the processed data>
```
The output directory will contain a structure like this, with subfolders for each ID and `meta.json` files containing prompts:
```
root/
β”œβ”€β”€ id1/
β”‚   β”œβ”€β”€ out.jpg
β”‚   β”œβ”€β”€ ori.jpg
β”‚   β”œβ”€β”€ ref_1.jpg
β”‚   β”œβ”€β”€ ref_2.jpg
β”‚   β”œβ”€β”€ ref_3.jpg
β”‚   β”œβ”€β”€ ref_4.jpg
β”‚   └── meta.json
β”‚
β”œβ”€β”€ id2/
β”‚   β”œβ”€β”€ out.jpg
β”‚   β”œβ”€β”€ ori.jpg
β”‚   β”œβ”€β”€ ref_1.jpg
β”‚   β”œβ”€β”€ ref_2.jpg
β”‚   β”œβ”€β”€ ref_3.jpg
β”‚   β”œβ”€β”€ ref_4.jpg
β”‚   └── meta.json
β”‚
└── ...
```
The `meta.json` file should contain the prompt used to generate the image, in the following format:
```json
{
    "prompt": "a photo of a person with blue hair and glasses"
}
```
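
As a quick sanity check after conversion, a minimal sketch like the following can walk the converted directory and print each ID's prompt and reference count. It assumes the layout shown above; the root path is a placeholder for your own `--output_dir`:

```python
import json
from pathlib import Path

root = Path("multiid_bench")  # placeholder: the --output_dir passed to parquet2bench.py

for id_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    meta = json.loads((id_dir / "meta.json").read_text())
    refs = sorted(id_dir.glob("ref_*.jpg"))
    print(f"{id_dir.name}: {len(refs)} reference image(s), prompt={meta['prompt']!r}")
```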

### Environment Setup for Evaluation

To run the evaluation scripts, you need to install several packages. Besides the `requirements.txt` from the [GitHub repo](https://github.com/Doby-Xu/WithAnyone), install the following:

```bash
pip install aesthetic-predictor-v2-5 facexlib colorama pytorch_lightning
git clone https://github.com/timesler/facenet-pytorch.git facenet_pytorch

# in MultiID_Bench/
mkdir pretrained
```

You will also need the following models to run the evaluation: CLIP, ArcFace, aesthetic-v2.5, AdaFace, and FaceNet. The first three are downloaded automatically, and FaceNet is provided by the `facenet_pytorch` clone above. For AdaFace, download `adaface_ir50_ms1mv2.ckpt` from [this link](https://drive.google.com/file/d/1eUaSHG4pGlIZK7hBkqjyp2fc2epKoBvI/view?usp=sharing) and place it in the `pretrained` directory.
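
To catch a missing checkpoint before starting a long evaluation run, a small check like this can help (it assumes the `pretrained` directory created above, relative to the repository root):

```python
from pathlib import Path

# Assumed location: MultiID_Bench/pretrained/, created in the setup step above
ckpt = Path("MultiID_Bench/pretrained/adaface_ir50_ms1mv2.ckpt")
if not ckpt.is_file():
    raise FileNotFoundError(f"AdaFace checkpoint not found at {ckpt}; download it first")
print(f"Found {ckpt.name} ({ckpt.stat().st_size / 1e6:.1f} MB)")
```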

### Run Evaluation

You can run the evaluation script as follows, using the prepared data:

```python
from eval import BenchEval_Geo

def run():
    evaler = BenchEval_Geo(
        target_dir="<root directory mentioned above>",
        output_dir="<output directory to save the evaluation results>",
        ori_file_name="ori.jpg",        # name of the ground-truth image file
        output_file_name="out.jpg",     # name of the generated image file
        ref_1_file_name="ref_1.jpg",    # name of the first reference image file
        ref_2_file_name="ref_2.jpg",    # name of the second reference image file
        # ref_2_file_name=None,         # if you only have one reference image, set ref_2_file_name to None
        # ref_3_file_name="ref_3.jpg",  # name of the third reference image file
        # ref_4_file_name="ref_4.jpg",  # name of the fourth reference image file
        caption_keyword="prompt",       # keyword used to extract the prompt from meta.json
        names_keyword=None,
    )
    evaler()

if __name__ == "__main__":
    run()
```
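
Since the script imports `eval` directly, it presumably needs to be run from the `MultiID_Bench/` directory (or with that directory on `PYTHONPATH`). The metric results are written to the specified `output_dir`.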

## License and Disclaimer

The **code** of WithAnyone is released under the [**Apache License 2.0**](https://www.apache.org/licenses/LICENSE-2.0), while the WithAnyone **model and associated datasets** are made available **solely for non-commercial academic research purposes**.

-   **License Terms:**  
    The WithAnyone model is distributed under the [**FLUX.1 [dev] Non-Commercial License v1.1.1**](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). All underlying base models remain governed by their respective original licenses and terms, which shall continue to apply in full. Users must comply with all such applicable licenses when using this project.

-   **Permitted Use:**  
    This project may be used for lawful academic research, analysis, and non-commercial experimentation only. Any form of commercial use, redistribution for profit, or application that violates applicable laws, regulations, or ethical standards is strictly prohibited.

-   **User Obligations:**  
    Users are solely responsible for ensuring that their use of the model and dataset complies with all relevant laws, regulations, institutional review policies, and third-party license terms.

-   **Disclaimer of Liability:**  
    The authors, developers, and contributors make no warranties, express or implied, regarding the accuracy, reliability, or fitness of this project for any particular purpose. They shall not be held liable for any damages, losses, or legal claims arising from the use or misuse of this project, including but not limited to violations of law or ethical standards by end users.

-   **Acceptance of Terms:**  
    By downloading, accessing, or using this project, you acknowledge and agree to be bound by the applicable license terms and legal requirements, and you assume full responsibility for all consequences resulting from your use.

## Acknowledgement
We thank the authors of the following prior works for their excellent open-source contributions:
- [PuLID](https://github.com/ToTheBeginning/PuLID)
- [UNO](https://github.com/bytedance/UNO)
- [UniPortrait](https://github.com/junjiehe96/UniPortrait)
- [InfiniteYou](https://github.com/bytedance/InfiniteYou)
- [DreamO](https://github.com/bytedance/DreamO)
- [UMO](https://github.com/bytedance/UMO)

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{xu2025withanyone,
  title={WithAnyone: Towards Controllable and ID-Consistent Image Generation}, 
  author={Hengyuan Xu and Wei Cheng and Peng Xing and Yixiao Fang and Shuhan Wu and Rui Wang and Xianfang Zeng and Gang Yu and Xinjun Ma and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2510.14975},
  year={2025}
}
```