---
license: apache-2.0
---
# Qwen-Image Image Structure Control Model - Depth ControlNet

![](./assets/cover.png)

## Model Introduction

This model is a structure control model for [Qwen-Image](https://www.modelscope.cn/models/Qwen/Qwen-Image). It adopts a ControlNet architecture and uses depth maps to control the structure of generated images. Training was done with [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) on the [BLIP3o](https://modelscope.cn/datasets/BLIP3o/BLIP3o-60k) dataset.

## Result Demonstration

|Depth Map|Generated Image 1|Generated Image 2|
|-|-|-|
|![](./assets/depth2.jpg)|![](./assets/image2_0.jpg)|![](./assets/image2_1.jpg)|
|![](./assets/depth3.jpg)|![](./assets/image3_0.jpg)|![](./assets/image3_1.jpg)|
|![](./assets/depth1.jpg)|![](./assets/image1_0.jpg)|![](./assets/image1_1.jpg)|

## Inference Code
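Install DiffSynth-Studio from source: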
```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

```python
from diffsynth.pipelines.qwen_image import QwenImagePipeline, ModelConfig, ControlNetInput
from PIL import Image
import torch
from modelscope import dataset_snapshot_download


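# Load the Qwen-Image base model (DiT, text encoder, VAE) together with the depth ControlNet weights.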
pipe = QwenImagePipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda",
    model_configs=[
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="transformer/diffusion_pytorch_model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="text_encoder/model*.safetensors"),
        ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="vae/diffusion_pytorch_model.safetensors"),
        ModelConfig(model_id="DiffSynth-Studio/Qwen-Image-Blockwise-ControlNet-Depth", origin_file_pattern="model.safetensors"),
    ],
    tokenizer_config=ModelConfig(model_id="Qwen/Qwen-Image", origin_file_pattern="tokenizer/"),
)

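# Download an example depth map from the demo dataset.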
dataset_snapshot_download(
    dataset_id="DiffSynth-Studio/example_image_dataset",
    local_dir="./data/example_image_dataset",
    allow_file_pattern="depth/image_1.jpg"
)

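# Resize the depth map to the desired output resolution (1328x1328 here).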
controlnet_image = Image.open("data/example_image_dataset/depth/image_1.jpg").resize((1328, 1328))

prompt = "Exquisite portrait, underwater girl, flowing blue dress, gently floating hair, translucent lighting, surrounded by bubbles, serene expression, intricate details, dreamy and ethereal."
image = pipe(
    prompt, seed=0,
    blockwise_controlnet_inputs=[ControlNetInput(image=controlnet_image)]
)
image.save("image.jpg")
```
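
The example above downloads a ready-made depth map. To condition on your own photo instead, you first need to estimate a depth map for it. Below is a minimal sketch assuming the Hugging Face `transformers` depth-estimation pipeline, with `depth-anything/Depth-Anything-V2-Small-hf` as an illustrative model choice; any monocular depth estimator that outputs a grayscale depth image should work:

```python
# Sketch only: estimate a depth map from an RGB photo. The model name below
# is an illustrative assumption, not part of this repository.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")
photo = Image.open("photo.jpg")
depth_map = depth_estimator(photo)["depth"]  # PIL grayscale depth image
controlnet_image = depth_map.resize((1328, 1328))
```

The resulting `controlnet_image` can then be passed to `ControlNetInput` exactly as in the script above.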