---
license: apache-2.0
library_name: diffusers
pipeline_tag: text-to-image
datasets:
- SA1B
base_model: jimmycarter/LibreFLUX
---
# LibreFLUX-ControlNet
![Example: Control image vs result](examples/side_by_side.png)

This model/pipeline was trained with my [LibreFLUX ControlNet training repo](https://github.com/NeuralVFX/LibreFLUX-ControlNet). For the dataset, I auto-labeled 165K images from Meta's SA1B dataset and trained for the same number of iterations. [LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX) is the base model.
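
For reference, here is a minimal sketch of how segment-style control images could be rendered from SA1B's per-image annotation JSON. This is an illustration only: the actual preprocessing lives in the training repo, and the use of `pycocotools` RLE decoding and flat random per-segment colors are assumptions, not the exact recipe.

```py
# Hedged sketch only -- the real auto-labeling is defined in the training repo.
# Assumes SA1B's per-image JSON with COCO-RLE "segmentation" fields.
import json

import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

def render_control_image(annotation_path: str) -> Image.Image:
    """Render every mask in one SA1B annotation file as a flat-colored segment."""
    with open(annotation_path) as f:
        data = json.load(f)
    h, w = data["image"]["height"], data["image"]["width"]
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    rng = np.random.default_rng(0)
    for ann in data["annotations"]:
        seg = mask_utils.decode(ann["segmentation"]).astype(bool)  # H x W mask
        canvas[seg] = rng.integers(0, 256, size=3)  # one flat color per segment
    return Image.fromarray(canvas)

control = render_control_image("path/to/sa1b_annotation.json")  # hypothetical path
```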

**ControlNet trained on top of LibreFLUX**
- Uses Attention Masking
- Inference runs with CFG
- Trained on 165K segmented images from Meta's [SA1B Dataset](https://ai.meta.com/datasets/segment-anything/)
- Trained using: [https://github.com/NeuralVFX/LibreFLUX-ControlNet](https://github.com/NeuralVFX/LibreFLUX-ControlNet)
- Base model used: [https://huggingface.co/jimmycarter/LibreFLUX](https://huggingface.co/jimmycarter/LibreFLUX)
- Inference code adapted from: [https://github.com/bghira/SimpleTuner](https://github.com/bghira/SimpleTuner)
  
# Compatibility
```bash
pip install -U diffusers==0.32.0
pip install -U "transformers @ git+https://github.com/huggingface/transformers@e15687fffe5c9d20598a19aeab721ae0a7580f8a"
```
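
A quick sanity check that the pinned versions are active:

```py
import diffusers, transformers

print(diffusers.__version__)     # expected: 0.32.0
print(transformers.__version__)  # dev version from the pinned commit
```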

# Load Pipeline
```py
import torch
from diffusers import DiffusionPipeline

model_id = "neuralvfx/LibreFlux-ControlNet"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

# The custom pipeline class ships with the checkpoint, so trust_remote_code
# is required for it to load.
pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,
    torch_dtype=dtype,
    safety_checker=None,
).to(device)
```
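
If you are VRAM-constrained, diffusers' standard CPU offload hook may help. This is an untested sketch; it assumes the bundled custom pipeline supports the standard offload mechanism:

```py
# Optional: offload submodules to CPU between uses to reduce peak VRAM.
# Skip the .to(device) call above when using offload.
pipe.enable_model_cpu_offload()
```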

# Inference
```py
from PIL import Image
from torchvision.transforms import ToTensor

# Load the control image and resize it to the generation resolution
cond = Image.open("examples/libre_flux_control_image.png")
cond = cond.resize((1024, 1024))

# Convert the PIL image to a (1, 3, H, W) tensor on the pipeline's device/dtype
cond_tensor = ToTensor()(cond)[:3, :, :].to(pipe.device, dtype=pipe.dtype).unsqueeze(0)

out = pipe(
    prompt="many pieces of drift wood spelling libre flux sitting casting shadow on the lumpy sandy beach with foot prints all over it",
    negative_prompt="blurry",
    control_image=cond_tensor,  # pass the tensor, not the PIL image
    num_inference_steps=75,
    guidance_scale=4.0,
    height=1024,
    width=1024,
    controlnet_conditioning_scale=1.0,
    num_images_per_prompt=1,
    control_mode=None,
    generator=torch.Generator().manual_seed(32),
    return_dict=True,
)
out.images[0].save("libre_flux_controlnet_result.png")
```
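
Since the output is deterministic given a seed, a simple way to compare samples is to keep the settings above fixed and vary only the generator:

```py
# Same settings as the call above; only the seed changes
common = dict(
    prompt="many pieces of drift wood spelling libre flux sitting casting shadow on the lumpy sandy beach with foot prints all over it",
    negative_prompt="blurry",
    control_image=cond_tensor,
    num_inference_steps=75,
    guidance_scale=4.0,
    height=1024,
    width=1024,
    controlnet_conditioning_scale=1.0,
    num_images_per_prompt=1,
    control_mode=None,
)
for seed in (7, 21, 32):
    image = pipe(generator=torch.Generator().manual_seed(seed), **common).images[0]
    image.save(f"libre_flux_seed_{seed}.png")
```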