
LibreFLUX-IP-Adapter-ControlNet

Example: Control image vs result

This pipeline combines my LibreFlux-IP-Adapter and LibreFlux ControlNet into a single pipeline, with LibreFLUX as the underlying transformer model.

How does this relate to LibreFLUX?

  • Base model is LibreFLUX
  • Trained in the same non-distilled fashion
  • Uses attention masking
  • Uses CFG (classifier-free guidance) during inference (see the sketch below)
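
Because the model is non-distilled, the pipeline runs true classifier-free guidance: each denoising step makes a conditional and an unconditional (negative-prompt) pass and blends them with guidance_scale. The sketch below shows the standard CFG combination using dummy tensors; names and shapes are illustrative only, not the pipeline's internals.

import torch

# Illustrative sketch of the classifier-free guidance (CFG) blend applied
# per denoising step. The real logic lives inside the custom pipeline.
def cfg_combine(noise_pred_uncond, noise_pred_text, guidance_scale):
    # Push the prediction away from the unconditional branch, toward the
    # text-conditioned branch, scaled by guidance_scale.
    return noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

# Dummy tensors standing in for latent noise predictions
uncond = torch.randn(1, 16, 64, 64)
text   = torch.randn(1, 16, 64, 64)
blended = cfg_combine(uncond, text, guidance_scale=4.0)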

Compatibility

pip install -U diffusers==0.35.2
pip install -U transformers==4.57.1

Low VRAM:

pip install -U optimum-quanto
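
To confirm the pinned versions are active in your environment, an optional quick check:

import diffusers, transformers

# Expect 0.35.2 and 4.57.1, matching the install commands above
print(diffusers.__version__)
print(transformers.__version__)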

Load Pipeline

import torch
from diffusers import DiffusionPipeline
from huggingface_hub import hf_hub_download

model_id = "neuralvfx/LibreFlux-IP-Adapter-ControlNet"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype  = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,      
    torch_dtype=dtype,
    safety_checker=None       
)

# Optional way to download the weights
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

pipe.load_ip_adapter('ip_adapter.pt')

pipe.to(device)
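
Optionally, confirm that the remote custom pipeline resolved to the expected class and that its components are present (this assumes the pipeline exposes transformer and controlnet attributes, as the low-VRAM section below relies on):

print(type(pipe).__name__)  # expected: LibreFluxIPAdapterPipeline
print(pipe.transformer is not None, pipe.controlnet is not None)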

Inference

from PIL import Image


# Optional way to download the test ControlNet image
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="examples/libre_flux_control_image.png",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load Control Image
cond = Image.open("examples/libre_flux_control_image.png").convert("RGB")
cond = cond.resize((1024, 1024))

# Optional way to download the test IP-Adapter image
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="examples/merc.jpeg",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load IP Adapter Image
ip_image = Image.open("examples/merc.jpeg").convert("RGB")
ip_image = ip_image.resize((512, 512))

out = pipe(
    prompt="the words libre flux",
    negative_prompt="blurry",
    control_image=cond,                 # PIL control image loaded above
    num_inference_steps=75,
    guidance_scale=4.0,
    controlnet_conditioning_scale=1.0,
    ip_adapter_image=ip_image,
    ip_adapter_scale=1.0,
    num_images_per_prompt=1,
    generator=torch.Generator().manual_seed(74),
    return_dict=True,
)
out.images[0]
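
To keep the result, save the returned PIL image (the filename here is arbitrary):

out.images[0].save("libre_flux_result.png")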

Load Pipeline (Low VRAM)

import torch
from huggingface_hub import hf_hub_download
from diffusers import DiffusionPipeline
from optimum.quanto import freeze, quantize, qint8

model_id = "neuralvfx/LibreFlux-IP-Adapter-ControlNet"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype  = torch.bfloat16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline=model_id,
    trust_remote_code=True,      
    torch_dtype=dtype,
    safety_checker=None         
)

# Optional way to download the weights
hf_hub_download(
    repo_id="neuralvfx/LibreFlux-IP-Adapter-ControlNet",
    filename="ip_adapter.pt",
    local_dir=".",
    local_dir_use_symlinks=False,
)

# Load the IP-Adapter first, so pipe.ip_adapter exists for the quantize/freeze calls below
pipe.load_ip_adapter('ip_adapter.pt')

# Quantize and Freeze
quantize(
    pipe.transformer,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)

quantize(
    pipe.ip_adapter,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)

quantize(
    pipe.controlnet,
    weights=qint8,
    exclude=[
        "*.norm", "*.norm1", "*.norm2", "*.norm2_context",
        "proj_out", "x_embedder", "norm_out", "context_embedder",
    ],
)

freeze(pipe.transformer)
freeze(pipe.ip_adapter)
freeze(pipe.controlnet)

# Enable Model Offloading
pipe.enable_model_cpu_offload()
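
Inference then uses the same call as in the Inference section above (reusing cond and ip_image); with enable_model_cpu_offload() active there is no need to call pipe.to(device), since offloading handles device placement:

out = pipe(
    prompt="the words libre flux",
    negative_prompt="blurry",
    control_image=cond,
    num_inference_steps=75,
    guidance_scale=4.0,
    controlnet_conditioning_scale=1.0,
    ip_adapter_image=ip_image,
    ip_adapter_scale=1.0,
    num_images_per_prompt=1,
    generator=torch.Generator().manual_seed(74),
)
out.images[0]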