Original model: [Qwen/Qwen-Image](https://huggingface.co/Qwen/Qwen-Image)

This repository contains a GGUF-quantized checkpoint of the Qwen-Image transformer that can be loaded with Diffusers.

Code to run the checkpoint:
```python
from diffusers import QwenImageTransformer2DModel, GGUFQuantizationConfig, DiffusionPipeline
import torch

ckpt_id = "Qwen/Qwen-Image"

# Load the GGUF-quantized transformer from this repository.
transformer = QwenImageTransformer2DModel.from_single_file(
    "https://huggingface.co/sayakpaul/qwen-gguf/blob/main/qwen-q4.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
    config=ckpt_id,
    subfolder="transformer",
)

# Build the pipeline, swapping in the quantized transformer for the original one.
pipe = DiffusionPipeline.from_pretrained(
    ckpt_id, transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

prompt = "stock photo of two people, a man and a woman, wearing lab coats writing on a white board with markers, the white board has text that reads 'The Diffusers library by Hugging Face makes it easy for developers to run image generation and inference using state-of-the-art diffusion models with just a few lines of code' with sloppy writing and traces clearly made by a human. The photo is taken from the side and has depth of field so some parts of the board looks blurred giving it a more professional look"

image = pipe(
    prompt=prompt,
    negative_prompt=" ",  # no negative prompt
    width=1024,
    height=1024,
    num_inference_steps=25,
    true_cfg_scale=4.0,
    generator=torch.manual_seed(0),
).images[0]
image.save("gguf_qwen.png")
```
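
If the full pipeline does not fit in GPU memory even with the quantized transformer, a minimal sketch of an alternative is to replace the `.to("cuda")` call with Diffusers' standard model CPU offloading helper (requires `accelerate`; adjust to your setup):

```python
# Optional: keep only the currently active model on the GPU to lower peak VRAM usage.
# This replaces the `.to("cuda")` call above.
pipe = DiffusionPipeline.from_pretrained(
    ckpt_id, transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
```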

Make sure you have Diffusers installed from the `main` branch.
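
One way to install from `main` (assuming a standard pip setup):

```bash
pip install git+https://github.com/huggingface/diffusers
```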