# 🎬 Wan2.1 Distilled Models

**⚡ High-Performance Video Generation with 4-Step Inference**

Distillation-accelerated versions of Wan2.1: dramatically faster while maintaining exceptional quality.

## What's Special?

## 📦 Model Catalog

### 🔥 Model Types

### 🎯 Precision Variants
| Precision | Model Identifier | Model Size | Framework | Quality vs Speed |
|---|---|---|---|---|
| BF16 | `lightx2v_4step` | ~28-32 GB | LightX2V | ⭐⭐⭐⭐⭐ Highest quality |
| ⚡ FP8 | `scaled_fp8_e4m3_lightx2v_4step` | ~15-17 GB | LightX2V | ⭐⭐⭐⭐ Excellent balance |
| 🎯 INT8 | `int8_lightx2v_4step` | ~15-17 GB | LightX2V | ⭐⭐⭐⭐ Fast & efficient |
| 🔷 FP8 ComfyUI | `scaled_fp8_e4m3_lightx2v_4step_comfyui` | ~15-17 GB | ComfyUI | ⭐⭐⭐ ComfyUI ready |
### Naming Convention

```
# Pattern: wan2.1_{task}_{resolution}_{precision}.safetensors
# Examples:
wan2.1_i2v_720p_lightx2v_4step.safetensors                         # 720P I2V - BF16
wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors         # 720P I2V - FP8
wan2.1_i2v_480p_int8_lightx2v_4step.safetensors                    # 480P I2V - INT8
wan2.1_t2v_14b_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors  # T2V - FP8 ComfyUI
```
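Because the pattern is mechanical, a variant's filename can be assembled from its parts. A minimal shell sketch (the variable values are just examples, not an exhaustive list of valid combinations):

```bash
# Assemble a model filename from the naming pattern above.
TASK="i2v"                                  # i2v or t2v
RESOLUTION="720p"                           # e.g. 480p or 720p (the T2V file uses 14b here)
PRECISION="scaled_fp8_e4m3_lightx2v_4step"  # see the precision table
FILE="wan2.1_${TASK}_${RESOLUTION}_${PRECISION}.safetensors"
echo "${FILE}"  # wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors
```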
💡 **Explore all models:** Browse the Full Model Collection →
## Usage

LightX2V is a high-performance inference framework optimized for these models; it is approximately 2x faster than ComfyUI with better quantization accuracy, and is highly recommended.

### Quick Start
**1. Download the model** (720P I2V FP8 example)

```bash
huggingface-cli download lightx2v/Wan2.1-Distill-Models \
  --local-dir ./models/wan2.1_i2v_720p \
  --include "wan2.1_i2v_720p_scaled_fp8_e4m3_lightx2v_4step.safetensors"
```
**2. Clone the LightX2V repository**

```bash
git clone https://github.com/ModelTC/LightX2V.git
cd LightX2V
```
**3. Install dependencies**

```bash
pip install -r requirements.txt
```

Alternatively, refer to the Quick Start documentation to run inside Docker (a sketch follows).
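A minimal sketch of the Docker route, assuming a published image; the image name and tag here are hypothetical, so check the Quick Start documentation for the real ones before pulling:

```bash
# ASSUMPTION: "lightx2v/lightx2v:latest" is a placeholder image name.
docker pull lightx2v/lightx2v:latest
# Expose the GPUs and mount the local model directory into the container.
docker run --gpus all -it \
  -v "$(pwd)/models:/workspace/models" \
  lightx2v/lightx2v:latest
```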
**4. Select and modify a configuration file**

Choose the appropriate configuration based on your GPU memory (see the sketch after this list):

- For 80 GB+ GPUs (A100/H100)
- For 24 GB+ GPUs (RTX 4090)
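The exact config filenames are listed in the Configuration Files documentation; the paths below are hypothetical placeholders meant only to show the idea that lower-memory GPUs pick a config with memory optimizations such as CPU offload enabled (see the Offload Documentation):

```bash
# ASSUMPTION: illustrative paths only -- look up the real filenames under
# LightX2V's configs/ directory before running.
CONFIG=configs/wan_i2v_distill_4step.json            # 80 GB+ GPU: everything on-device
# CONFIG=configs/wan_i2v_distill_4step_offload.json  # 24 GB+ GPU: CPU offload enabled
echo "Using config: ${CONFIG}"
```

How the run script consumes the chosen config depends on the script itself, so check it before editing.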
**5. Run inference**

```bash
cd scripts
bash wan/run_wan_i2v_distill_4step_cfg.sh
```
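The script name above comes from the repository, but such scripts generally expect local paths to be set first. A hedged example of a typical invocation; `CUDA_VISIBLE_DEVICES` is a standard CUDA variable, everything else should be verified against the script:

```bash
# Pin the job to a single GPU (standard CUDA environment variable).
export CUDA_VISIBLE_DEVICES=0
# ASSUMPTION: the script defines model/config path variables near its top;
# open it and point them at your local directories before the first run.
bash wan/run_wan_i2v_distill_4step_cfg.sh
```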
### Documentation
- Quick Start Guide: LightX2V Quick Start
- Complete Usage Guide: LightX2V Model Structure Documentation
- Configuration Guide: Configuration Files
- Quantization Usage: Quantization Documentation
- Parameter Offload: Offload Documentation
### Performance Advantages

- ⚡ **Fast**: Approximately 2x faster than ComfyUI
- 🎯 **Optimized**: Deeply optimized for distilled models
- 💾 **Memory Efficient**: Supports CPU offload and other memory optimization techniques
- 🛠️ **Flexible**: Supports multiple quantization formats and configuration options

## Community
## ⚠️ Important Notes

**Additional components**: these models contain only the DiT weights. You also need:

- T5 text encoder
- CLIP vision encoder
- VAE encoder/decoder
- Tokenizers

Refer to the LightX2V documentation for how to organize the complete model directory.
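As one hedged example, the missing components ship with the original Wan-AI releases (this repo's base model is Wan-AI/Wan2.1-I2V-14B-480P). The exclude pattern below is only a guess at skipping the large DiT shards, so verify the repo's file list first:

```bash
# ASSUMPTION: encoders/VAE/tokenizers are pulled from the base Wan-AI repo;
# see the LightX2V model-structure docs for the expected directory layout.
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P \
  --local-dir ./models/wan2.1_i2v_480p \
  --exclude "*.safetensors"
```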
If you find this project helpful, please give us a ⭐ on GitHub!