multimodalart (HF Staff) committed
Commit 149872b (verified) · 1 parent: 5b13273

Update README.md

Files changed (1): README.md (+2 −1)
README.md CHANGED

@@ -8,6 +8,7 @@ tags:
 - text-to-video
 - video-to-video
 - realtime
+library_name: diffusers
 ---
 Krea Realtime 14B is distilled from the [Wan 2.1 14B text-to-video model](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) using Self-Forcing, a technique for converting regular video diffusion models into autoregressive models. It achieves a text-to-video inference speed of **11fps** using 4 inference steps on a single NVIDIA B200 GPU. For more details on our training methodology and sampling innovations, refer to our [technical blog post](https://www.krea.ai/blog/krea-realtime-14b).

@@ -173,4 +174,4 @@ for block_idx in range(num_blocks):
 frames.extend(state.values["videos"][0])
 
 export_to_video(frames, "output.mp4", fps=16)
-```
+```
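The substantive change here is adding a `library_name: diffusers` key to the model card's YAML frontmatter, which tells consumers which library the checkpoint targets. As a minimal, stdlib-only sketch of how such a key can be read back out (`parse_front_matter` is a hypothetical helper, not part of any library, and the README text below is abbreviated):

```python
# Minimal sketch: extract metadata keys from a model card's YAML frontmatter.
# parse_front_matter is a hypothetical helper; real tooling would use a YAML parser.

README = """\
---
tags:
- text-to-video
- video-to-video
- realtime
library_name: diffusers
---
Krea Realtime 14B is distilled from the Wan 2.1 14B text-to-video model.
"""

def parse_front_matter(text):
    """Return key/value pairs and list entries between the '---' markers."""
    lines = text.splitlines()
    assert lines[0] == "---", "frontmatter must start with '---'"
    end = lines.index("---", 1)  # closing marker
    meta, current_key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and current_key is not None:
            meta[current_key].append(line[2:].strip())  # list entry under current key
        else:
            key, _, value = line.partition(":")
            current_key = key.strip()
            value = value.strip()
            meta[current_key] = value if value else []  # empty value starts a list
    return meta

meta = parse_front_matter(README)
print(meta["library_name"])  # diffusers
print(meta["tags"])          # ['text-to-video', 'video-to-video', 'realtime']
```

This mirrors what the diff changes: before the commit, `parse_front_matter` would return no `library_name` entry; after it, the key resolves to `diffusers`.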