Update README.md
README.md CHANGED
@@ -8,6 +8,7 @@ tags:
 - text-to-video
 - video-to-video
 - realtime
+library_name: diffusers
 ---
 Krea Realtime 14B is distilled from the [Wan 2.1 14B text-to-video model](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) using Self-Forcing, a technique for converting regular video diffusion models into autoregressive models. It achieves a text-to-video inference speed of **11fps** using 4 inference steps on a single NVIDIA B200 GPU. For more details on our training methodology and sampling innovations, refer to our [technical blog post](https://www.krea.ai/blog/krea-realtime-14b).
 
@@ -173,4 +174,4 @@ for block_idx in range(num_blocks):
     frames.extend(state.values["videos"][0])
 
 export_to_video(frames, "output.mp4", fps=16)
-```
+```
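The substantive change in this commit is the new `library_name: diffusers` key in the model card's YAML frontmatter, which tells the Hugging Face Hub which library to associate with the checkpoint (for example, for the Hub's auto-generated usage snippets). As a rough illustration of what that metadata points tooling at, a generic diffusers load looks like the sketch below; the repository id and dtype are assumptions for illustration, not taken from this commit, and this checkpoint's actual entry point may be a more specialized pipeline (the second hunk's context shows a block-by-block autoregressive sampling loop).

```python
# Minimal sketch, assuming a standard diffusers checkpoint layout.
# The repo id and dtype below are hypothetical, used only for illustration.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "krea/krea-realtime-video",   # hypothetical repository id
    torch_dtype=torch.bfloat16,   # assumed precision
)
pipe.to("cuda")
```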