RIFE

This version of RIFE has been converted to run on the Axera NPU using w8a16 quantization.

Compatible with Pulsar2 version: 4.2

Convert tools links:

For those interested in model conversion, you can try exporting the axmodel through

Support Platform

| Chip  | Model | Cost   |
|-------|-------|--------|
| AX650 | RIFE  | 200 ms |
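At roughly 200 ms per inference call, a back-of-the-envelope throughput estimate can be made (a sketch only; the 200 ms figure is per-call NPU latency and excludes video decode/encode):

```python
# Rough throughput estimate from the per-inference cost above.
# Assumes 200 ms per interpolated frame on AX650 (NPU latency only).
cost_ms = 200
frames_per_second = 1000 / cost_ms  # interpolated frames generated per second
print(frames_per_second)  # 5.0
```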

How to use

Download all files from this repository to the device


root@ax650:~/rife# tree
.
|-- model
|   `-- rife_x2_720p.axmodel
|-- video
|   `-- demo.mp4
|-- run_axmodel.py
|-- ms_ssim.py
|-- build_config.json
`-- requirements.txt
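Before running inference, you can sanity-check that the files above are in place (a small helper sketch; `check_layout` is not part of the repository):

```python
from pathlib import Path

# Expected files relative to the repo root, matching the tree above.
EXPECTED = [
    "model/rife_x2_720p.axmodel",
    "video/demo.mp4",
    "run_axmodel.py",
    "ms_ssim.py",
    "build_config.json",
    "requirements.txt",
]

def check_layout(root="."):
    """Return the list of expected files that are missing under `root`."""
    root = Path(root)
    return [rel for rel in EXPECTED if not (root / rel).exists()]

if __name__ == "__main__":
    missing = check_layout()
    print("missing:", missing or "none")
```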


Inference

Input Data:
|-- video
|   `-- demo.mp4

Inference on an AX650 host, such as the M4N-Dock (AXera-Pi Pro)

root@ax650:~/rife# python3 run_axmodel.py --model ./model/rife_x2_720p.axmodel --video ./video/demo.mp4
[INFO] Available providers:  ['AxEngineExecutionProvider']
[INFO] Using provider: AxEngineExecutionProvider
[INFO] Chip type: ChipType.MC50
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Engine version: 2.12.0s
[INFO] Model type: 2 (triple core)
[INFO] Compiler version: 4.2 77cdc0c2
input name: onnx::Slice_0
demo.mp4, 128.0 frames in total, 25.0FPS to 50.0FPS
The audio will be merged after interpolation process
 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹| 127/128.0 [01:38<00:00,  1.29it/s]
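RIFE takes two consecutive frames and predicts the intermediate one, which is why 25 FPS becomes 50 FPS in the log above. A minimal sketch of the input packing (assumptions: the 720p model expects a 1x6x720x1280 float32 tensor, two RGB frames concatenated channel-wise and scaled to [0, 1]; the actual preprocessing lives in run_axmodel.py):

```python
import numpy as np

def pack_frames(frame0, frame1):
    """Concatenate two HxWx3 uint8 frames into a 1x6xHxW float32 tensor.

    Assumption: the exported rife_x2_720p.axmodel takes both input frames
    stacked along the channel axis, normalized to [0, 1].
    """
    pair = np.concatenate([frame0, frame1], axis=2)  # H x W x 6
    pair = pair.astype(np.float32) / 255.0           # scale to [0, 1]
    return pair.transpose(2, 0, 1)[np.newaxis]       # 1 x 6 x H x W

# Example with dummy 720p frames:
f0 = np.zeros((720, 1280, 3), dtype=np.uint8)
f1 = np.full((720, 1280, 3), 255, dtype=np.uint8)
x = pack_frames(f0, f1)
print(x.shape, x.dtype)  # (1, 6, 720, 1280) float32
```

The resulting tensor would then be passed to the runtime session under the input name shown in the log (`onnx::Slice_0`); the exact call depends on the axengine Python API.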

Output:
