Whisper-Large-V3-Turbo: Optimized for Qualcomm Devices
Whisper large-v3-turbo is a fine-tuned version of a pruned Whisper large-v3. In other words, it is the same model, except that the number of decoding layers has been reduced from 32 to 4. As a result, the model is significantly faster, at the cost of a minor quality degradation. This model is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. In particular, it excels at long-form transcription and can accurately transcribe audio clips up to 30 seconds long. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the maximum decoded sequence length specified below.
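As a rough rule of thumb, end-to-end latency for one 30-second chunk is approximately the encoder latency plus the number of decoded tokens times the decoder latency. The sketch below illustrates this arithmetic with figures taken from the performance summary further down this page (Snapdragon® 8 Elite Gen 5, QNN context binary); the token counts are illustrative and the numbers should be swapped for your target device's.

```python
# Rough end-to-end latency estimate for one 30-second chunk:
# time-to-first-token is one encoder pass, and each additional token
# costs one decoder pass. Figures are from the performance summary below
# (Snapdragon 8 Elite Gen 5, QNN context binary).

ENCODER_MS = 260.3   # single encoder pass over 30 s of audio
DECODER_MS = 6.125   # single autoregressive decoder step
MAX_TOKENS = 200     # max decoded sequence length (see Model Stats)

def estimated_latency_ms(num_tokens: int) -> float:
    """Approximate wall-clock time to transcribe one 30 s audio chunk."""
    return ENCODER_MS + min(num_tokens, MAX_TOKENS) * DECODER_MS

for n in (50, 100, 200):
    print(f"{n:>3} tokens: ~{estimated_latency_ms(n) / 1000:.2f} s")
```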
This model is based on the implementation of Whisper-Large-V3-Turbo found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.
Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
Getting Started
There are two ways to deploy this model on your device:
Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | QAIRT 2.37, ONNX Runtime 1.23.0 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | QAIRT 2.42 | Download |
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | QAIRT 2.42 | Download |
For more device-specific assets and performance metrics, visit Whisper-Large-V3-Turbo on Qualcomm® AI Hub.
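As orientation only, the snippet below sketches how a downloaded PRECOMPILED_QNN_ONNX encoder asset might be loaded with ONNX Runtime's QNN execution provider. The file name, input handling, and backend library path are assumptions; consult the downloaded asset and the ONNX Runtime QNN execution provider documentation for the exact requirements.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file name for a downloaded encoder asset; use the actual
# name from the archive you downloaded.
ENCODER_PATH = "whisper_large_v3_turbo_encoder.onnx"

# Run on the Qualcomm NPU via the QNN execution provider, falling back to CPU.
# On Windows the HTP backend is QnnHtp.dll; on Android/Linux use libQnnHtp.so.
session = ort.InferenceSession(
    ENCODER_PATH,
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}, {}],
)

# 30 s of audio as a 128x3000 log-mel spectrogram (see Model Stats below);
# zeros stand in for real features here.
mel = np.zeros((1, 128, 3000), dtype=np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: mel})
print(outputs[0].shape)
```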
Option 2: Export with Custom Configurations
Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for Whisper-Large-V3-Turbo on GitHub for usage instructions.
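For orientation, the following sketch shows the general Qualcomm® AI Hub Workbench flow (compile, then profile on a hosted device) using the qai_hub Python client. The model file, device name, input shape, and runtime option are placeholders, not a verified recipe for this model; the Qualcomm® AI Hub Models repository linked above provides ready-made export scripts.

```python
import qai_hub as hub
import torch

# Placeholder: a TorchScript trace of your own (e.g., fine-tuned) encoder.
traced_encoder = torch.jit.load("my_finetuned_whisper_encoder.pt")

device = hub.Device("Snapdragon X Elite CRD")  # example device name

# Compile for the target device and runtime (options shown are examples).
compile_job = hub.submit_compile_job(
    model=traced_encoder,
    device=device,
    input_specs=dict(input_features=((1, 128, 3000), "float32")),
    options="--target_runtime qnn_context_binary",
)

# Profile the compiled asset on a hosted device to measure on-target latency.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.download_profile())
```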
Model Details
Model Type: Speech recognition
Model Stats:
- Model checkpoint: openai/whisper-large-v3-turbo
- Input resolution: 128x3000 (30 seconds of audio; see the feature-extraction sketch below)
- Max decoded sequence length: 200 tokens
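The 128x3000 input is a 128-bin log-mel spectrogram covering 30 seconds of 16 kHz audio. A minimal sketch of producing such an input with the Hugging Face feature extractor for the checkpoint listed above (the transformers dependency and the placeholder waveform are assumptions, not part of this repository):

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Feature extractor matching the openai/whisper-large-v3-turbo checkpoint.
feature_extractor = WhisperFeatureExtractor.from_pretrained(
    "openai/whisper-large-v3-turbo"
)

# Placeholder audio: 30 s of silence at 16 kHz; replace with a real waveform.
waveform = np.zeros(16000 * 30, dtype=np.float32)

features = feature_extractor(waveform, sampling_rate=16000, return_tensors="np")
print(features.input_features.shape)  # (1, 128, 3000)
```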
Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 8.32 ms | 400 - 400 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 7.965 ms | 43 - 54 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 10.247 ms | 34 - 36 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 10.575 ms | 33 - 69 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.988 ms | 22 - 34 MB | NPU |
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.367 ms | 42 - 53 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 8.194 ms | 33 - 33 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 7.652 ms | 33 - 41 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 9.777 ms | 34 - 35 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 26.37 ms | 30 - 37 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 10.152 ms | 33 - 72 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 16.493 ms | 33 - 42 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 11.852 ms | 28 - 33 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.622 ms | 4 - 13 MB | NPU |
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.125 ms | 33 - 42 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 771.515 ms | 1395 - 1395 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 599.236 ms | 33 - 45 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 808.579 ms | 0 - 1538 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 912.364 ms | 1 - 4 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 471.772 ms | 63 - 75 MB | NPU |
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 378.435 ms | 59 - 68 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 580.506 ms | 1 - 1 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 418.61 ms | 3 - 10 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 579.628 ms | 1 - 3 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 707.025 ms | 1 - 8 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 699.41 ms | 1 - 32 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 1315.909 ms | 1 - 12 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 814.666 ms | 1 - 10 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 309.522 ms | 1 - 14 MB | NPU |
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 260.3 ms | 1 - 10 MB | NPU |
License
- The license for the original implementation of Whisper-Large-V3-Turbo can be found here.
Community
- Join our AI Hub Slack community to collaborate, post questions, and learn more about on-device AI.
- For questions or feedback, please reach out to us.
