---
library_name: pytorch
license: other
tags:
- backbone
- android
pipeline_tag: image-classification
---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/web-assets/model_demo.png)

# VIT: Optimized for Qualcomm Devices

VIT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases. This is based on the implementation of VIT found [here](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py).

This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/vit) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).

Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.

## Getting Started

There are two ways to deploy this model on your device:

### Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-onnx-float.zip) |
| ONNX | w8a16 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-onnx-w8a16.zip) |
| ONNX | w8a8 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-onnx-w8a8.zip) |
| ONNX | w8a8_mixed_int16 | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-onnx-w8a8_mixed_int16.zip) |
| QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-qnn_dlc-float.zip) |
| QNN_DLC | w8a16 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-qnn_dlc-w8a16.zip) |
| QNN_DLC | w8a8 | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-qnn_dlc-w8a8.zip) |
| TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-tflite-float.zip) |
| TFLITE | w8a8 | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/vit/releases/v0.46.0/vit-tflite-w8a8.zip) |

For more device-specific assets and performance metrics, visit **[VIT on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/vit)**.
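
If you download one of the ONNX packages above, a quick way to sanity-check it on a development machine is to run it with ONNX Runtime before deploying on device. The sketch below is not part of the official assets: the `model.onnx` file name and the example image path are assumptions, and the 1x3x224x224 float input with standard ImageNet normalization follows the input resolution listed under Model Details. Check the extracted archive contents and `session.get_inputs()` against your download.

```python
# Minimal sanity check for a downloaded ONNX asset using ONNX Runtime on CPU.
# Assumptions (not taken from this card): the archive extracts to "model.onnx"
# and the model expects a 1x3x224x224 float32 input with ImageNet normalization.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Preprocess: resize to the 224x224 input resolution, normalize with the
# standard ImageNet mean/std, then convert HWC -> NCHW with a batch dimension.
image = Image.open("example.jpg").convert("RGB").resize((224, 224))
x = np.asarray(image, dtype=np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = ((x - mean) / std).transpose(2, 0, 1)[np.newaxis, :]

# Run inference and report the top-1 Imagenet class index.
logits = session.run(None, {input_name: x})[0]
print("Top-1 Imagenet class index:", int(np.argmax(logits)))
```

The TFLITE and QNN_DLC packages target the TensorFlow Lite and QAIRT runtimes, respectively, so they need the corresponding runtime rather than ONNX Runtime.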
### Option 2: Export with Custom Configurations

Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/vit) Python library to compile and export the model with your own:

- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here. See the [VIT repository on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/vit) for usage instructions.
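
As a rough orientation rather than an official recipe, the flow typically looks like the sketch below: install the `qai-hub-models` package, load the VIT source model in Python, and drive compilation through the model's export entry point. The `Model.from_pretrained()` call and the `qai_hub_models.models.vit.export` module path follow the repository layout linked above, but treat them as assumptions and consult the GitHub README of your installed version for the exact API, flags, and supported devices.

```python
# Hedged sketch of the custom-export flow with the qai-hub-models package;
# class and module names follow the repository linked above and should be
# verified against the README of the version you install.
#
# Install first (shell):
#   pip install "qai-hub-models"

import torch
from qai_hub_models.models.vit import Model

# Load the default Imagenet checkpoint (see the package docs for loading
# your own fine-tuned weights instead).
model = Model.from_pretrained()

# Quick local sanity check with a dummy 1x3x224x224 input before exporting.
with torch.no_grad():
    logits = model(torch.rand(1, 3, 224, 224))
print("Output shape:", tuple(logits.shape))

# Compilation, profiling, and on-device evaluation for a specific chipset are
# driven by the model's export entry point (shell; requires an AI Hub account):
#   python -m qai_hub_models.models.vit.export --help
```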
## Model Details

**Model Type:** Image classification

**Model Stats:**

- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 86.6M
- Model size (float): 330 MB
- Model size (w8a16): 86.2 MB
- Model size (w8a8): 83.2 MB

## Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| VIT | ONNX | float | Snapdragon® X Elite | 13.861 ms | 171 - 171 MB | NPU |
| VIT | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 9.059 ms | 1 - 476 MB | NPU |
| VIT | ONNX | float | Qualcomm® QCS8550 (Proxy) | 13.228 ms | 0 - 194 MB | NPU |
| VIT | ONNX | float | Qualcomm® QCS9075 | 17.552 ms | 0 - 4 MB | NPU |
| VIT | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 6.352 ms | 1 - 423 MB | NPU |
| VIT | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 4.841 ms | 0 - 417 MB | NPU |
| VIT | ONNX | w8a16 | Snapdragon® X Elite | 153.036 ms | 71 - 71 MB | NPU |
| VIT | ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 228.999 ms | 61 - 308 MB | NPU |
| VIT | ONNX | w8a16 | Qualcomm® QCS6490 | 1108.299 ms | 46 - 74 MB | CPU |
| VIT | ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 273.019 ms | 56 - 59 MB | NPU |
| VIT | ONNX | w8a16 | Qualcomm® QCS9075 | 244.227 ms | 65 - 67 MB | NPU |
| VIT | ONNX | w8a16 | Qualcomm® QCM6690 | 622.471 ms | 89 - 107 MB | CPU |
| VIT | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 199.067 ms | 61 - 218 MB | NPU |
| VIT | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 593.838 ms | 53 - 68 MB | CPU |
| VIT | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 183.415 ms | 57 - 218 MB | NPU |
| VIT | ONNX | w8a8 | Snapdragon® X Elite | 141.184 ms | 69 - 69 MB | NPU |
| VIT | ONNX | w8a8 | Snapdragon® 8 Gen 3 Mobile | 226.057 ms | 27 - 244 MB | NPU |
| VIT | ONNX | w8a8 | Qualcomm® QCS6490 | 872.087 ms | 66 - 101 MB | CPU |
| VIT | ONNX | w8a8 | Qualcomm® QCS8550 (Proxy) | 258.475 ms | 54 - 57 MB | NPU |
| VIT | ONNX | w8a8 | Qualcomm® QCS9075 | 237.249 ms | 60 - 63 MB | NPU |
| VIT | ONNX | w8a8 | Qualcomm® QCM6690 | 481.693 ms | 36 - 53 MB | CPU |
| VIT | ONNX | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 198.772 ms | 42 - 187 MB | NPU |
| VIT | ONNX | w8a8 | Snapdragon® 7 Gen 4 Mobile | 462.71 ms | 59 - 78 MB | CPU |
| VIT | ONNX | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 180.511 ms | 60 - 211 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Snapdragon® X Elite | 287.294 ms | 137 - 137 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Snapdragon® 8 Gen 3 Mobile | 285.898 ms | 80 - 327 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Qualcomm® QCS6490 | 902.655 ms | 44 - 82 MB | CPU |
| VIT | ONNX | w8a8_mixed_int16 | Qualcomm® QCS8550 (Proxy) | 335.64 ms | 53 - 79 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Qualcomm® QCS9075 | 330.457 ms | 79 - 82 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Qualcomm® QCM6690 | 505.935 ms | 86 - 106 MB | CPU |
| VIT | ONNX | w8a8_mixed_int16 | Snapdragon® 8 Elite For Galaxy Mobile | 246.071 ms | 79 - 239 MB | NPU |
| VIT | ONNX | w8a8_mixed_int16 | Snapdragon® 7 Gen 4 Mobile | 487.306 ms | 52 - 69 MB | CPU |
| VIT | ONNX | w8a8_mixed_int16 | Snapdragon® 8 Elite Gen 5 Mobile | 229.271 ms | 75 - 239 MB | NPU |
| VIT | QNN_DLC | float | Snapdragon® X Elite | 11.827 ms | 1 - 1 MB | NPU |
| VIT | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 7.671 ms | 0 - 362 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 40.093 ms | 1 - 333 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 11.114 ms | 1 - 3 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® SA8775P | 13.774 ms | 1 - 327 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® QCS9075 | 15.514 ms | 1 - 3 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 19.081 ms | 0 - 351 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® SA7255P | 40.093 ms | 1 - 333 MB | NPU |
| VIT | QNN_DLC | float | Qualcomm® SA8295P | 17.032 ms | 1 - 334 MB | NPU |
| VIT | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 5.3 ms | 0 - 339 MB | NPU |
| VIT | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 4.083 ms | 1 - 348 MB | NPU |
| VIT | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 5.901 ms | 0 - 323 MB | NPU |
| VIT | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 627.572 ms | 5 - 49 MB | CPU |
| VIT | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 7.96 ms | 0 - 3 MB | NPU |
| VIT | TFLITE | float | Qualcomm® SA8775P | 11.092 ms | 0 - 294 MB | NPU |
| VIT | TFLITE | float | Qualcomm® QCS9075 | 11.74 ms | 0 - 174 MB | NPU |
| VIT | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 13.901 ms | 0 - 293 MB | NPU |
| VIT | TFLITE | float | Qualcomm® SA7255P | 627.572 ms | 5 - 49 MB | CPU |
| VIT | TFLITE | float | Qualcomm® SA8295P | 13.38 ms | 0 - 268 MB | NPU |
| VIT | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 3.943 ms | 0 - 296 MB | NPU |
| VIT | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.032 ms | 0 - 299 MB | NPU |
| VIT | TFLITE | w8a8 | Snapdragon® 8 Gen 3 Mobile | 4.707 ms | 0 - 189 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCS6490 | 57.7 ms | 1 - 99 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCS8275 (Proxy) | 14.295 ms | 0 - 90 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCS8550 (Proxy) | 6.762 ms | 0 - 3 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® SA8775P | 7.055 ms | 0 - 91 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCS9075 | 7.592 ms | 0 - 89 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCM6690 | 94.904 ms | 1 - 184 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® QCS8450 (Proxy) | 8.851 ms | 0 - 185 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® SA7255P | 14.295 ms | 0 - 90 MB | NPU |
| VIT | TFLITE | w8a8 | Qualcomm® SA8295P | 9.698 ms | 0 - 93 MB | NPU |
| VIT | TFLITE | w8a8 | Snapdragon® 8 Elite For Galaxy Mobile | 3.365 ms | 0 - 90 MB | NPU |
| VIT | TFLITE | w8a8 | Snapdragon® 7 Gen 4 Mobile | 20.194 ms | 1 - 71 MB | NPU |
| VIT | TFLITE | w8a8 | Snapdragon® 8 Elite Gen 5 Mobile | 2.273 ms | 0 - 95 MB | NPU |

## License

* The license for the original implementation of VIT can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).

## References

* [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/vision_transformer.py)

## Community

* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).