---
library_name: pytorch
license: other
tags:
- foundation
- android
pipeline_tag: automatic-speech-recognition
---

# Whisper-Small: Optimized for Qualcomm Devices
The HuggingFace Whisper-Small ASR (Automatic Speech Recognition) model is a state-of-the-art system for transcribing spoken language into written text. It is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers. It performs robustly in realistic, noisy environments, making it reliable for real-world applications, and excels at long-form transcription, accurately transcribing audio clips up to 30 seconds long. Time to first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the maximum decoded sequence length specified below.
This is based on the implementation of Whisper-Small found [here](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper).
This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/whisper_small) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
## Getting Started
There are two ways to deploy this model on your device:
### Option 1: Download Pre-Exported Models
Below are pre-exported model assets ready for deployment.
| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_snapdragon_x_elite.zip)
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_snapdragon_8gen3.zip)
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_qcs8550_proxy.zip)
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_snapdragon_8_elite_for_galaxy.zip)
| PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_snapdragon_8_elite_gen5.zip)
| PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-precompiled_qnn_onnx-float-qualcomm_qcs9075.zip)
| QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_snapdragon_x_elite.zip)
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_snapdragon_8gen3.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_qcs8275_proxy.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_qcs8550_proxy.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_sa8775p.zip)
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_snapdragon_8_elite_for_galaxy.zip)
| QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_snapdragon_8_elite_gen5.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_sa7255p.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_sa8295p.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_qcs9075.zip)
| QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/whisper_small/releases/v0.46.0/whisper_small-qnn_context_binary-float-qualcomm_qcs8450_proxy.zip)
For more device-specific assets and performance metrics, visit **[Whisper-Small on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/whisper_small)**.
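Once a `PRECOMPILED_QNN_ONNX` asset is downloaded and unzipped, it can be run with ONNX Runtime's QNN execution provider. The sketch below is a minimal, hedged illustration: the file name inside the archive, the `backend_path` value (shown here for Windows on Snapdragon), and the placeholder input are assumptions, so consult the README bundled with each download for the exact usage.
```python
# Minimal sketch: running the downloaded encoder asset with ONNX Runtime's
# QNN execution provider. File names and provider options are assumptions;
# the unzipped archive may organize its assets differently.
import numpy as np
import onnxruntime as ort

encoder = ort.InferenceSession(
    "whisper_small_encoder.onnx",  # hypothetical path inside the unzipped asset
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}, {}],  # HTP (NPU) backend on Windows on Snapdragon
)

# Whisper-Small expects an 80x3000 log-Mel spectrogram (30 seconds of 16 kHz audio).
input_name = encoder.get_inputs()[0].name
features = np.zeros((1, 80, 3000), dtype=np.float32)  # placeholder features
outputs = encoder.run(None, {input_name: features})
print([o.shape for o in outputs])
```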
### Option 2: Export with Custom Configurations
Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/whisper_small) Python library to compile and export the model with your own:
- Custom weights (e.g., fine-tuned checkpoints)
- Custom input shapes
- Target device and runtime configurations
This option is ideal if you need to customize the model beyond the default configuration provided here.
See our repository for [Whisper-Small on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/whisper_small) for usage instructions.
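As a rough illustration of the programmatic export path, the sketch below calls the model's export entry point from the AI Hub Models package. The module path follows the repository layout linked above, but the function name and arguments (`export_model`, `device`, `checkpoint`) are assumptions; defer to the GitHub README for the actual interface.
```python
# Hedged sketch: exporting Whisper-Small through the AI Hub Models export
# entry point. Function name and arguments are assumptions -- check the
# model's README on GitHub for the supported options.
from qai_hub_models.models.whisper_small import export

export.export_model(
    device="Snapdragon X Elite CRD",   # target device name on AI Hub (assumed)
    # checkpoint="path/to/finetuned",  # hypothetical: custom fine-tuned weights
)
```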
## Model Details
**Model Type:** Speech recognition
**Model Stats:**
- Model checkpoint: openai/whisper-small
- Input resolution: 80x3000 (30 seconds of audio)
- Max decoded sequence length: 200 tokens
- Number of parameters (HfWhisperEncoder): 102M
- Model size (HfWhisperEncoder) (float): 391 MB
- Number of parameters (HfWhisperDecoder): 139M
- Model size (HfWhisperDecoder) (float): 533 MB
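The 80x3000 input corresponds to Whisper's log-Mel spectrogram of a 30-second, 16 kHz clip. A minimal sketch of producing such features with the Hugging Face processor follows; how the resulting array is fed to the exported encoder is left out and depends on the runtime you choose above.
```python
# Sketch: producing the 80x3000 log-Mel input from raw audio with the
# Hugging Face Whisper processor for the openai/whisper-small checkpoint.
import numpy as np
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# One second of silence at 16 kHz as placeholder audio; the processor pads
# (or truncates) to exactly 30 seconds, yielding a (1, 80, 3000) array.
audio = np.zeros(16_000, dtype=np.float32)
features = processor(audio, sampling_rate=16_000, return_tensors="np").input_features
print(features.shape)  # (1, 80, 3000)
```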
## Performance Summary
| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit
|---|---|---|---|---|---|---
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 10.402 ms | 286 - 286 MB | NPU
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 10.066 ms | 56 - 64 MB | NPU
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 12.842 ms | 62 - 63 MB | NPU
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 13.707 ms | 60 - 122 MB | NPU
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 8.503 ms | 59 - 70 MB | NPU
| HfWhisperDecoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 7.462 ms | 75 - 86 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 9.994 ms | 60 - 60 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 9.674 ms | 60 - 68 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 18.527 ms | 54 - 63 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 11.89 ms | 29 - 31 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 13.382 ms | 55 - 64 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 13.216 ms | 60 - 128 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 17.65 ms | 39 - 47 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 18.527 ms | 54 - 63 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 15.025 ms | 54 - 60 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 8.262 ms | 0 - 9 MB | NPU
| HfWhisperDecoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 7.206 ms | 60 - 71 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® X Elite | 130.719 ms | 226 - 226 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Gen 3 Mobile | 105.285 ms | 109 - 117 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS8550 (Proxy) | 135.287 ms | 1 - 258 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Qualcomm® QCS9075 | 156.352 ms | 127 - 131 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 79.543 ms | 129 - 141 MB | NPU
| HfWhisperEncoder | PRECOMPILED_QNN_ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 62.455 ms | 126 - 136 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® X Elite | 114.572 ms | 0 - 0 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Gen 3 Mobile | 82.377 ms | 1 - 8 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8275 (Proxy) | 404.688 ms | 0 - 8 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8550 (Proxy) | 113.951 ms | 0 - 4 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8775P | 135.906 ms | 0 - 9 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS9075 | 136.123 ms | 0 - 55 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® QCS8450 (Proxy) | 280.226 ms | 1 - 10 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA7255P | 404.688 ms | 0 - 8 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Qualcomm® SA8295P | 204.47 ms | 1 - 6 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite For Galaxy Mobile | 59.713 ms | 1 - 14 MB | NPU
| HfWhisperEncoder | QNN_CONTEXT_BINARY | float | Snapdragon® 8 Elite Gen 5 Mobile | 45.361 ms | 1 - 10 MB | NPU
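Since time to first token is one encoder pass and each subsequent token costs one decoder pass, a rough upper bound on per-chunk transcription time follows directly from the table and the 200-token decode budget in Model Stats. A back-of-the-envelope example for Snapdragon® 8 Elite Gen 5 with the QNN context binary:
```python
# Back-of-the-envelope latency for one 30-second chunk on Snapdragon 8 Elite
# Gen 5 (QNN context binary), assuming the full 200-token decode budget.
encoder_ms = 45.361  # one encoder pass (time to first token)
decoder_ms = 7.206   # one decoder pass (time per additional token)
max_tokens = 200     # max decoded sequence length from Model Stats

worst_case_ms = encoder_ms + max_tokens * decoder_ms
print(f"~{worst_case_ms / 1000:.2f} s per 30 s chunk")  # ~1.49 s
```
Real transcriptions usually emit far fewer than 200 tokens per chunk, so typical latency is lower than this worst case.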
## License
* The license for the original implementation of Whisper-Small can be found
[here](https://github.com/huggingface/transformers/blob/v4.42.3/LICENSE).
## References
* [Robust Speech Recognition via Large-Scale Weak Supervision](https://cdn.openai.com/papers/whisper.pdf)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/v4.42.3/src/transformers/models/whisper)
## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).