upload int8 onnx model
Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
- README.md +26 -3
- model.onnx +3 -0
README.md CHANGED

````diff
@@ -29,7 +29,9 @@ model_index:
 ---
 # INT8 albert-base-v2-sst2
 
-##
+## Post-training static quantization
+
+### PyTorch
 
 This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
 
@@ -39,14 +41,14 @@ The calibration dataloader is the train dataloader. The default calibration samp
 
 The linear modules **albert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.module, albert.encoder.albert_layer_groups.0.albert_layers.0.ffn.module** fall back to fp32 to meet the 1% relative accuracy loss.
 
-### Test result
+#### Test result
 
 | |INT8|FP32|
 |---|:---:|:---:|
 | **Accuracy (eval-accuracy)** |0.9255|0.9232|
 | **Model size (MB)** |25|44.6|
 
-### Load with Intel® Neural Compressor:
+#### Load with Intel® Neural Compressor:
 
 ```python
 from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassification
@@ -54,3 +56,24 @@ from optimum.intel.neural_compressor import IncQuantizedModelForSequenceClassifi
 model_id = "Intel/albert-base-v2-sst2-int8-static"
 int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(model_id)
 ```
+
+### ONNX
+
+This is an INT8 ONNX model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
+
+The original fp32 model comes from the fine-tuned model [Alireza1044/albert-base-v2-sst2](https://huggingface.co/Alireza1044/albert-base-v2-sst2).
+
+#### Test result
+
+| |INT8|FP32|
+|---|:---:|:---:|
+| **Accuracy (eval-f1)** |0.9186|0.9232|
+| **Model size (MB)** |89|45|
+
+
+#### Load ONNX model:
+
+```python
+from optimum.onnxruntime import ORTModelForSequenceClassification
+model = ORTModelForSequenceClassification.from_pretrained('Intel/albert-base-v2-sst2-int8-static')
+```
````
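The card names the calibration setup (the train dataloader, default sampling size 100) and the two fp32 fallback modules, but not the quantization recipe itself. Below is a minimal sketch of how such a recipe could look with Intel® Neural Compressor's `PostTrainingQuantConfig` API; only the op names and the sampling size come from the card, while `model` and `train_dataloader` are assumed stand-ins for the fine-tuned fp32 model and its calibration dataloader.

```python
# Hypothetical sketch of the post-training static quantization recipe the
# card describes; not the authors' actual script.
from neural_compressor import PostTrainingQuantConfig, quantization

# Force the two linear modules named in the card back to fp32 so the
# relative accuracy loss stays within 1%.
fp32_ops = {
    "albert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.module": {
        "activation": {"dtype": ["fp32"]},
        "weight": {"dtype": ["fp32"]},
    },
    "albert.encoder.albert_layer_groups.0.albert_layers.0.ffn.module": {
        "activation": {"dtype": ["fp32"]},
        "weight": {"dtype": ["fp32"]},
    },
}

conf = PostTrainingQuantConfig(
    approach="static",               # post-training static quantization
    calibration_sampling_size=100,   # default sampling size mentioned in the card
    op_name_dict=fp32_ops,
)

# `model` and `train_dataloader` are assumptions: the fine-tuned fp32 ALBERT
# model and the train dataloader used for calibration.
int8_model = quantization.fit(model, conf, calib_dataloader=train_dataloader)
```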
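Both load snippets in the card stop right after `from_pretrained`. For context, a short usage sketch for the ONNX path, assuming the repository also ships tokenizer files and a standard SST-2 `id2label` mapping (neither is shown in this commit); the input sentence is an illustrative example, not from the card:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "Intel/albert-base-v2-sst2-int8-static"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files exist in the repo
model = ORTModelForSequenceClassification.from_pretrained(model_id)

# Classify one SST-2-style sentence (example input, not from the card).
inputs = tokenizer("a charming and often affecting journey", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # e.g. "positive", if id2label is set
```

The PyTorch model loaded via `IncQuantizedModelForSequenceClassification` can be driven the same way once `int8_model` is in hand.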
model.onnx ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df3f220d199dd52af87203ab564efe7584c6fd514fef93d1d5b3772c47c14145
+size 92596546
```
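The three added lines are a Git LFS pointer (spec version, SHA-256 object id, byte size), not the ONNX weights themselves, which is why the diff for a 92,596,546-byte file is only three lines. To fetch the actual payload, something like the standard `huggingface_hub` helper should work:

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the real ~92.6 MB ONNX file
# into the local Hugging Face cache, returning its path.
path = hf_hub_download(
    repo_id="Intel/albert-base-v2-sst2-int8-static",
    filename="model.onnx",
)
print(path)
```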