---
library_name: transformers
tags:
- phi-3
- phi-3-medium
- phi-3-medium-4k-instruct
- conversational
- text-generation-inference
pipeline_tag: text-generation
language:
- en
---

Official quantization of [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) using [PV-Tuning](https://arxiv.org/abs/2405.14852) on top of [AQLM](https://arxiv.org/abs/2401.06118).

For this quantization, we used 1 codebook of 16 bits for groups of 8 weights, i.e. 16/8 = 2 bits per weight plus a small constant overhead for the codebook itself, which is where the "2-bit" label below comes from.

Results:

| Model | Quantization | WikiText-2 PPL | C4 PPL | Model size, GB |
|------|------|-------|------|------|
| [microsoft/Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) | None | | | 27.9 |
| | [1x16g8 (2-bit, this model)](https://huggingface.co/ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-2Bit-1x16-hf) | 5.18 | 8.56 | 4.2 |
| | [1x16g16 (1-bit)](https://huggingface.co/ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-1Bit-1x16-hf) | 7.42 | 10.40 | 2.7 |

In general, we always recommend the 2-bit models for the best accuracy-size trade-off. If you are tempted to use the 1-bit model, first try a smaller model,
e.g. Phi-3-**mini** quantized with AQLM+PV [(quantized model link)](https://huggingface.co/ISTA-DASLab/Phi-3-mini-4k-instruct-AQLM-PV-2Bit-1x16-hf), and compare the results, or check our [AQLM+PV collection](https://huggingface.co/collections/ISTA-DASLab/aqlmpv-66564dff5d84f00a893ba93f) for a more appropriate size.

To learn more about inference, as well as how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM).
The original code for PV-Tuning can be found in the [AQLM@pv-tuning](https://github.com/Vahe1994/AQLM/tree/pv-tuning) branch.
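
For a quick start, the quantized model can be run with the standard `transformers` generation API once the `aqlm` package is installed (e.g. `pip install aqlm[gpu]`). Below is a minimal inference sketch; it assumes a recent `transformers` release with native Phi-3 support, `accelerate` for `device_map="auto"`, and an available GPU, and the prompt is purely illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# 2-bit AQLM-PV checkpoint from the table above
model_id = "ISTA-DASLab/Phi-3-medium-4k-instruct-AQLM-PV-2Bit-1x16-hf"

# The AQLM inference kernels are picked up automatically
# once the `aqlm` package is installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the dtype the checkpoint was saved in
    device_map="auto",   # place layers on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Phi-3-medium-4k-instruct is a chat model, so format the prompt
# with its chat template. The prompt itself is just an example.
messages = [{"role": "user", "content": "Explain weight quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For details on supported kernels and setups, see the GitHub repo linked above.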