This model was converted and quantized with Quanta (a Hugston production). The weights were first converted to F32 and then quantized.
The original abliterated model: https://huggingface.co/amkkk/qwen3.5-0.8b-abliterated-alllayers.
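Quanta itself is not documented here; as a purely illustrative sketch of the same F32-first pipeline using the standard llama.cpp tools instead (all paths and filenames below are assumptions, not the card's actual commands):

```python
# Illustrative sketch only: the card says Quanta was used; this reproduces the
# same convert-to-F32-then-quantize pipeline with the standard llama.cpp tools.
# MODEL_DIR and the output filenames are hypothetical.
import subprocess

MODEL_DIR = "qwen3.5-0.8b-abliterated-alllayers"  # hypothetical local checkout
F32_GGUF = "model-F32.gguf"

# Step 1: convert the HF checkpoint to a full-precision F32 GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR,
     "--outtype", "f32", "--outfile", F32_GGUF],
    check=True,
)

# Step 2: quantize the F32 GGUF down to Q6_K.
subprocess.run(
    ["llama-quantize", F32_GGUF, "model-Q6_K.gguf", "Q6_K"],
    check=True,
)
```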
The model is for testing purposes only; use it at your own responsibility and discretion.
Keep away from children.
Tested on general abliteration queries; it appears to retain accuracy (see the example in the screenshot).
This model can be used with HugstonOne.
Accuracy is better at Q6_K.
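A minimal loading sketch using llama-cpp-python, assuming the Q6_K file is named qwen3.5-0.8b-abliterated-alllayers-Q6_K.gguf (check the repository's file list for the exact name):

```python
# Minimal sketch: load the Q6_K quant with llama-cpp-python.
# The filename is an assumption; verify it against the repo's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-0.8b-abliterated-alllayers-Q6_K.gguf",
    n_ctx=4096,  # context window size; adjust for your hardware
)

result = llm(
    "Briefly explain what weight quantization does.",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```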
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit.
