This model was converted and quantized with Quanta (A Hugston production). The weights were converted to F32 first.
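Quanta's own command line is not documented in this card, so purely as an illustrative assumption, here is a rough sketch of the equivalent two-step flow with stock llama.cpp tooling (convert the Hugging Face weights to an F32 GGUF first, then quantize). The directory and file names are hypothetical.

```python
# Assumed sketch of an F32-convert-then-quantize flow using llama.cpp tools.
# NOTE: the card states the actual conversion was done with Quanta; this only
# illustrates the general idea. Paths and file names are hypothetical.
import subprocess

HF_MODEL_DIR = "qwen3.5-0.8b-abliterated-alllayers"   # hypothetical local checkout
F32_GGUF = "qwen3.5-0.8b-abliterated-f32.gguf"        # hypothetical F32 output

# Step 1: convert the Hugging Face weights to a full-precision F32 GGUF.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outtype", "f32", "--outfile", F32_GGUF],
    check=True,
)

# Step 2: quantize the F32 GGUF down to Q6_K (the quant this card recommends).
subprocess.run(
    ["llama-quantize", F32_GGUF,
     "qwen3.5-0.8b-abliterated-q6_k.gguf", "Q6_K"],
    check=True,
)
```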

The original abliterated model, by its author: https://huggingface.co/amkkk/qwen3.5-0.8b-abliterated-alllayers.

The model is for testing purposes only; use it at your own responsibility and discretion.

Keep away from children.

Tested on general abliteration-type queries (it seems to retain accuracy; see the example in the screenshot).

This model can be used with HugstonOne.

Accuracy is better at Q6_K.
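
Besides HugstonOne, any llama.cpp-based runtime can load the GGUF files. Below is a minimal sketch with llama-cpp-python, assuming a hypothetical Q6_K file name and a llama.cpp build recent enough to support the qwen35 architecture.

```python
# Minimal sketch: load the Q6_K GGUF with llama-cpp-python and run one chat turn.
# The file name is hypothetical; adjust it to the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-0.8b-abliterated-alllayers-q6_k.gguf",  # assumed file name
    n_ctx=4096,  # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what abliteration does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Q6_K is the quant this card recommends for accuracy; the smaller quants trade some accuracy for lower memory use.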

(Screenshot: example abliteration query, referenced above.)

Format: GGUF
Model size: 0.8B params
Architecture: qwen35

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
