# Qwen-3-4B-Compilation-Experimental GGUF
GGUF conversions and quantized builds of [tikeape/Qwen-3-4B-Compilation-Experimental](https://huggingface.co/tikeape/Qwen-3-4B-Compilation-Experimental), generated with llama.cpp.
## Included files
| File | Quantization |
|---|---|
| Qwen-3-4B-Compilation-Experimental-gguf-f16.gguf | f16 |
| Qwen-3-4B-Compilation-Experimental-gguf-q4_0.gguf | q4_0 |
## Notes

- Filenames follow the pattern `base-gguf-<quant>.gguf`.
- Quantizations included: `f16`, `q4_0`.
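These files can be used with any GGUF-compatible runtime. As a sketch, assuming a local llama.cpp build with the `llama-cli` binary on your PATH and the quantized file downloaded to the current directory:

```shell
# Run the 4-bit build with llama.cpp's llama-cli.
# Assumes llama-cli is built and the GGUF file is in the working directory.
llama-cli \
  -m Qwen-3-4B-Compilation-Experimental-gguf-q4_0.gguf \
  -p "Write a hello-world program in C." \
  -n 128   # limit generation to 128 tokens
```

The `q4_0` build trades some accuracy for a much smaller memory footprint than `f16`; substitute the f16 filename above if you have the RAM/VRAM for full-precision weights.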
## Model tree for tikeape/Qwen-3-4B-Compilation-Experimental-GGUF

Base model: tikeape/Qwen-3-4B-Compilation-Experimental