Llama.cpp hybrid layer quantization of Qwen3-4B-Thinking-2507 by Qwen

Original model: https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507

The hybrid quant employs different quantization levels on a per-layer basis to increase flexibility in trading off performance against file size. Fewer parameter bits are used in the deep layers and more bits in the cortex layers to simultaneously optimize quantized size and model performance. These quants were tuned specifically for the Qwen3 4B Thinking 2507 edge model to match the file size of the Q6_K quant while achieving a higher success rate on a set of curated test prompts.

The layer quants are as follows:

   Q5_K_L : Q5_K_M + attn_o = Q6_K
   Q6_K_S : Q6_K
   Q6_K_M : Q6_K_S + attn_v = Q8_0, ffn_d = Q8_0
   Q6_K_L : Q6_K_M + attn_o = Q8_0

   LAYER_TYPES='[
   [0 ,"Q6_K_M"],[1 ,"Q6_K_S"],[2 ,"Q6_K_S"],[3 ,"Q6_K_S"],[4 ,"Q5_K_L"],[5 ,"Q5_K_M"],
   [6 ,"Q5_K_M"],[7 ,"Q5_K_M"],[8, "Q5_K_M"],[9, "Q5_K_L"],[10,"Q5_K_M"],[11,"Q5_K_M"],
   [12,"Q5_K_L"],[13,"Q5_K_L"],[14,"Q5_K_L"],[15,"Q5_K_L"],[16,"Q5_K_L"],[17,"Q5_K_L"],
   [18,"Q6_K_S"],[19,"Q6_K_S"],[20,"Q6_K_S"],[21,"Q6_K_S"],[22,"Q6_K_S"],[23,"Q6_K_S"],
   [24,"Q6_K_M"],[25,"Q6_K_M"],[26,"Q6_K_M"],[27,"Q6_K_M"],[28,"Q6_K_M"],[29,"Q6_K_M"],
   [30,"Q6_K_L"],[31,"Q6_K_M"],[32,"Q6_K_L"],[33,"Q6_K_L"],[34,"Q6_K_L"],[35,"Q8_0"  ]
   ]'

   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K --layer-types-high"
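
For reference, a minimal sketch of how these settings might be passed to llama-quantize is shown below. The --token-embedding-type and --output-tensor-type flags exist in the stock tool; the --layer-types-high flag and the LAYER_TYPES list are handled by the hybrid layer quant patch discussed in the llama.cpp thread linked at the end of this card. Passing LAYER_TYPES through the environment and the BF16 input filename are assumptions made for illustration only.

   # sketch only: assumes the hybrid layer quant patch from the llama.cpp
   # discussion linked below; passing LAYER_TYPES via the environment is an
   # assumption, and the input GGUF filename is illustrative
   export LAYER_TYPES
   ./llama-quantize $FLAGS \
       Qwen3-4B-Thinking-2507.BF16.gguf \
       Qwen3-4B-Thinking-2507.Q6_K_H.gguf \
       Q6_K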

These quants were optimized for accurate reasoning performance across a set of curated test prompts. The model achieved 100% success on the test set, but it does exhibit the overthinking typical of most recent RL models. This is by far the strongest RL model for its size I have seen to date, accurately handling problems that even some 30G to 110G RL models get wrong.

Long context test:

Using a Q8_0 KV cache quant on a 4070, this model correctly solved the long context problem: https://huggingface.co/steampunque/Qwen3-8B-Hybrid-GGUF/raw/main/Qwen3_Runescape_Massive_Prompt_85k.txt

This is close to the largest prompt the model can process on a 4070 without running out of memory. Even though the KV cache is sized for ~106k tokens, VRAM usage grows dynamically as token processing progresses, and with full offload on the 4070 it will eventually crash with an OOM not far beyond an 85k token prompt.
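
As a rough guide, the test above corresponds to a llama.cpp server invocation along the following lines. This is a sketch, not the exact command used: the context size is approximate, flag spellings can vary between builds, and a quantized V cache requires flash attention to be enabled.

   # sketch: full offload on a 4070 with Q8_0 KV cache and ~106k context
   ./llama-server -m Qwen3-4B-Thinking-2507.Q6_K_H.gguf \
       -c 106496 -ngl 99 -fa \
       -ctk q8_0 -ctv q8_0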

Comparison:

   Quant    Size    PPL    Comment
   Q6_K     3.3e9   11.6   default embed and output
   Q6_K_H   3.3e9   11.7   improved performance over Q6_K
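
PPL numbers of this kind are typically measured with llama.cpp's perplexity tool; a sketch is below. The evaluation text used for the table above is not specified here, so wiki.test.raw is only a placeholder.

   # sketch: measure perplexity of the hybrid quant against a text file
   # (the corpus used for the table above is not specified; wiki.test.raw
   # is a placeholder)
   ./llama-perplexity -m Qwen3-4B-Thinking-2507.Q6_K_H.gguf \
       -f wiki.test.raw -ngl 99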

Evals of the model are available at https://huggingface.co/spaces/steampunque/benchlm

Download the file from below:

   Link                                 Type     Size (bytes)   Notes
   Qwen3-4B-Thinking-2507.Q6_K_H.gguf   Q6_K_H   3.3e9          Q6_K size with improved reasoning

A discussion thread about the hybrid layer quant approach can be found on the llama.cpp GitHub repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
