# Bielik-11B-v3.0-Instruct MLX (MXFP4)

MLX MXFP4 (mixed-precision 4-bit) quantized version of speakleash/Bielik-11B-v3.0-Instruct for Apple Silicon.
## Model Details
| Property | Value |
|---|---|
| Original Model | speakleash/Bielik-11B-v3.0-Instruct |
| Format | MLX MXFP4 (4.25 bits/weight) |
| Size | ~5.5 GB |
| Peak Memory | ~6.0 GB |
| Generation Speed | ~28 tok/s (M3 Ultra) |
Smallest variant: ideal for Macs with limited memory (8-16 GB).
## Other Quantizations
| Variant | Size | Memory | Link |
|---|---|---|---|
| bf16 | 22 GB | 22.4 GB | LibraxisAI/Bielik-11B-v3.0-mlx-bf16 |
| q8 | 11 GB | 11.9 GB | LibraxisAI/Bielik-11B-v3.0-mlx-q8 |
| q5 | 7.2 GB | 7.8 GB | LibraxisAI/Bielik-11B-v3.0-mlx-q5 |
| **mxfp4 (this model)** | 5.5 GB | 6.0 GB | - |
| q4 | 5.9 GB | 6.4 GB | LibraxisAI/Bielik-11B-v3.0-mlx-q4 |
## Usage

Install mlx-lm:

```bash
pip install mlx-lm
```

Generate from the command line:

```bash
mlx_lm.generate --model LibraxisAI/Bielik-11B-v3.0-mlx-mxfp4 --prompt "Cześć, jak się masz?"
```

Or start an interactive chat:

```bash
mlx_lm.chat --model LibraxisAI/Bielik-11B-v3.0-mlx-mxfp4
```

From Python:

```python
from mlx_lm import load, generate

model, tokenizer = load("LibraxisAI/Bielik-11B-v3.0-mlx-mxfp4")
response = generate(model, tokenizer, prompt="Wyjaśnij, czym jest sztuczna inteligencja.", max_tokens=256)
print(response)
```
## About Bielik
Bielik is a Polish language model developed by SpeakLeash. This conversion enables native execution on Apple Silicon Macs using the MLX framework.
## License

Apache 2.0; see the original model for terms.

Converted by LibraxisAI using mlx-lm.
## Model Tree

| Relation | Model |
|---|---|
| Base model | speakleash/Bielik-11B-v3-Base-20250730 |
| Finetuned (source of this conversion) | speakleash/Bielik-11B-v3.0-Instruct |