Instructions for using mlx-community/gemma-4-e2b-5bit with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use mlx-community/gemma-4-e2b-5bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir gemma-4-e2b-5bit mlx-community/gemma-4-e2b-5bit
```
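The download step above only fetches the weights. To run the model, the usual route is the `mlx-lm` package (`pip install mlx-lm`). A minimal sketch, assuming `mlx_lm`'s standard `load`/`generate` API and Apple-silicon hardware; the prompt is illustrative:

```python
# Requires Apple silicon and `pip install mlx-lm`
from mlx_lm import load, generate

# Load the quantized model and tokenizer from the Hub (or the local dir above)
model, tokenizer = load("mlx-community/gemma-4-e2b-5bit")

# Generate a short completion
prompt = "Write a haiku about autumn."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```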
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
Generation config:

```json
{
  "bos_token_id": 2,
  "do_sample": true,
  "eos_token_id": 1,
  "pad_token_id": 0,
  "temperature": 1.0,
  "top_k": 64,
  "top_p": 0.95,
  "transformers_version": "5.5.0.dev0"
}
```