Huihui4-8B-A4B is a lightweight MoE (Mixture of Experts) conversational model optimized from Google's gemma-4-26B-A4B-it architecture. Through expert pruning and supervised fine-tuning on high-quality dialogue data, it significantly reduces computational overhead while preserving core reasoning and interaction capabilities. It is designed specifically for deployment on consumer-grade hardware and for code-related conversational tasks.
This model is not an ablation variant.
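For local inference with HuggingFace Transformers, a minimal sketch might look like the following. The repository id is taken from the citation URL at the bottom of this card; the prompt and generation settings are illustrative only, not a recommended configuration.

```python
# Minimal sketch: load the model with HuggingFace Transformers and ask a coding question.
# The repo id below mirrors the citation URL; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Huihui4-8B-A4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```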
Please use the latest version of Ollama. You can use huihui_ai/huihui-4:8b directly:

```
ollama run huihui_ai/huihui-4:8b
```
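If you prefer calling the model from Python rather than the CLI, a minimal sketch using the `ollama` Python package (assuming `pip install ollama`, a running Ollama server, and that the model has already been pulled) could be:

```python
# Minimal sketch using the ollama Python package (pip install ollama).
# Assumes a local Ollama server and that the model is already available,
# e.g. via `ollama run huihui_ai/huihui-4:8b` or `ollama pull huihui_ai/huihui-4:8b`.
import ollama

response = ollama.chat(
    model="huihui_ai/huihui-4:8b",
    messages=[{"role": "user", "content": "Explain what a Mixture of Experts model is in two sentences."}],
)
print(response["message"]["content"])
```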
| Parameter | Value |
|---|---|
| Base Model | google/gemma-4-26B-A4B-it |
| Total MoE Experts | 32 (pruned from the original 128) |
| Active Experts per Token | 8 (maintaining the A4B activation scale) |
| Model Positioning | Lightweight MoE conversational base / Consumer-hardware friendly |
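For intuition, the configuration above corresponds to standard top-k MoE routing: each token is dispatched to 8 of the 32 retained experts. The sketch below is purely illustrative (class names and dimensions are made up) and is not the model's actual gating implementation.

```python
# Illustrative top-k MoE routing: 32 experts, 8 active per token,
# mirroring the table above. Not the model's actual implementation.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, hidden=256, num_experts=32, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(hidden, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, hidden)
        scores = self.gate(x)                           # (num_tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep 8 of the 32 experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):                      # naive per-token dispatch, for clarity only
            for slot in range(self.top_k):
                expert = self.experts[int(idx[t, slot])]
                out[t] += weights[t, slot] * expert(x[t])
        return out

with torch.no_grad():
    print(ToyMoELayer()(torch.randn(4, 256)).shape)     # torch.Size([4, 256])
```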
- Perplexity is evaluated with the calculate_perplexity script.
- Supported inference frameworks: vLLM / llama.cpp / HuggingFace Transformers.
- VRAM: FP16 < 18GB; INT4/INT8 quantized < 6~9GB (compatible with mainstream single consumer GPUs).
- Trained with a "prune → fine-tune → merge" pipeline.
- Part of the Huihui series. Future updates will involve multi-dataset integration and expert merging.

```
python evaluate_perplexity_final.py --model_path ./google/gemma-4-26B-A4B-it
```

Evaluation settings:

- Model Path: ./google/gemma-4-26B-A4B-it
- Eval Samples: 100
- Max Length: 8192
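The evaluation script itself is not reproduced on this card. As a rough sketch of how such a perplexity number can be computed with HuggingFace Transformers (the texts list is a placeholder; the actual run uses 100 samples at a max length of 8192):

```python
# Rough perplexity sketch with HuggingFace Transformers.
# Not the repository's evaluate_perplexity_final.py; the dataset is a
# placeholder and the settings mirror the configuration listed above.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./google/gemma-4-26B-A4B-it"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

texts = ["Replace with your 100 evaluation samples."]
total_loss, total_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=8192).to(model.device)
        out = model(**enc, labels=enc["input_ids"])
        n_tokens = enc["input_ids"].numel()
        total_loss += out.loss.item() * n_tokens
        total_tokens += n_tokens

avg_loss = total_loss / total_tokens
print(f"Average Loss: {avg_loss:.4f}  Perplexity: {math.exp(avg_loss):.4f}")
```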
| Model | Fine-tuning Steps | num_experts | Perplexity | Average Loss |
|---|---|---|---|---|
| gemma-4-26B-A4B-it | 0 | 128 | 1.5964 (+ 0 ) | 0.4678 (+ 0 ) |
| gemma-4-26B-A4B-it-Pruned-32 | 0 | 32 | 2.4826 (+ 0.8862) | 0.9093 (+ 0.4415) |
| gemma-4-26B-A4B-it-Pruned-32-sft-750 | 750 | 32 | 1.3827 (- 0.2137) | 0.3240 (- 0.1438) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1350 | 1350 | 32 | 1.2374 (- 0.359 ) | 0.2130 (- 0.2548) |
| gemma-4-26B-A4B-it-Pruned-32-sft-1800 | 1800 | 32 | 1.1724 (- 0.424 ) | 0.1590 (- 0.3088) |
| gemma-4-26B-A4B-it-Pruned-32-sft-2950 | 2950 | 32 | 1.0924 (- 0.504 ) | 0.0883 (- 0.3795) |
| gemma-4-26B-A4B-it-Pruned-32-sft-3550 | 3550 | 32 | 1.0645 (- 0.5319) | 0.0625 (- 0.4053) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4150 | 4150 | 32 | 1.0532 (- 0.5432) | 0.0518 (- 0.416 ) |
| gemma-4-26B-A4B-it-Pruned-32-sft-4700 | 4700 | 32 | 1.0411 (- 0.5553) | 0.0403 (- 0.4275) |
| gemma-4-26B-A4B-it-Pruned-32-sft-7800 | 7800 | 32 | 1.0088 (- 0.5876) | 0.0088 (- 0.459 ) |
| gemma-4-26B-A4B-it-Pruned-32-sft-10900 | 10900 | 32 | 1.0035 (- 0.5929) | 0.0035 (- 0.4643) |
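The two metric columns are consistent with the standard relation between perplexity and average loss, which also serves as a quick sanity check on the table:

$$\text{Perplexity} = e^{\text{Average Loss}}, \qquad e^{0.4678} \approx 1.5964, \qquad e^{0.0035} \approx 1.0035$$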
```
@misc{huihui4-8b-a4b,
  title  = {{Huihui4-8B-A4B}: A lightweight MoE (Mixture of Experts) conversational model},
  author = {Huihui-ai},
  year   = {2026},
  url    = {https://hf.co/huihui-ai/Huihui4-8B-A4B}
}
```
If you have any questions, please raise an issue or contact us at support@huihui.ai.