---
name: MiniMax-M2.7-REAP-172B-A10B-GGUF
base_model: MiniMaxAI/MiniMax-M2.7
license: other
pipeline_tag: text-generation
language: en
library_name: llama.cpp
tags:
- Cerebras
- MiniMaxAI
- M2.7
- REAP
- GGUF
- static quantization
---

# MiniMax-M2.7-REAP-172B-A10B-GGUF

Original model: saricles/MiniMax-M2.7-REAP-172B-A10B-BF16

This is the 230-billion-parameter MiniMax M2.7 model with 25% of its experts pruned using REAP (Router-weighted Expert Activation Pruning), reducing it to roughly 172B parameters, then converted to GGUF with llama.cpp and statically quantized.
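For intuition, here is a toy sketch of router-weighted expert pruning: experts are scored by how much router probability mass they receive over a calibration set, and the lowest-scoring 25% per layer are dropped. This is a simplified illustration, not the actual REAP implementation (which also accounts for expert output magnitudes); the `gate_weights` input and the exact saliency formula are assumptions.

```python
import numpy as np

def reap_prune(gate_weights: np.ndarray, prune_frac: float = 0.25) -> np.ndarray:
    """Toy router-weighted expert pruning for one MoE layer.

    gate_weights: (num_tokens, num_experts) router probabilities collected
    on a calibration set (an illustrative stand-in for the real pipeline).
    Returns the indices of the experts to keep.
    """
    saliency = gate_weights.mean(axis=0)                  # avg router mass per expert
    num_keep = int(round(gate_weights.shape[1] * (1.0 - prune_frac)))
    keep = np.argsort(saliency)[-num_keep:]               # highest-saliency experts
    return np.sort(keep)

# Example: 64 experts, prune 25% -> keep 48.
rng = np.random.default_rng(0)
fake_gates = rng.dirichlet(np.ones(64), size=4096)        # fake calibration routing
print(len(reap_prune(fake_gates)), "experts kept of 64")
```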

Command sequence, using a source build of llama.cpp for conversion and the llama-quantize binary from /opt/homebrew/Cellar/llama.cpp/8680:

```sh
# Download the BF16 weights, convert to GGUF, then quantize.
hf download saricles/MiniMax-M2.7-REAP-172B-A10B-BF16 --local-dir MiniMax-M2.7-REAP-172B-A10B-BF16
python convert_hf_to_gguf.py MiniMax-M2.7-REAP-172B-A10B-BF16 --outfile MiniMax-M2.7-REAP-172B-A10B-BF16.gguf
llama-quantize MiniMax-M2.7-REAP-172B-A10B-BF16.gguf MiniMax-M2.7-REAP-172B-A10B-Q4_K_M.gguf Q4_K_M
```
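Once quantized, the GGUF can be loaded by any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings, assuming the Q4_K_M output filename from the step above; the context size and sampling parameters are arbitrary placeholders.

```python
from llama_cpp import Llama

# Path matches the llama-quantize output above (adjust if you named it differently).
llm = Llama(
    model_path="MiniMax-M2.7-REAP-172B-A10B-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise if you have the memory
    n_gpu_layers=-1,  # offload every layer to GPU/Metal when available
)

out = llm("Explain expert pruning in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```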
GGUF details:
- Model size: 173B params
- Architecture: minimax-m2
- Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit (see the loop sketch below)
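One plausible way to produce that range of quantizations is to loop the same llama-quantize binary over several preset types. The variants below (Q3_K_M, Q4_K_M, Q5_K_M, Q6_K, Q8_0) are standard llama.cpp presets covering 3- to 8-bit, but whether they match this repo's exact file set is an assumption.

```python
import subprocess

SRC = "MiniMax-M2.7-REAP-172B-A10B-BF16.gguf"
# Assumed quant mix; swap in e.g. Q3_K_S or Q5_K_S as needed.
QUANTS = ["Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]

for q in QUANTS:
    out = f"MiniMax-M2.7-REAP-172B-A10B-{q}.gguf"
    subprocess.run(["llama-quantize", SRC, out, q], check=True)
    print("wrote", out)
```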

