Original Model Link: saricles/MiniMax-M2.7-REAP-172B-A10B-BF16
---
name: MiniMax-M2.7-REAP-172B-A10B-GGUF
base_model: MiniMaxAI/MiniMax-M2.7
license: other
pipeline_tag: text-generation
tasks: text-generation
language: en
library_name: llama.cpp
tags:
- Cerebras
- MiniMaxAI
- M2.7
- REAP
- GGUF
- static quantization
---
This is MiniMax M2.7, originally a 230-billion-parameter model, with 25% of its experts pruned using REAP (Router-weighted Expert Activation Pruning), leaving roughly 172B parameters (about 10B active per token), then converted to GGUF with llama.cpp and statically quantized.
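To illustrate the idea behind REAP, here is a minimal sketch of router-weighted expert pruning: experts are scored over a calibration set by their router gate weight times their output activation magnitude, and the lowest-scoring 25% are dropped. The scoring function here is a simplified stand-in, not the exact REAP criterion.

```python
import numpy as np

def reap_prune(router_logits: np.ndarray,
               expert_out_norms: np.ndarray,
               prune_frac: float = 0.25) -> np.ndarray:
    """Sketch of router-weighted expert pruning (REAP-style).

    router_logits:    (tokens, experts) raw router scores on calibration data.
    expert_out_norms: (tokens, experts) norm of each expert's output per token.
    Returns sorted indices of the experts to KEEP. The saliency metric
    (mean of softmax gate weight * activation norm) is a simplification
    chosen for illustration, not the paper's exact formula.
    """
    n_experts = router_logits.shape[1]
    # Softmax over experts to get gate weights per token.
    gates = np.exp(router_logits - router_logits.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)
    # Router-weighted activation saliency, averaged over the calibration set.
    saliency = (gates * expert_out_norms).mean(axis=0)
    # Keep the most salient (1 - prune_frac) of experts.
    n_keep = int(round(n_experts * (1.0 - prune_frac)))
    keep = np.argsort(saliency)[-n_keep:]
    return np.sort(keep)
```

Pruning 25% of experts this way is what takes the model from ~230B to ~172B total parameters while leaving each token's active parameter count largely intact.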
Command sequence, using convert_hf_to_gguf from a source checkout of llama.cpp and llama-quantize from /opt/homebrew/Cellar/llama.cpp/8680:

```
hf download saricles/MiniMax-M2.7-REAP-172B-A10B-BF16 --local-dir MiniMax-M2.7-REAP-172B-A10B-BF16
python -m convert_hf_to_gguf ~/Downloads/MiniMax-M2.7-REAP-172B-A10B-BF16
llama-quantize MiniMax-M2.7-REAP-172B-A10B-BF16.gguf Q4_K_M
```
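As background on what the llama-quantize step does, here is a toy sketch of block-wise low-bit static quantization: weights are split into fixed-size blocks, each block gets a single float scale, and values are rounded to 4-bit integers. The real Q4_K_M format is considerably more elaborate (super-blocks, per-block minimums, quantized scales); this only shows the core scale-and-round idea.

```python
import numpy as np

def quantize_q4_blocks(w: np.ndarray):
    """Toy block-wise 4-bit quantization (illustrative, not the GGUF format).

    Splits w into blocks of 32 weights, stores one float32 scale per block,
    and rounds each weight to a signed 4-bit integer in [-8, 7].
    """
    assert w.size % 32 == 0, "pad weights to a multiple of the block size"
    blocks = w.reshape(-1, 32).astype(np.float32)
    # One scale per block, chosen so the largest magnitude maps to +/-7.
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_q4_blocks(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate float weights from quantized blocks."""
    return (q.astype(np.float32) * scale).reshape(-1)
```

The per-element reconstruction error is bounded by half a quantization step (scale / 2), which is why static quantization preserves model quality reasonably well at 4 bits and above.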
Quantizations available:
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
Base model: MiniMaxAI/MiniMax-M2.7