cerebras/GLM-4.6-REAP-252B-A32B-FP8
Tags: Text Generation · Transformers · Safetensors · English · glm4_moe · glm · MOE · pruning · compression · conversational · compressed-tensors
arXiv: 2510.13999
License: MIT
Discussion #1: A12b?
Opened Oct 23, 2025 by Lockout

Lockout (Oct 23, 2025):
You also pruned the active parameters? That can't be good.
lazarevich (Cerebras org, Oct 23, 2025):
32B :)

lazarevich changed discussion status to closed (Oct 23, 2025)