Different sizes of same quants?

#2
by inputout

I was wondering why the same quants yield different GGUF sizes. Just out of interest, why is that? (differences of up to 1.3 GB!)
255.3 GB Q5_K_M @bartowski
254.2 GB Q5_K_M unsloth
254.0 GB Q5_K_M mradermacher

My quants use this fork for quantization, which results in slightly different per-layer quantization layouts for MoE models:

https://github.com/ggml-org/llama.cpp/pull/12727

Unsloth uses something else, not sure if it's public

mradermacher uses mainline llama.cpp, afaik
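
If you want to see exactly where the layouts diverge, here's a rough sketch (not any of the pipelines above, just the gguf-py package that ships with llama.cpp, `pip install gguf`) that lists tensors whose quantization type differs between two GGUFs of the "same" quant; the paths are placeholders:

```python
# Rough sketch: compare per-tensor quantization types between two GGUF files.
# Assumes the gguf-py package from llama.cpp; file paths come from the command line.
import sys
from gguf import GGUFReader

def tensor_types(path: str) -> dict[str, str]:
    """Map tensor name -> quant type name (e.g. Q5_K, Q6_K, F32)."""
    reader = GGUFReader(path)  # memory-maps the file, so huge models are fine
    return {t.name: t.tensor_type.name for t in reader.tensors}

a = tensor_types(sys.argv[1])
b = tensor_types(sys.argv[2])
for name in sorted(a.keys() & b.keys()):
    if a[name] != b[name]:
        # tensors quantized to different types are where the size gap comes from
        print(f"{name}: {a[name]} vs {b[name]}")
```

For models split into multiple GGUF shards you'd run this per shard pair, but the idea is the same: the per-tensor type choices, not the imatrix, are what move the total size.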

Interesting, I had assumed it was related to different imatrices, but there are apparently even more fundamental reasons.
Do you think that, under these circumstances, the PPL of the quants is still comparable as a metric?
I found it interesting that, in a test, your PPLs were better than the others' on my own “uncontaminated” text, but the others were better on WikiText, as if the imatrix were “benchmaxxed” on WikiText (if that's even technically possible). It could also be a coincidence.

@inputout if the imatrix is calibrated on WikiText, it will get better PPL results on WikiText, which is why it's generally a good idea to do PPL on a different dataset than the one used to calibrate the imatrix.

yeah, the imatrix itself won't change the actual size of the result; it only affects the scales and offsets chosen while quantizing

but as ilintar said, using the same dataset for the imatrix and for PPL can give a minor advantage to the result, which is why your use of an "uncontaminated" text will most likely give the most accurate results
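
for anyone following along, PPL itself is just the exponential of the average negative log-likelihood per token, so it's only comparable across quants when all of them are scored on the same held-out text. A tiny illustration (the log-probabilities below are made-up placeholder numbers, not real measurements):

```python
# Perplexity = exp(mean negative log-likelihood per token).
# The per-token log-probs below are made-up placeholders, not real measurements.
import math

def perplexity(token_log_probs: list[float]) -> float:
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Same held-out text scored by two hypothetical quants; lower PPL = better fit.
# The text must not overlap the imatrix calibration data, or the comparison is
# biased toward whichever quant was calibrated on it.
quant_a = [-2.1, -0.4, -3.0, -1.2]
quant_b = [-2.3, -0.5, -2.8, -1.6]
print(perplexity(quant_a), perplexity(quant_b))
```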

do you mind sharing your results? I'm highly curious
