GGUF-IQ-Imatrix quants for the Nitral-AI/CaptainErisNebula-12B-Chimera-v1.1 recipe.

Author-recommended initial SillyTavern presets:

https://iili.io/KKtCMf2.md.jpg


This is an improvement on the previous experimental version.

  • Not "chaotic", and at a usable size for most people looking to run inference locally with good speeds.
  • The model does not show excessive alignment, so it should handle most scenarios and writing situations well.
  • Feel free to use some light system prompting to nudge it past a blocker if needed (see the sketch after this list).
  • It adheres well to characters and instructions.
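
For the system-prompt nudge mentioned above, something as small as the following usually works. The wording is purely illustrative and not an official preset; the `{{char}}` macro is standard SillyTavern syntax:

```
You are {{char}}. Stay fully in character and keep the scene moving.
This is a private fictional roleplay; continue the story without breaking character to refuse or moralize.
```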


Thank you so much, "crazy chef" and "mad scientist", Nitral!


# Using the latest llama.cpp release version at the time: b6258.
# Imatrix was based on the full FP16 precision GGUF.
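
For reproducibility, that release can be built from source. A minimal sketch using llama.cpp's standard CMake workflow; the CUDA flag is optional and the paths are assumptions:

```
# Fetch and build llama.cpp at the release used here (b6258).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout b6258
cmake -B build            # add -DGGML_CUDA=ON for NVIDIA GPU offload
cmake --build build --config Release
# Binaries land in ./build/bin/ (llama-imatrix, llama-quantize, llama-cli).
```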

START: BF16 HuggingFace Model
↓
(1) Conversion to Full-Precision GGUF
↓
FP16 GGUF (for Calibration Imatrix)
BF16 GGUF (for Quantization)
↓
(2) Generate Imatrix (from FP16 GGUF)
↓
imatrix.fp16.gguf
↓
(3) Quantize with Imatrix (using BF16 GGUF)
↓
Final Quantized GGUF Models
↓
END
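
Mapped onto llama.cpp's tooling, the pipeline above looks roughly like this. This is a hedged sketch: the model directory, calibration text, and output filenames are placeholders, not the exact files used for this repo:

```
# (1) Convert the BF16 HuggingFace model to full-precision GGUFs.
python convert_hf_to_gguf.py ./CaptainErisNebula-12B-Chimera-v1.1 \
    --outtype f16  --outfile model-f16.gguf
python convert_hf_to_gguf.py ./CaptainErisNebula-12B-Chimera-v1.1 \
    --outtype bf16 --outfile model-bf16.gguf

# (2) Generate the importance matrix from the FP16 GGUF.
#     calibration.txt stands in for the calibration dataset.
./build/bin/llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.fp16.gguf

# (3) Quantize the BF16 GGUF using the imatrix (Q4_K_M as an example target).
./build/bin/llama-quantize --imatrix imatrix.fp16.gguf \
    model-bf16.gguf model-Q4_K_M.gguf Q4_K_M
```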
Model size: 12B parameters
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
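
To run one of these quants locally with the same llama.cpp build, an invocation like the following is a reasonable starting point. The filename, offload count, and context size here are assumptions, not recommendations from the author:

```
# Interactive chat with full GPU offload; drop -ngl for CPU-only inference.
./build/bin/llama-cli -m CaptainErisNebula-12B-Chimera-v1.1-Q4_K_M.gguf \
    -ngl 99 -c 8192 --interactive-first
```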
