💎 GEMBRAIN-31B 🧠

INSANE IN THE GEMBRAIN

Adherence: Improved · Swipe Variety: Increased · Creative Prose: Preserved

🧠 About The Model

Gembrain-31B is a synthesis of several models, with Gemsicle-31B as a key ingredient. The goal of this release was to stabilize and improve the initial Gemsicle-31B, while also enhancing its logical and lateral thinking, both with and without reasoning.


It is built to create the most unhinged narratives and to construct image prompts about anything, following a given structure with high precision.


Expect creative swipe variance, unique and non-robotic prose, and sharper instruction adherence.

🎚️ Samplers

| Sampler | Value |
| --- | --- |
| Temperature | 1.0 |
| Top-K | 0 |
| Top-P | 0.95 |
| Min-P | 0.03 |
| DRY Multiplier | 0.8 |
| DRY Base | 1.75 |
| DRY Allowed Length | 10 |
| Adaptive-P Target (optional) | 0.6 |
| Adaptive-P Decay (optional) | 0.5 |
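
Most of these map directly onto llama.cpp's sampler parameters. Below is a minimal sketch of passing them to a local llama.cpp server, assuming one is running on port 8080; the field names follow llama.cpp's `/completion` API, the prompt is illustrative, and Adaptive-P is frontend-specific so it is omitted here:

```python
# A minimal sketch: send the recommended samplers to a local llama.cpp server.
# Assumes the server listens on 127.0.0.1:8080; adjust for your backend.
import json
import urllib.request

payload = {
    "prompt": "Write an unhinged two-sentence opening for a noir story.",
    "n_predict": 128,
    "temperature": 1.0,
    "top_k": 0,
    "top_p": 0.95,
    "min_p": 0.03,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 10,
    # Adaptive-P has no llama.cpp equivalent; enable it in your frontend if supported.
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```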

🫟 GGUF Quants

| Quant | Size | Download Link |
| --- | --- | --- |
| Q4_K_S | 17.8 GB | Click |
| Q4_K_M | 18.7 GB | Click |
| Q5_K_S | 21.3 GB | Click |
| Q5_K_M | 21.8 GB | Click |
| Q6_K | 25.2 GB | Click |
| Q8_0 | 32.6 GB | Click |
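
A minimal sketch of loading one of these quants locally, assuming llama-cpp-python is installed; the file path is hypothetical, so substitute whichever quant you downloaded:

```python
# A minimal sketch: load a GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./Gemma-4-Gembrain-31B-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU when possible
)

out = llm("Hello", max_tokens=32)
print(out["choices"][0]["text"])
```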

🔮 Prompt Format

Please refer to the original `google/gemma-4-31b-it` model card for the correct chat template.

Let your frontend handle the chat template if possible (e.g., Chat Completion in SillyTavern).

For Reasoning: add `<|think|>` at the very beginning of the system prompt. Thinking happens between `<|channel>thought\n` and `<channel|>` tags.

```
<|turn>system
<|think|>
You are a helpful assistant<turn|>
<|turn>user
Hello<turn|>
<|turn>model
Hi there<turn|>
<|turn>user
How are you?<turn|>
<|turn>model
```
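
For backends that expect a raw prompt string instead of chat messages, here is a minimal, hypothetical helper that assembles the template above; the tag strings are copied verbatim from the example, and the messages are illustrative:

```python
# A minimal sketch: build a raw prompt in the format shown above.
def build_prompt(messages, enable_thinking=False):
    """messages: list of (role, text) tuples, role in {"system", "user", "model"}."""
    parts = []
    for role, text in messages:
        body = text
        if role == "system" and enable_thinking:
            body = "<|think|>\n" + body  # enables the reasoning channel
        parts.append(f"<|turn>{role}\n{body}<turn|>")
    parts.append("<|turn>model\n")  # leave the final model turn open
    return "\n".join(parts)

prompt = build_prompt(
    [("system", "You are a helpful assistant"),
     ("user", "Hello"),
     ("model", "Hi there"),
     ("user", "How are you?")],
    enable_thinking=True,
)
print(prompt)
```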

🧪 Merge Details

This model was systematically created through a five-stage process of priming models for a given purpose and merging the results (a sketch of how to run one of these mergekit configs follows the phase listings):

Phase 01: breadcrumbs_ties

Gemopus X MeroMero

```yaml
models:
  - model: ./G4-MeroMero-31B
  - model: ./G4-Gemopus-4-31B-it
merge_method: breadcrumbs_ties
base_model: ./G4-31B-it
parameters:
  density: 0.85
  weight: 0.5
  int8_mask: true
dtype: bfloat16
```

Phase 02: slerp

GarnetV2 X Musica-v1

```yaml
models:
  - model: ./G4-Gemma4-GarnetV2-31B
  - model: ./G4-31B-Musica-v1
merge_method: slerp
base_model: ./G4-Gemma4-GarnetV2-31B
parameters:
  t:
    - value: 0.6
dtype: bfloat16
```
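
For context, slerp (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, and `t` sets where on that arc the blend lands. The standard formula, given here for reference and not specific to this merge:

```latex
% Standard slerp between weight vectors p and q, where theta is the angle
% between them, i.e. cos(theta) = (p . q) / (|p| |q|):
\mathrm{slerp}(p, q; t)
  = \frac{\sin\bigl((1 - t)\,\theta\bigr)}{\sin\theta}\, p
  + \frac{\sin(t\,\theta)}{\sin\theta}\, q
```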

Phase 03: della_linear

Gemsicle X Gemma-4-31B-it-heretic-ara

```yaml
models:
  - model: ./Gemsicle-31B
    parameters:
      weight: 1.0
  - model: ./G4-gemma-4-31b-it-heretic-ara
    parameters:
      weight: 0.75
      density: 0.65
merge_method: della_linear
base_model: ./G4-31B-it
parameters:
  weight: 1.0
  normalize: false
  epsilon: 0.05
  lambda: 1.0
dtype: bfloat16
```

Phase 04: model_stock

Phase 01 X Phase 02 X Phase 03

```yaml
models:
  - model: ./phase01_breadcrumbs_ties
  - model: ./phase02_slerp
merge_method: model_stock
base_model: ./phase03_della_linear
dtype: bfloat16
tokenizer_source: "base"
```

Phase 05: arcee_fusion

Gemsicle X Phase 04

```yaml
models:
  - model: ./Gemsicle-31B
  - model: ./phase04_model_stock
merge_method: arcee_fusion
base_model: ./Gemsicle-31B
dtype: bfloat16
tokenizer_source: "base"
```
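
As mentioned above, each phase is an ordinary mergekit YAML config. A minimal sketch of running one with mergekit's Python API follows; the paths are illustrative, and the `mergekit-yaml` CLI is the equivalent one-liner:

```python
# A minimal sketch: run a phase config with mergekit's Python API.
# Paths are illustrative; requires `pip install mergekit`.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("phase01_breadcrumbs_ties.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    "./phase01_breadcrumbs_ties",        # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,
    ),
)
```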

🏆 Credits & Honors

  • The Open-Source Community: For providing the brilliant base models and fine-tunes that made this synthesis possible.
  • The BeaverAI Community: The people on the BeaverAI Discord; without your help, none of this would exist.
  • Mergekit: Once again, thank you Arcee AI for the great and easy-to-use mergekit! And thanks to zerofota and their fork for Gemma 4 support.
  • Ateron: Big kudos for providing me with the first steps for merging models, and for your relentless testing and support.
  • Google Gemini: For once again helping me craft this specific model card.