tsor13/special12b
The following is a model trained by [...suspense...] that is meant to:
- follow instructions better than pretrained models and be more diverse / less mode-collapsed than instruct models;
- be a really good, approximately Bayesian in-context learner;
- fit a data generation process;
- be calibrated over distributions of possible outputs with respect to a population or epistemic uncertainty.
It is initialized from google/gemma-3-12b-pt.
This repo is the special variant.

- Description: initialized from gemma-3-12b-pt, with the chat-token embeddings copied over.
- Pros: most token-efficient (only tags around the output).
- Cons: may confuse the description with the first input; formatting is farther from the Gemma chat template.

Example with inputs:

```
DESCRIPTION
INPUT1
<start_of_turn>OUTPUT1<end_of_turn>
INPUT2
<start_of_turn>OUTPUT2<end_of_turn>
```

Example without inputs:

```
DESCRIPTION
<start_of_turn>OUTPUT1<end_of_turn>
<start_of_turn>OUTPUT2<end_of_turn>
```
There are three variants of the model for now:
| Field | special | extra | chat |
|---|---|---|---|
| Model card | tsor13/special12b | tsor13/extra12b | tsor13/chat12b |
| Description | From gemma-3-12b-pt, but with chat-token embeddings copied over | From gemma-3-12b-pt, but with chat-token embeddings copied over | From gemma-3-12b-it, trained to preserve & assume chat format |
| Pros | • Most token-efficient (only tags around the output) | • Distinguishes description vs. first input • Closer to chat format • Best generations (?) | • Drop-in for Gemma-chat template • Works on original chat logs, even OOD |
| Cons | • May not tell description from first input • Formatting farther from Gemma chat template | • More tokens than special | • Many extra tokens |
| Example w/ inputs | `DESCRIPTION\nINPUT1\n<start_of_turn>OUTPUT1<end_of_turn>\nINPUT2\n<start_of_turn>OUTPUT2<end_of_turn>` | `<start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>input\nINPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>input\nINPUT2<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>` | `<start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nINPUT1<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nINPUT2<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>` |
| Example w/o inputs | `DESCRIPTION\n<start_of_turn>OUTPUT1<end_of_turn>\n<start_of_turn>OUTPUT2<end_of_turn>` | `<start_of_turn>description\nDESCRIPTION<end_of_turn>\n<start_of_turn>output\nOUTPUT1<end_of_turn>\n<start_of_turn>output\nOUTPUT2<end_of_turn>` | `<start_of_turn>user\nGenerate …\nDescription: DESCRIPTION\n\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT1<end_of_turn>\n<start_of_turn>user\nGenerate.<end_of_turn>\n<start_of_turn>model\nOUTPUT2<end_of_turn>` |
At the moment, I recommend:
- special for most use cases (it is token-efficient and gets the best loss on the training data);
- extra when generation quality matters more than token efficiency;
- chat for chat-style data or conversations.
This model/repo is a work in progress - expect updates.
Loading model example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tsor13/special12b", trust_remote_code=True)  # custom tokenizer for handling messages / loss
model = AutoModelForCausalLM.from_pretrained("tsor13/special12b", device_map="auto")
```
The model has its own chat-style message format, with the following roles:
- `description` (optional): a description of the generating process, or some information meant to instantiate a prior;
- `input` (optional): any variables the model is not responsible for predicting, but that can be used to condition generation;
- `output`: what the model will actually predict / generate.
For example,
```python
messages = [
    {"role": "description", "content": "Capitals"},
    {"role": "input", "content": "France"},
    {"role": "output", "content": "Paris"},
    {"role": "input", "content": "Japan"},
]
```
To templatize the messages, you can use the tokenizer:
```python
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)
print(formatted_prompt)  # start_generation adds the <start_of_turn> token to condition the model for generation
```
Output:
```
Capitals
France
<start_of_turn>Paris<end_of_turn>
Japan
<start_of_turn>
```
The data for the model to emulate / generate is wrapped in `<start_of_turn>` / `<end_of_turn>` tokens.
The description and inputs are not wrapped in anything, so do not expect the model to generate those tokens; instead, focus on the wrapped output tokens.
Messages are separated by newlines.
During training, loss is ONLY calculated on the output tokens and the `<end_of_turn>` token. Thus, the model is only designed to generate / predict probabilities after `<start_of_turn>` and until `<end_of_turn>`; everything else is out of distribution for the model and not recommended.
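To make that masking concrete, here is a minimal sketch of how such labels could be built for the formatted prompt above. This is an illustration of the scheme described in the previous paragraph, not the repo's actual training code, and it assumes the custom tokenizer exposes the standard `convert_tokens_to_ids`:

```python
# Illustration only: build labels that are ignored (-100) everywhere except the
# output tokens and the closing <end_of_turn>, mirroring the masking described above.
start_id = tokenizer.convert_tokens_to_ids("<start_of_turn>")
end_id = tokenizer.convert_tokens_to_ids("<end_of_turn>")

input_ids = tokenizer(formatted_prompt)["input_ids"]
labels, in_output = [], False
for tok in input_ids:
    if tok == start_id:
        in_output = True
        labels.append(-100)       # no loss on the <start_of_turn> tag itself
    elif tok == end_id and in_output:
        in_output = False
        labels.append(tok)        # loss IS computed on <end_of_turn>
    else:
        labels.append(tok if in_output else -100)
```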
Once you have the formatted text, you can tokenize as normal:
```python
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
```
Let's look at what the model does. In this case there is a single correct answer, so let's inspect the model's probabilities after `<start_of_turn>`:
```python
import torch

with torch.no_grad():
    output = model(**inputs)

logits = output.logits[0, -1, :]
probs = torch.nn.functional.softmax(logits, dim=-1)
top_probs, top_indices = torch.topk(probs, 10)

print("\nTop 10 probabilities for first output token:")
for i, (prob, idx) in enumerate(zip(top_probs, top_indices)):
    token = tokenizer.decode(idx)
    print(f"{i+1:2d}. '{token}' -> {prob.item():.4f}")
```
Output:
```
Top 10 probabilities for first output token:
 1. 'Tokyo' -> 0.9738
 2. 'Tok' -> 0.0084
 3. '東京' -> 0.0024
 4. 'Ky' -> 0.0018
 5. ' Tokyo' -> 0.0017
 6. 'T' -> 0.0017
 7. 'To' -> 0.0014
 8. 'Osaka' -> 0.0010
 9. 'Toy' -> 0.0007
10. 'tok' -> 0.0007
```
Great! Almost all of the probability mass is on the correct answer, Tokyo.
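The first-token view can understate confidence when an answer spans several tokens. If you want the probability the model assigns to a complete answer, you can score the whole output span (through `<end_of_turn>`) with teacher forcing. A minimal sketch, reusing `model`, `tokenizer`, and the capitals `messages` from above; the `score_candidate` helper is our own, not an API of the repo, and it assumes the prompt tokenization is unchanged when the candidate is appended after `<start_of_turn>`:

```python
import torch

def score_candidate(msgs, candidate):
    """Sum the log-probabilities of `candidate` + <end_of_turn> given the formatted prompt."""
    prompt = tokenizer.messages_to_text(msgs, start_generation=True)
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + candidate + "<end_of_turn>", return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)         # position t predicts token t+1
    targets = full_ids[0, 1:]
    token_log_probs = log_probs.gather(-1, targets[:, None]).squeeze(-1)
    return token_log_probs[prompt_len - 1:].sum().item()          # only the candidate + <end_of_turn> span

print(score_candidate(messages, "Tokyo"))   # close to 0 => probability near 1
print(score_candidate(messages, "Kyoto"))   # should be much more negative
```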
Let's try an example with many possible reasonable choices / a harder-to-describe distribution. For example, say that I'm interested in modeling "board games that I like". I may be hard-pressed to actually describe what it is that I like about games, but I could provide a few examples pretty easily.
```python
messages = [
    {"role": "output", "content": "Dune: Imperium"},
    {"role": "output", "content": "Acquire"},
    {"role": "output", "content": "Catan"},
    {"role": "output", "content": "Tigris and Euphrates"},
    {"role": "output", "content": "Brass: Birmingham"},
]
```
Given these example outputs, the model will try to generate more outputs like them.
```python
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```
Outputs:
```
Terraforming Mars: Ares Expedition
Power Grid
Splendor
Bohnanza
```
Not too bad!
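Since one of the goals is to be less mode-collapsed than a typical instruct model, it can be worth drawing a larger batch of samples and checking how varied they actually are. A rough sketch; the batch size and the explicit `do_sample=True` are our own choices, not recommendations from the repo:

```python
from collections import Counter

n_gens = 64  # arbitrary batch size
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,  # make the sampling explicit
    stop_strings=["<end_of_turn>"],
    tokenizer=tokenizer,
)
games = [
    tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True).strip()
    for i in range(n_gens)
]
print(f"{len(set(games))} unique games out of {n_gens}")
for game, count in Counter(games).most_common(5):
    print(f"{count:3d}  {game}")
```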
You can also specify just the description.

Input:
```python
messages = [
    {"role": "description", "content": "Descriptive colors"},
]
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
    print()
```
Output:
```
Blue
blue
Light Beige
B.J.W.S
```
By default, the model is trained only to 1) emulate the example outputs if examples are provided, or 2) generate data based on the description. Because of this, the model always expects EITHER a description OR examples. If you want it to act slightly more like an instruction-following chat model, you can add a description such as the following:
```python
messages = [
    {"role": "description", "content": "You are a helpful assistant who outputs the requested content."},
    {"role": "input", "content": "A poem about a shark"},
]
```
To generate:
```python
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(f"Generation {i}:")
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```
Some example generations:
```
Generation 0:
Shark in the sea, With teeth sharp and keen, No fish dares come near, For fear it may be seen. Their sleek and streamlined bodies Glide through the water with ease, And their powerful jaws
Generation 1:
I'm a fearsome creature that roams the sea, With sharp teeth and fins that can cut through debris. I'm a swift and agile hunter, with a streamlined body, Who gl
Generation 2:
The shark prowls beneath the waves, Seeking prey in ocean depths unknown. With powerful jaws and a deadly prow, It's a formidable predator of the zone. Its dorsal fin cuts through
Generation 3:
The ocean's depths hold secrets yet untold,
Where the shark swims with a heart of gold.
A majestic creature with scales of steel,
A symbol of power that cannot be denied.
```
Finally, let's look at a synthetic data generation task. For example, maybe we want to generate situations to do social reasoning over, along with whether or not they are awkward. When there are multiple variables to condition on or generate, JSON-formatted outputs work well, since that is what the model is used to.
Input:
```python
import json

messages = [
    {"role": "description", "content": "Situations to do social reasoning over, along with whether or not it is an awkward situation."},
    {"role": "output", "content": json.dumps({
        "situation": "You're at a party and you realize that your shirt is on backwards.",
        "is_awkward": True,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "While at work, your boss commends you on a job well done.",
        "is_awkward": False,
    })},
    {"role": "output", "content": json.dumps({
        "situation": "Realizing you forgot to bring your passport to the airport.",
        "is_awkward": True,
    })},
]
```
```python
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_gens = 4
inputs = tokenizer([formatted_prompt] * n_gens, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
for i in range(n_gens):
    print(tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True))
```
Output:
{"situation": "Having to borrow money from your friend who is in a tight financial situation.", "is_awkward": true}
{"situation": "Going through a drive-thru window and realizing you forgot your wallet at home.", "is_awkward": false}
{"situation": "Being the only person at a party that you know.", "is_awkward": false}
{"situation": "Getting recognized by a total stranger at the grocery store.", "is_awkward": false}
A few tips and tricks:
- Do not expect the model to do multi-turn chats. It is designed to be stateless and to treat each data point as "exchangeable" (roughly iid).
- If all you want is one reasonable answer, then a chat model is likely a better fit. However, if you want to generate many reasonable answers / diverse examples, this model is a better fit.
- The model is quite good at perspective taking / steering if you provide many examples.
- The model is reasonably good at expressing epistemic uncertainty over unsure outputs when you sample several times; the frequencies of the sampled answers give a rough probability estimate (see the sketch below).
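To put the last tip into practice, one option is to sample the same prompt many times and read the empirical answer frequencies as a rough probability estimate. A small sketch; the description, question, and sample count below are made up for illustration and are not from the repo:

```python
from collections import Counter

messages = [
    {"role": "description", "content": "Answers to yes/no trivia questions."},  # hypothetical description
    {"role": "input", "content": "Is Canberra the capital of Australia?"},       # hypothetical question
]
formatted_prompt = tokenizer.messages_to_text(messages, start_generation=True)

n_samples = 32  # arbitrary; more samples give a smoother estimate
inputs = tokenizer([formatted_prompt] * n_samples, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=True,
                         stop_strings=["<end_of_turn>"], tokenizer=tokenizer)
answers = [
    tokenizer.decode(outputs[i][inputs["input_ids"][i].shape[0]:], skip_special_tokens=True).strip().lower()
    for i in range(n_samples)
]
for answer, count in Counter(answers).most_common():
    print(f"{answer!r}: {count / n_samples:.2f}")
```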