# ChatTLA-20B (GGUF)

> TBA: this model is still in development; this repository is a public mirror to free up space on my machine.

A fine-tuned gpt-oss-20b for TLA+ formal specification generation.
## Model Details
- Base model: openai/gpt-oss-20b (20.9B params, MoE ~3.6B active)
- Fine-tuning: LoRA (r=16, alpha=32, all-linear target modules), 10 epochs on 57 curated TLA+ examples
- Format: GGUF Q8_0 (22.3 GB)
- Training hardware: 2x Quadro RTX 8000 (96GB VRAM)
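The LoRA settings above can be written as a `peft` configuration. A minimal sketch: only `r`, `lora_alpha`, and the all-linear target modules come from this card; the dropout and task type are assumptions, not confirmed training values.

```python
from peft import LoraConfig

# From the model card: r=16, alpha=32, all-linear targets.
# lora_dropout and task_type are assumptions for illustration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",  # adapt every linear layer, incl. MoE experts
    lora_dropout=0.05,            # assumption
    task_type="CAUSAL_LM",        # assumption
)
```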
## Usage with Ollama

```shell
# Download and register the model
ollama create chattla:20b -f Modelfile

# Generate a TLA+ spec
ollama run chattla:20b "Write a TLA+ spec for mutual exclusion"
```
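The `Modelfile` referenced above is not reproduced in this card. A minimal sketch of what it might contain (the GGUF filename and system prompt are assumptions):

```
FROM ./chattla-20b-q8_0.gguf
# Hypothetical system prompt; the actual one shipped with the model may differ.
SYSTEM You write TLA+ formal specifications.
```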
## Benchmark Results
| Metric (5 test prompts) | Base (gpt-oss:20b) | ChatTLA-20B |
|---|---|---|
| Structural score | 0.89–1.00 | 0.84–1.00 |
| SANY parse (specs passing) | 0/5 | 0/5 |
| Output quality | Mixed (sometimes not TLA+) | Consistent TLA+ structure |
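The structural-score rubric is not published with this card. A hypothetical checker along these lines (the marker set and equal weighting are assumptions) illustrates what such a metric might count:

```python
import re

def structural_score(spec: str) -> float:
    """Hypothetical metric: fraction of standard TLA+ structural markers present."""
    checks = [
        re.search(r"-{4,}\s*MODULE\s+\w+\s*-{4,}", spec),   # module header
        re.search(r"^={4,}\s*$", spec, re.MULTILINE),        # module terminator
        "VARIABLE" in spec,                                  # state variables
        re.search(r"\bInit\s*==", spec),                     # initial predicate
        re.search(r"\bNext\s*==", spec),                     # next-state relation
    ]
    return sum(bool(c) for c in checks) / len(checks)

spec = """---- MODULE Mutex ----
VARIABLES pc
Init == pc = "idle"
Next == pc' = "cs"
====
"""
print(structural_score(spec))  # 1.0: all five markers present
```

A score in this style measures only surface shape, which is why both models can score near 1.0 while still failing SANY parsing.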