How to use MiniMaxAI/SynLogic-Mix-3-32B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MiniMaxAI/SynLogic-Mix-3-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/SynLogic-Mix-3-32B")
model = AutoModelForCausalLM.from_pretrained("MiniMaxAI/SynLogic-Mix-3-32B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use MiniMaxAI/SynLogic-Mix-3-32B with vLLM:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MiniMaxAI/SynLogic-Mix-3-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MiniMaxAI/SynLogic-Mix-3-32B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
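Because the server exposes an OpenAI-compatible API, the same request can be issued from Python instead of curl. A minimal sketch using only the standard library; the URL and port assume the default `vllm serve` settings above, and the actual network call is left commented out until the server is running:

```python
import json
import urllib.request

# Same chat-completions payload as the curl example above.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "MiniMaxAI/SynLogic-Mix-3-32B",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

# Build the POST request with a JSON body.
request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(json.dumps(payload, indent=2))  # the body that will be sent

# Uncomment once the server is up:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library works the same way by pointing its base URL at the server.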
How to use MiniMaxAI/SynLogic-Mix-3-32B with SGLang:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MiniMaxAI/SynLogic-Mix-3-32B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MiniMaxAI/SynLogic-Mix-3-32B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, start the SGLang server in Docker:

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "MiniMaxAI/SynLogic-Mix-3-32B" \
  --host 0.0.0.0 \
  --port 30000
```

The same curl call as above works against the Dockerized server.

How to use MiniMaxAI/SynLogic-Mix-3-32B with Docker Model Runner:

```sh
docker model run hf.co/MiniMaxAI/SynLogic-Mix-3-32B
```
Zero-Mix-3 is an advanced multi-domain reasoning model trained using Zero-RL (reinforcement learning from scratch) on a diverse mixture of logical reasoning, mathematical, and coding data. Built on Qwen2.5-32B-Base, this model demonstrates the power of combining diverse verifiable reasoning tasks in a unified training framework.
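The "verifiable" part of these tasks is what makes Zero-RL from a base model feasible: each prompt ships with a programmatic checker, so the reward signal needs no learned judge. An illustrative sketch of such a rule-based reward (the `Answer:` extraction convention here is an assumption for illustration, not the actual SynLogic verifier):

```python
def extract_final_answer(completion: str) -> str:
    """Pull the model's final answer, assuming it ends with 'Answer: <x>'."""
    marker = "Answer:"
    idx = completion.rfind(marker)
    return completion[idx + len(marker):].strip() if idx != -1 else ""

def verifiable_reward(completion: str, gold: str) -> float:
    """Binary RL reward: 1.0 iff the extracted answer matches the gold label."""
    return 1.0 if extract_final_answer(completion) == gold else 0.0

print(verifiable_reward("2 + 2 equals... Answer: 4", "4"))  # 1.0
print(verifiable_reward("I think Answer: 5", "4"))          # 0.0
```

Because logic puzzles, math problems, and coding tasks can all be checked this way, they can share one reward interface inside a unified RL training loop.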
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "MiniMaxAI/SynLogic-Mix-3-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What is 2 + 2?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
| Model | BBEH | KOR-Bench | LiveCodeBench | AIME 2024 | GPQA Diamond |
|---|---|---|---|---|---|
| DeepSeek-R1-Distill-Qwen-32B | 19.2 | 66.6 | 57.2 | 72.6 | 63.1 |
| DeepSeek-R1-Zero-Qwen-32B | - | - | 40.2 | 47.0 | 55.0 |
| Zero-Mix-2 (Math+Coding) | 18.5 | 58.6 | 39.5 | 34.5 | 55.2 |
| Zero-Mix-3 (SynLogic+Math+Coding) | 28.6 | 65.0 | 40.7 | 35.8 | 57.5 |
Key Achievements:
Comparison with Zero-Mix-2 (Math+Coding only) demonstrates that adding SynLogic logical reasoning data improves performance across all reported benchmarks, with the largest gains on logic-heavy evaluations (BBEH: 18.5 → 28.6; KOR-Bench: 58.6 → 65.0).
```bibtex
@misc{liu2025synlogic,
      title={SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond},
      author={Junteng Liu and Yuanxiang Fan and Zhuo Jiang and Han Ding and Yongyi Hu and Chi Zhang and Yiqi Shi and Shitong Weng and Aili Chen and Shiqi Chen and Yunan Huang and Mozhi Zhang and Pengyu Zhao and Junjie Yan and Junxian He},
      year={2025},
      eprint={2505.19641},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.19641},
}
```