How to use PursuitOfDataScience/Argonne-2.5-ctx13568 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PursuitOfDataScience/Argonne-2.5-ctx13568")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("PursuitOfDataScience/Argonne-2.5-ctx13568", dtype="auto")
```

How to use PursuitOfDataScience/Argonne-2.5-ctx13568 with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PursuitOfDataScience/Argonne-2.5-ctx13568"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PursuitOfDataScience/Argonne-2.5-ctx13568",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
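The same endpoint can be called from Python. The sketch below mirrors the curl command using only the standard library; `build_request` and `ask` are illustrative helpers (not part of any release), and `ask` assumes the vLLM server above is running on localhost:8000.

```python
import json
import urllib.request

# Build the same request the curl command above sends (OpenAI-compatible
# chat-completions schema). Constructing the request needs no server.
def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "PursuitOfDataScience/Argonne-2.5-ctx13568",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires the vLLM server to be up.
def ask(prompt: str) -> str:
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```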
How to use PursuitOfDataScience/Argonne-2.5-ctx13568 with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PursuitOfDataScience/Argonne-2.5-ctx13568" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PursuitOfDataScience/Argonne-2.5-ctx13568",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "PursuitOfDataScience/Argonne-2.5-ctx13568" \
    --host 0.0.0.0 \
    --port 30000
```
How to use PursuitOfDataScience/Argonne-2.5-ctx13568 with Docker Model Runner:
```shell
docker model run hf.co/PursuitOfDataScience/Argonne-2.5-ctx13568
```
Argonne-2.5-ctx13568 is a long-context continuation of PursuitOfDataScience/Argonne2.5-base.
| Component | Specification |
|---|---|
| Parameters | 1,273,807,360 |
| Layers | 28 transformer blocks |
| Hidden size | 1,792 |
| Attention heads | 14 query / 7 key-value (GQA) |
| Context length | 13,568 tokens |
| Vocabulary size | 151,669 |
| Position encoding | RoPE (θ = 10,000) |
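The head layout in the table can be sanity-checked with a little arithmetic. A sketch of the grouped-query attention (GQA) dimensions it implies; the head dimension is inferred here as hidden size divided by query heads, which is not stated in the table:

```python
# GQA dimensions implied by the architecture table (shapes only, not the
# model's actual implementation).
hidden_size = 1792
n_q_heads = 14
n_kv_heads = 7

head_dim = hidden_size // n_q_heads      # inferred head dimension
group_size = n_q_heads // n_kv_heads     # query heads served by each KV head

# Projection output widths: Q spans the full hidden width,
# K/V are halved by sharing each KV head across 2 query heads.
q_proj_out = n_q_heads * head_dim
kv_proj_out = n_kv_heads * head_dim
print(head_dim, group_size, q_proj_out, kv_proj_out)
```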
Long-context training details:

| Item | Value |
|---|---|
| Start checkpoint | PursuitOfDataScience/Argonne2.5-base |
| Long-context tokens trained | 16.0B tokens |
| Final cumulative tokens | 92,050,960,384 |
| Batch size per GPU | 4 |
| Gradient accumulation | 1 |
| Effective batch | 108,544 tokens |
| Precision | bf16 autocast |
| Checkpoint dtype | bfloat16 |
| Weight format | 3 sharded safetensors |

The long-context training data is drawn from the 16k-32k LongMino pool. This model uses the Qwen3 tokenizer family via the Qwen2Tokenizer compatibility class.
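The effective batch figure is consistent with the other training settings above. The GPU count is not stated, but it falls out of the arithmetic, as this sketch shows:

```python
# Tokens per optimizer step = batch_per_gpu * grad_accum * context_len * num_gpus.
# The GPU count is inferred (not stated in the table above).
context_len = 13_568
batch_per_gpu = 4
grad_accum = 1
effective_batch_tokens = 108_544

num_gpus = effective_batch_tokens // (batch_per_gpu * grad_accum * context_len)
assert batch_per_gpu * grad_accum * context_len * num_gpus == effective_batch_tokens
print(num_gpus)  # -> 2
```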
The release was built from the GitHub main branch codebase: https://github.com/PursuitOfDataScience/ArgonneAI/tree/main
Key scripts are in the linked repository. Inference example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "PursuitOfDataScience/Argonne-2.5-ctx13568"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype=torch.bfloat16,
)

prompt = "Write a short paragraph about scientific computing at Argonne National Laboratory."
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    temperature=0.8,
    top_p=0.9,
    top_k=50,
    do_sample=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
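When feeding long documents, it is worth guarding the prompt against the 13,568-token window before generation. A minimal sketch; `clamp_prompt` is a hypothetical helper, not part of the release, and it keeps the most recent tokens when the prompt is too long:

```python
MAX_CONTEXT = 13_568  # the model's max_position_embeddings

def clamp_prompt(token_ids: list[int], max_new_tokens: int = 256,
                 max_context: int = MAX_CONTEXT) -> list[int]:
    # Leave room for the tokens we intend to generate,
    # then keep only the most recent tokens that fit.
    budget = max_context - max_new_tokens
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

# Usage with the tokenizer above (sketch):
# ids = tokenizer(prompt)["input_ids"]
# ids = clamp_prompt(ids)
```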
Loading the model requires `trust_remote_code=True`. `max_position_embeddings` is 13,568.

```bibtex
@misc{argonne25ctx13568,
  author = {PursuitOfDataScience},
  title = {Argonne-2.5-ctx13568},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/PursuitOfDataScience/Argonne-2.5-ctx13568}
}
```