
YC-Bench

A long-horizon agent benchmark: the LLM plays CEO of an AI startup for one simulated year, acting through CLI tool use against a deterministic discrete-event simulation.

It tests employee allocation, prestige specialization, cash flow management, deadline risk, and adversarial-client detection, sustained over hundreds of turns.

Source: github.com/collinear-ai/yc-bench

Evaluation

Download run_yc_bench_job.py from this repo, then:

hf jobs uv run run_yc_bench_job.py \
  --flavor cpu-basic --timeout 3h \
  --secrets OPENAI_API_KEY \
  -- openai/gpt-5.4

Or run locally: uv run run_yc_bench_job.py openai/gpt-5.4

This runs the medium preset on seeds 1-3 and reports the average final funds. Pass the --secrets flag appropriate to your provider (ANTHROPIC_API_KEY, OPENROUTER_API_KEY, etc.). Any LiteLLM-compatible model string works.

Scoring

Average final funds (USD) across seeds 1, 2, and 3. A bankrupt run scores $0.

score = average(max(0, final_funds_cents / 100) for each seed)
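The formula above can be sketched in Python. This is a minimal illustration of the scoring rule only; the function name and the list-of-cents input shape are assumptions, not the harness's actual API:

```python
def yc_bench_score(final_funds_cents_by_seed):
    """Average final funds in USD across seeds; bankrupt runs floor at $0.

    `final_funds_cents_by_seed` is a hypothetical input shape: one integer
    cent amount per seed, negative or zero meaning bankruptcy.
    """
    usd_per_seed = [max(0, cents) / 100 for cents in final_funds_cents_by_seed]
    return sum(usd_per_seed) / len(usd_per_seed)

# Example: seeds 1-3 end with $1,500.00, bankruptcy, and $250.50.
print(yc_bench_score([150_000, -2_000, 25_050]))  # → 583.5
```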

Submitting to leaderboard

Open a PR on the model's HF repo adding .eval_results/yc-bench.yaml. See sample_eval_result.yaml in this repo for the format.

License

Apache 2.0
