J.A.R.V.I.S. β€” Alvin OS Personal AI Assistant (MVP)

A locally run, multi-agent AI assistant built on the Alvin OS reasoning pipeline. J.A.R.V.I.S. reasons through a 12-step, swarm-orchestrated pipeline, remembers every turn, and speaks with dry wit and absolute loyalty to Alvin.

Quick Start

# 1. Install dependencies
pip install -r requirements.txt

# 2. Ensure Ollama is running with a model (e.g., llama3.1)
ollama pull llama3.1

# 3. Run
python main.py

Architecture (12-Step Pipeline)

Step  Name                            MVP Status
----  ------------------------------  ----------------------------
1     State Reader                    βœ… Implemented
2     Contract Classifier             βœ… Implemented
3     Dial Setting                    βœ… Implemented
4     Archetype Blender               βœ… Implemented
5     J.A.R.V.I.S. Persona Injection  βœ… Implemented
6     Council of Three                βœ… Implemented
7     Self-Scoring Agent              ⏳ Stub (disabled in config)
8     Mirror Check (Drift Guard)      ⏳ Stub (disabled in config)
9     [ANCHOR] Gate                   ⏳ Stub (disabled in config)
10    Memory Writer                   βœ… Implemented
11    Proactive Loop Detector         ⏳ Stub (disabled in config)
12    Response Delivery               βœ… Implemented
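Because each step is a plain function (see Handoff Notes), the pipeline can be pictured as an ordered chain of step functions over a shared turn state. The sketch below is a hypothetical minimal skeleton, not the actual `pipeline.py`; only two illustrative steps are shown, and stubbed steps would simply be left out of the list:

```python
from dataclasses import dataclass, field


@dataclass
class TurnState:
    """Mutable state threaded through the pipeline for one user turn."""
    user_input: str
    response: str = ""
    notes: dict = field(default_factory=dict)


def state_reader(state: TurnState) -> TurnState:
    # Step 1: load whatever context the later steps need.
    state.notes["state_read"] = True
    return state


def response_delivery(state: TurnState) -> TurnState:
    # Step 12: produce the final reply (placeholder text here).
    if not state.response:
        state.response = f"Very good. You said: {state.user_input}"
    return state


# Enabled steps run in order; disabled/stub steps are omitted from the list.
PIPELINE = [state_reader, response_delivery]


def run_pipeline(user_input: str) -> TurnState:
    state = TurnState(user_input=user_input)
    for step in PIPELINE:
        state = step(state)
    return state
```

This shape makes swapping orchestration frameworks cheap: any framework that can call a list of `TurnState -> TurnState` functions in order can host the pipeline.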

Tech Stack

  • Orchestration: smolagents (Hugging Face)
  • LLM Backend: Ollama (local) with optional OpenAI/Anthropic fallback
  • Memory: ChromaDB (vector) + SQLite (structured)
  β€’ UI: Terminal (Rich); a FastAPI + HTML web UI is planned
  • Language: Python 3.11+
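The structured (SQLite) half of the memory layer can be sketched in a few lines; this is an illustrative stand-in, not the real `memory.py`, and the ChromaDB vector half is omitted entirely:

```python
import sqlite3


class StructuredMemory:
    """SQLite side of the memory layer; vector recall (ChromaDB) is not shown."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS turns "
            "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def write(self, role: str, content: str) -> None:
        # Step 10 (Memory Writer) would call something like this each turn.
        self.db.execute(
            "INSERT INTO turns (role, content) VALUES (?, ?)", (role, content)
        )
        self.db.commit()

    def recent(self, n: int = 5) -> list[tuple[str, str]]:
        # Return the last n turns in chronological order.
        rows = self.db.execute(
            "SELECT role, content FROM turns ORDER BY id DESC LIMIT ?", (n,)
        ).fetchall()
        return list(reversed(rows))
```

Using `:memory:` keeps the sketch self-contained; the real assistant would point this at an on-disk path, in line with the privacy-first, everything-on-disk design.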

Configuration

Edit config.yaml or set env vars:

llm:
  provider: ollama      # or openai / anthropic
  model: llama3.1
  base_url: http://localhost:11434/v1

Environment overrides:

  • JARVIS_LLM_PROVIDER
  • JARVIS_LLM_MODEL
  • JARVIS_LLM_API_KEY
  • JARVIS_LLM_BASE_URL
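One plausible way these overrides could layer on top of config.yaml (env var wins over YAML value, which wins over the built-in default) is shown below. This is an illustrative sketch, not the actual `config.py`:

```python
import os
from dataclasses import dataclass


@dataclass
class LLMConfig:
    # Built-in defaults, matching the shipped config.yaml.
    provider: str = "ollama"
    model: str = "llama3.1"
    api_key: str = ""
    base_url: str = "http://localhost:11434/v1"


def load_llm_config(yaml_values: dict) -> LLMConfig:
    """Apply precedence: env var > YAML value > dataclass default."""
    cfg = LLMConfig(**yaml_values)
    cfg.provider = os.getenv("JARVIS_LLM_PROVIDER", cfg.provider)
    cfg.model = os.getenv("JARVIS_LLM_MODEL", cfg.model)
    cfg.api_key = os.getenv("JARVIS_LLM_API_KEY", cfg.api_key)
    cfg.base_url = os.getenv("JARVIS_LLM_BASE_URL", cfg.base_url)
    return cfg
```

For example, exporting `JARVIS_LLM_MODEL=mistral` would switch models without touching config.yaml.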

Project Structure

jarvis/
  __init__.py
  config.py       # YAML/JSON config dataclasses
  llm.py          # Pluggable LLM backends
  memory.py       # SQLite + ChromaDB unified memory
  pipeline.py     # 12-step Alvin OS pipeline
  ui.py           # Rich terminal interface
main.py           # Entry point
config.yaml       # Default configuration
requirements.txt  # Dependencies

Handoff Notes

  • Single-user system: assumes the user is Alvin.
  • Privacy-first: all data on-disk, no telemetry.
  • Modular pipeline: each step is a function; easy to swap orchestration frameworks.
  • Future sprints will add self-scoring, mirror check, proactive loops, and a web UI.