# J.A.R.V.I.S.: Alvin OS Personal AI Assistant (MVP)
A locally run, multi-agent AI assistant built on the Alvin OS reasoning pipeline. J.A.R.V.I.S. works through a 12-step swarm-orchestrated pipeline, remembers every turn, and speaks with dry wit and absolute loyalty to Alvin.
## Quick Start

```bash
# 1. Install dependencies
pip install -r requirements.txt

# 2. Ensure Ollama is running with a model (e.g., llama3.1)
ollama pull llama3.1

# 3. Run
python main.py
```
## Architecture (12-Step Pipeline)

| Step | Name | MVP Status |
|---|---|---|
| 1 | State Reader | ✅ Implemented |
| 2 | Contract Classifier | ✅ Implemented |
| 3 | Dial Setting | ✅ Implemented |
| 4 | Archetype Blender | ✅ Implemented |
| 5 | J.A.R.V.I.S. Persona Injection | ✅ Implemented |
| 6 | Council of Three | ✅ Implemented |
| 7 | Self-Scoring Agent | ⏳ Stub (disabled in config) |
| 8 | Mirror Check (Drift Guard) | ⏳ Stub (disabled in config) |
| 9 | [ANCHOR] Gate | ⏳ Stub (disabled in config) |
| 10 | Memory Writer | ✅ Implemented |
| 11 | Proactive Loop Detector | ⏳ Stub (disabled in config) |
| 12 | Response Delivery | ✅ Implemented |
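The pipeline above can be sketched as a chain of per-turn step functions, each taking and returning a shared turn state. This is an illustrative outline, not the actual `jarvis/pipeline.py` API: the `TurnState` fields, function names, and `ENABLED` flag dict are all assumptions, and only three of the twelve steps are shown.

```python
# Hedged sketch: the 12-step pipeline as an ordered list of step
# functions over a shared per-turn state. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TurnState:
    user_input: str
    context: dict = field(default_factory=dict)
    response: str = ""

def state_reader(state: TurnState) -> TurnState:        # Step 1
    state.context["history_loaded"] = True              # load prior turns
    return state

def contract_classifier(state: TurnState) -> TurnState: # Step 2
    state.context["contract"] = "chat"                  # classify the request
    return state

def response_delivery(state: TurnState) -> TurnState:   # Step 12
    state.response = f"As you wish: {state.user_input}"
    return state

# Stubbed steps (7, 8, 9, 11) would sit in this list too,
# gated off by config flags until a later sprint enables them.
PIPELINE = [state_reader, contract_classifier, response_delivery]

def run_turn(user_input: str) -> TurnState:
    state = TurnState(user_input)
    for step in PIPELINE:
        state = step(state)
    return state
```

Keeping each step as a plain function is what makes the orchestration framework swappable, as noted in the handoff notes.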
## Tech Stack
- Orchestration: smolagents (Hugging Face)
- LLM Backend: Ollama (local) with optional OpenAI/Anthropic fallback
- Memory: ChromaDB (vector) + SQLite (structured)
- UI: Terminal (Rich), FastAPI+HTML later
- Language: Python 3.11+
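The dual memory store (SQLite for structured turn logs, ChromaDB for vector recall) can be pictured with a small sketch. This is an assumption-laden stand-in, not the real `jarvis/memory.py`: it uses stdlib `sqlite3` and a plain dict where a ChromaDB collection and real embeddings would go.

```python
# Hedged sketch of the unified memory: SQLite holds structured turn
# records; the dict stands in for a ChromaDB vector collection.
import sqlite3

class Memory:
    def __init__(self) -> None:
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE turns (role TEXT, content TEXT)")
        self.vectors: dict[str, None] = {}  # placeholder for ChromaDB

    def write(self, role: str, content: str) -> None:
        # Structured record for exact-history queries.
        self.db.execute("INSERT INTO turns VALUES (?, ?)", (role, content))
        # Real code would embed `content` and upsert into the vector store.
        self.vectors[content] = None

    def history(self) -> list[tuple[str, str]]:
        return self.db.execute("SELECT role, content FROM turns").fetchall()
```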
## Configuration

Edit `config.yaml` or set environment variables:

```yaml
llm:
  provider: ollama          # or openai / anthropic
  model: llama3.1
  base_url: http://localhost:11434/v1
```

Environment overrides: `JARVIS_LLM_PROVIDER`, `JARVIS_LLM_MODEL`, `JARVIS_LLM_API_KEY`, `JARVIS_LLM_BASE_URL`.
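The override precedence (environment variable beats `config.yaml` value) can be sketched as follows. The `DEFAULTS` dict and `load_llm_config` helper are hypothetical, only two of the four keys are shown, and the real loader in `jarvis/config.py` may differ:

```python
# Hedged sketch: env vars named JARVIS_LLM_<KEY> override file defaults.
import os

DEFAULTS = {"provider": "ollama", "model": "llama3.1"}  # stands in for config.yaml

def load_llm_config() -> dict:
    cfg = dict(DEFAULTS)
    for key in cfg:
        value = os.environ.get(f"JARVIS_LLM_{key.upper()}")
        if value:
            cfg[key] = value  # env var wins over the file default
    return cfg
```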
## Project Structure

```
jarvis/
  __init__.py
  config.py        # YAML/JSON config dataclasses
  llm.py           # Pluggable LLM backends
  memory.py        # SQLite + ChromaDB unified memory
  pipeline.py      # 12-step Alvin OS pipeline
  ui.py            # Rich terminal interface
main.py            # Entry point
config.yaml        # Default configuration
requirements.txt   # Dependencies
```
## Handoff Notes
- Single-user system: assumes the user is Alvin.
- Privacy-first: all data on-disk, no telemetry.
- Modular pipeline: each step is a function; easy to swap orchestration frameworks.
- Future sprints will add self-scoring, mirror check, proactive loops, and a web UI.