Sago is a planning and control-plane tool, not a coding agent. It turns your project idea into a structured, verified plan, tracks progress across phases, and then gets out of the way so a real coding agent can do the building.
You → sago init → sago plan → coding agent builds Phase 1 → sago replan → coding agent builds Phase 2 → ...
Why? AI coding agents (Claude Code, Codex, Cursor, etc.) are excellent at writing code but bad at planning entire projects from scratch. They lose track of requirements, skip steps, and produce inconsistent architectures. Sago owns the spec, plan, and phase gates so the coding agent can focus on writing code.
Sago does not execute project tasks itself. The intended workflow is:
- Sago defines the work (`PROJECT.md`, `REQUIREMENTS.md`, `PLAN.md`)
- Sago reads repo-local agent context (`IMPORTANT.md`, `AGENTS.md`, `SKILLS.md`, `CLAUDE.md`, `.cursorrules`) when present
- Your coding agent executes the work
- Sago records state, reviews completed phases, and updates the plan
- Quick start
- How it works
- Using with Claude Code
- Using with other agents
- Mission control
- Trace Events
- Commands
- Configuration
- Task format
- Why sago
- Sago vs GSD
- Development
- Acknowledgements
- License
```bash
pip install -e .
```

Requires Python 3.11+.
Create a .env file (or export the variables):
```
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-your-key-here
```

Any LiteLLM-supported provider works — OpenAI, Anthropic, Azure, Gemini, etc. The LLM is used for planning and review, not for task execution.
For ChatGPT subscription access via LiteLLM, use the ChatGPT route model (OAuth device flow, no API key required):
```
LLM_PROVIDER=chatgpt
LLM_MODEL=chatgpt/gpt-5.3-codex
```

```bash
sago init
```

Sago prompts for a project name and description. It generates the project scaffold:
```
my-project/
├── PROJECT.md       ← Vision, tech stack, architecture
├── REQUIREMENTS.md  ← What the project must do
├── PLAN.md          ← Atomic tasks with verify commands (after sago plan)
├── STATE.md         ← Progress log (updated as tasks complete)
├── CLAUDE.md        ← Instructions for the coding agent
├── IMPORTANT.md     ← Rules the coding agent must follow
└── .planning/       ← Runtime artifacts (cache, traces)
```
If you provide a description during init, the AI generates PROJECT.md and REQUIREMENTS.md for you. Otherwise, fill them in yourself.
```bash
sago plan
```

Sago reads your PROJECT.md and REQUIREMENTS.md, detects your environment (Python version, OS, platform), and generates a PLAN.md with:
- Atomic tasks grouped into phases
- Task-level dependency ordering via `depends_on`
- Verification commands for each task
- A list of third-party packages needed
- Semantic validation (duplicate IDs, dependency cycles, missing fields, etc.)
Sago validates the plan automatically — if it finds structural errors (cycles, invalid dependencies, missing task IDs), it retries once with error feedback. You're shown validation results and asked to accept or reject before the plan is written.
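Those structural checks amount to a small graph walk over the `depends_on` edges. A stand-alone sketch of the idea (illustrative only, not sago's actual validator):

```python
# Sketch of the structural checks described above: duplicate IDs, unknown
# dependencies, and dependency cycles. Illustrative only — not sago's code.

def validate_plan(tasks: list[tuple[str, list[str]]]) -> list[str]:
    """tasks: (task_id, depends_on) pairs in plan order. Returns error strings."""
    errors = []
    seen = set()
    for tid, _ in tasks:
        if tid in seen:
            errors.append(f"duplicate task id: {tid}")
        seen.add(tid)
    deps = {tid: dl for tid, dl in tasks}
    for tid, dl in deps.items():
        for d in dl:
            if d not in deps:
                errors.append(f"task {tid} depends on unknown task {d}")

    state = {}  # task id -> "visiting" (on the DFS stack) or "done"

    def has_cycle(node: str) -> bool:
        if state.get(node) == "visiting":
            return True  # back edge — we re-entered a task still on the stack
        if state.get(node) == "done":
            return False
        state[node] = "visiting"
        if any(has_cycle(d) for d in deps[node] if d in deps):
            return True
        state[node] = "done"
        return False

    if any(has_cycle(tid) for tid in deps):
        errors.append("dependency cycle detected in depends_on graph")
    return errors
```

With a valid plan this returns an empty list; a plan whose tasks depend on each other in a loop trips the cycle check, which is why sago can retry generation with concrete error feedback.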
Use `sago plan --yes` for fully non-interactive plan generation: it skips both the final accept/reject prompt and the placeholder-content warning that normally protects untouched template files.
Point your coding agent at the project and tell it to follow the plan:
Claude Code:

```bash
cd my-project
claude
# Claude Code reads CLAUDE.md automatically and follows the plan
```

Cursor / Other agents:
Open the project directory. The agent should read PLAN.md and execute tasks in order, running each `<verify>` command and then `sago checkpoint` to record progress.
In a separate terminal, launch mission control:
```bash
sago watch
```

This opens a live dashboard in your browser that shows task completion, file activity, and phase progress — updated every second as your coding agent works through the plan.
After your coding agent finishes a phase, run the phase gate:
```bash
sago replan
```

This reviews completed phases only: it shows findings (warnings, suggestions), saves the review to STATE.md, surfaces actionable recommendations (e.g. "task failed 2+ times — consider replanning"), and optionally lets you adjust the plan before the next phase. Just press Enter to skip replanning if the review looks good.
```bash
sago status    # quick summary + recommendations
sago status -d # detailed per-task breakdown
sago lint-plan # validate plan without running anything
sago doctor    # check project + environment health
```

```
┌─────────────────────────────────────────────────────┐
│ 1. SPEC                                             │
│ You write PROJECT.md + REQUIREMENTS.md              │
│ (or describe your idea and sago generates them)     │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│ 2. PLAN (sago)                                      │
│ Sago calls an LLM to generate PLAN.md:              │
│ - Atomic tasks with verification commands           │
│ - Task-level dependencies (depends_on DAG)          │
│ - Environment-aware (Python version, OS)            │
│ - Lists required third-party packages               │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│ 3. BUILD (your coding agent)                        │
│ Claude Code / Cursor / Aider executes tasks:        │
│ - Runs sago next to get the next task               │
│ - Follows <action> instructions                     │
│ - Runs <verify> commands                            │
│ - Runs sago checkpoint to record progress           │
└──────────────────────┬──────────────────────────────┘
                       ▼
┌─────────────────────────────────────────────────────┐
│ 4. REVIEW (sago replan)                             │
│ Between phases, reviews completed work:             │
│ - Runs ReviewerAgent on finished phases             │
│ - Shows warnings, suggestions, issues               │
│ - Saves review to STATE.md                          │
│ - Optionally updates the plan with feedback         │
└──────────────────────┬──────────────────────────────┘
                       ▼
              (repeat 3→4 for each phase)
                       ▼
┌─────────────────────────────────────────────────────┐
│ 5. TRACK (sago)                                     │
│ sago status shows progress                          │
│ Dashboard shows real-time updates                   │
└─────────────────────────────────────────────────────┘
```
Sago is the project manager. Your coding agent is the developer. The markdown files are the contract between them.
Sago generates a CLAUDE.md file during sago init that Claude Code reads automatically. It tells Claude Code how to follow the plan, execute tasks in order, and record progress via sago checkpoint.
```bash
sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan
claude
```

Claude Code picks up CLAUDE.md on startup and understands the task format. The agent runs `sago next` to get its assignment, executes it, then calls `sago checkpoint` to record progress. When all tasks in a phase are done, sago automatically detects it and prompts the agent to run `sago replan`.
```bash
sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan
```

Copy the sago workflow instructions into Cursor's rules file so the agent knows how to work:

```bash
cp CLAUDE.md .cursorrules
```

Then open the project in Cursor and use Agent mode. Tell it:
> "Run `sago next` to get the next task. Execute it, run the verify command, then run `sago checkpoint` to record progress. Repeat."
Cursor's agent will follow the plan the same way Claude Code does.
```bash
sago init my-project --prompt "A weather dashboard with FastAPI and PostgreSQL"
cd my-project
sago plan
```

Feed the plan and project context to Aider:

```bash
aider --read PLAN.md --read PROJECT.md --read REQUIREMENTS.md
```

Then tell it which task to work on:
> "Execute task 1.1 from PLAN.md. Create the files listed, follow the action instructions, then run the verify command."
Work through tasks one at a time since Aider works best with focused, single-task instructions.
Sago's output is just markdown files. Any coding agent that can read files and run commands works. The agent needs to:
1. Read `PROJECT.md` — the project vision, tech stack, and architecture
2. Read `REQUIREMENTS.md` — what the project must do
3. If `PLAN.md` has a `<dependencies>` block, install those packages first
4. Run `sago next` to get the next task — it shows the task details, dependencies, and context
5. Run each task's `<verify>` command — it must exit 0 before moving on
6. Run `sago checkpoint <task_id>` after each task to record progress in STATE.md
7. Repeat from step 4 until all tasks are complete
The CLAUDE.md file generated by sago init contains these instructions in a format most agents understand. Rename or copy it to whatever your agent expects (.cursorrules, .github/copilot-instructions.md, etc.).
While your coding agent builds the project, run mission control in a separate terminal:
```bash
sago watch                 # launch dashboard (auto-opens browser)
sago watch --port 8080     # use a specific port
sago watch --path ./my-app # point to a different project
```

The dashboard shows:
- Overall progress — progress bar with task count and percentage
- Phase tree — every phase and task with live status icons (done, failed, pending)
- File activity — new and modified files detected in the project directory
- Dependencies — packages listed in PLAN.md
- Per-phase progress bars — at a glance, which phases are done
It polls STATE.md every second — as sago checkpoint records task results, the dashboard updates automatically. No extra dependencies (stdlib HTTP server + os.stat). Mission control also reads trace data from the target project's own .planning/trace.jsonl, so sago watch --path ./other-project no longer leaks runtime artifacts into your current shell directory.
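That polling loop needs nothing beyond the stdlib. A minimal sketch of mtime-based change detection (names here are illustrative, not mission control's internals):

```python
# Detect changes to a file (e.g. STATE.md) by polling os.stat mtimes —
# the same stdlib-only approach described above. Illustrative sketch only.
import os
import time

def poll_for_changes(path: str, on_change, interval: float = 1.0, polls: int = 5) -> None:
    """Invoke on_change(path) whenever the file's mtime advances.

    Also fires on the first successful stat, so a dashboard can render
    the initial state immediately.
    """
    last_mtime = None
    for i in range(polls):
        try:
            mtime = os.stat(path).st_mtime
        except FileNotFoundError:
            mtime = None  # file may not exist yet; keep polling
        if mtime is not None and mtime != last_mtime:
            on_change(path)
        last_mtime = mtime
        if i < polls - 1:
            time.sleep(interval)
```

Polling trades a little latency for zero dependencies and no platform-specific file-watcher APIs — a reasonable fit for a once-per-second dashboard.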
Mission control already includes a Trace tab. To capture plan/replan events, enable tracing in your environment:
```bash
ENABLE_TRACING=true
```

Then run `sago plan` or `sago replan`, followed by:

```bash
sago watch
```

The dashboard will read .planning/trace.jsonl and show the live event feed when trace data is present. Trace spans now keep stable `span_id` values across paired `*_start` / `*_end` events, which makes the feed easier to consume from external tooling.
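Because the trace is JSON Lines with stable `span_id` values, external tooling can pair start and end events in a few lines. A sketch — only `span_id` and the `*_start`/`*_end` naming come from sago's docs; the `event` and `ts` field names here are assumptions:

```python
# Pair *_start / *_end trace events by span_id to compute span durations.
# Assumes each JSON line carries "event", "span_id", and a "ts" timestamp;
# only span_id and the *_start/*_end convention are documented — the rest
# of the schema here is illustrative.
import json

def span_durations(lines):
    starts = {}
    durations = {}
    for line in lines:
        evt = json.loads(line)
        name, span = evt["event"], evt["span_id"]
        if name.endswith("_start"):
            starts[span] = evt["ts"]
        elif name.endswith("_end") and span in starts:
            # Key by (base event name, span id), e.g. ("plan", "a1")
            durations[(name[:-4], span)] = evt["ts"] - starts.pop(span)
    return durations
```

Feed it the lines of `.planning/trace.jsonl` to get per-span durations keyed by event name and span id.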
```bash
sago init                                       # interactive: prompts for name + description
sago init [name]                                # quick scaffold with templates
sago init [name] --prompt "desc"                # generate spec files from a prompt via LLM
sago init -y                                    # non-interactive, all defaults
sago plan                                       # generate PLAN.md from requirements
sago plan --yes                                 # auto-accept plan without confirmation
sago checkpoint 1.1 --notes "done"              # record task completion in STATE.md
sago checkpoint 1.2 -s failed -n "import error" # record failure
sago checkpoint 2.1 -d "Chose JWT"              # record with key decision
sago next                                       # show next actionable task with full details
sago lint-plan                                  # validate PLAN.md for structural/semantic issues
sago lint-plan --strict                         # treat warnings as errors
sago lint-plan --json                           # machine-readable JSON output
sago doctor                                     # run project and environment diagnostics
sago judge                                      # configure the judge/reviewer model
sago replan                                     # phase gate: review completed work, optionally update plan
sago watch                                      # launch mission control dashboard
sago watch --port 8080                          # use a specific port
sago status                                     # show project progress + recommendations
sago status -d                                  # detailed per-task breakdown
```

| Flag | What it does |
|---|---|
| `--force` / `-f` | Regenerate PLAN.md if it already exists |
| `--yes` / `-y` | Fully non-interactive plan generation: skips the placeholder warning and the final confirmation prompt |
Create a .env file in your project directory:
```
LLM_PROVIDER=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=your-key-here
LLM_TEMPERATURE=0.1
LLM_MAX_TOKENS=4096
LOG_LEVEL=INFO
```

Any LiteLLM-supported provider works. Set LLM_MODEL to the provider's model identifier (e.g., claude-sonnet-4-5-20250929, gpt-4o, gemini/gemini-2.0-flash).
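A `.env` file like the one above is just `KEY=value` lines, so tooling can read it without extra dependencies. A minimal loader sketch (illustrative — sago's own loading may differ, and real projects often use python-dotenv):

```python
# Minimal .env reader: KEY=value lines; '#' comments and blanks are ignored.
# Illustrative sketch — not sago's implementation.
import os

def load_env(path: str = ".env") -> dict[str, str]:
    loaded = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip()
            # Don't clobber variables already exported in the shell
            os.environ.setdefault(key.strip(), value.strip())
    return loaded
```

Using `setdefault` means explicitly exported variables win over the file, which matches the "create a .env file (or export the variables)" convention.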
ChatGPT subscription mode is also supported through LiteLLM:
```
LLM_PROVIDER=chatgpt
LLM_MODEL=chatgpt/gpt-5.3-codex
# No LLM_API_KEY required; LiteLLM handles OAuth device flow and token storage.
```

Tasks in PLAN.md use XML inside markdown:
```xml
<phases>
  <dependencies>
    <package>flask>=2.0</package>
    <package>sqlalchemy>=2.0</package>
  </dependencies>

  <review>
  Review instructions for post-phase code review...
  </review>

  <phase name="Phase 1: Setup">
    <task id="1.1">
      <name>Create config module</name>
      <files>src/config.py</files>
      <action>Create configuration with pydantic settings...</action>
      <verify>python -c "import src.config"</verify>
      <done>Config module imports successfully</done>
    </task>

    <task id="1.2" depends_on="1.1">
      <name>Add database layer</name>
      <files>src/db.py</files>
      <action>Create database module using config...</action>
      <verify>python -c "import src.db"</verify>
      <done>Database module imports successfully</done>
    </task>
  </phase>
</phases>
```

- `<dependencies>` — third-party packages needed, with version constraints
- `<review>` — instructions for reviewing each phase's output
- `<task>` — atomic unit of work with files, action, verification, and done criteria
- `depends_on` — optional attribute on `<task>` listing comma-separated task IDs this task depends on. Omit it to depend on all prior tasks in the phase (sequential by default). Use it to express that a task has no dependencies or only specific ones, enabling parallel execution of independent tasks.
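Since the plan is plain XML, agent-side tooling can read it with the standard library alone. A sketch (illustrative — not sago's parser) that extracts each task's id, verify command, and `depends_on` edges:

```python
# Read tasks and their depends_on edges out of a <phases> block using only
# the stdlib. Illustrative sketch — not sago's actual parser.
import xml.etree.ElementTree as ET

def parse_tasks(phases_xml: str) -> list[dict]:
    root = ET.fromstring(phases_xml)
    tasks = []
    for phase in root.iter("phase"):
        for task in phase.iter("task"):
            deps_attr = task.get("depends_on", "")
            tasks.append({
                "id": task.get("id"),
                "phase": phase.get("name"),
                "name": task.findtext("name"),
                "verify": task.findtext("verify"),
                # Missing attribute == default sequential dependency on prior tasks
                "depends_on": [d.strip() for d in deps_attr.split(",") if d.strip()],
            })
    return tasks
```

An agent (or wrapper script) could walk this list in order, run each `verify` command, and call `sago checkpoint` on success.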
The planning problem. AI coding agents are great at writing code for a well-defined task. But ask them to build an entire project from a vague description and they lose track of requirements, skip steps, pick incompatible dependencies, and produce inconsistent architectures. The gap isn't in code generation — it's in project planning.
Sago fills that gap. It uses an LLM to generate a structured, verified plan with atomic tasks, dependency ordering, and environment-aware dependency suggestions. Then it hands off to whatever coding agent you prefer.
Model-agnostic planning. Sago uses LiteLLM for plan generation, so you're not locked into any provider. Use OpenAI, Anthropic, Azure, Gemini, Mistral — whatever gives you the best plans.
Agent-agnostic execution. Sago doesn't care what builds the code. Claude Code, Cursor, Aider, Copilot, a human — anything that can read markdown and follow instructions. Sago generates the plan; you choose the builder.
Spec-first, always. Every sago project has a reviewable spec (PROJECT.md, REQUIREMENTS.md) and a reviewable plan (PLAN.md) before any code is written. You see exactly what will be built and can adjust before spending time or tokens on execution.
GSD (Get Shit Done) is a great project that inspired sago. Both solve the same core problem — AI coding agents are bad at planning — but they take different approaches.
| | Sago | GSD |
|---|---|---|
| What it is | Standalone CLI tool (pip install) | Prompt system loaded into Claude Code |
| Coding agent | Any — Claude Code, Cursor, Aider, Copilot, a human | Claude Code only (uses its sub-agent spawning) |
| Planning LLM | Any LiteLLM provider (OpenAI, Anthropic, Gemini, etc.) | Claude (via Claude Code) |
| Execution | You hand off to your coding agent | GSD spawns executor agents in fresh contexts |
| Context management | Not sago's concern — your agent manages its own context | Core feature — fights "context rot" by spawning fresh 200k-token windows per task |
| Phase transitions | Explicit phase gate (`sago replan`) with code review and optional replan | Automatic wave-based execution with `/gsd:execute-phase` |
| Research | You write PROJECT.md + REQUIREMENTS.md (or generate from a prompt) | Spawns parallel researcher agents to investigate the domain |
| Review | ReviewerAgent runs between phases via `sago replan`, saves findings to STATE.md | `/gsd:verify-work` with interactive debug agents |
When to use GSD: You use Claude Code exclusively and want a fully automated pipeline — research, plan, execute, verify — all within Claude Code's sub-agent system. GSD's context rotation (fresh windows per task) is its killer feature for large projects.
When to use sago: You want to use different coding agents (or switch between them), want to use a non-Claude LLM for planning, or prefer an explicit human-in-the-loop workflow where you review the plan and gate phase transitions yourself. Sago is the project manager and control plane; you pick the developer.
```bash
fyn sync --group dev                               # install the project and dev dependencies
fyn run test                                       # run all tests
fyn run test -- tests/test_parser.py -v            # single file
fyn run test -- tests/test_parser.py::test_name -v # single test
fyn run lint                                       # lint
fyn run format                                     # format
fyn run format-check                               # formatting check
fyn run typecheck                                  # type check (strict mode)
fyn run check                                      # full local quality gate
skylos src/                                        # dead code detection
```

This project was vibecoded with Claude Code.
Sago takes inspiration from:
- GSD (Get Shit Done) — spec-driven development and sub-agent orchestration for Claude Code
- Claude Flow — multi-agent orchestration platform with wave-based task coordination
Dead code is kept in check by Skylos.
Apache 2.0. See LICENSE.
