Exodia is a local-first harness for triage, verification, human clarification, and controlled execution of technical work items.
It is built for environments where an agent should not jump directly from "ticket received" to "code changed" without policy checks, scoped execution rules, and a safe way to ask humans for missing context.
The best feature in the runtime is the human-in-the-loop clarification flow:
- an agent can ask a clarifying question on Slack and/or on the ticket itself
- the run pauses safely in `awaiting_response`
- the next run resumes from the first valid answer
- high-signal answers are distilled into memory and reused later
This makes the workflow much more realistic than a simple "ticket in, PR out" demo. The system is designed to stop when confidence is not high enough, ask, wait, resume, and remember.
A short transcript of that flow lives in DEMO.md.
The runtime is built to:
- read work items from a configured source
- map the request to a product and repository target
- reuse operational memory and optional semantic retrieval
- verify payloads, changed paths, command preflight, and public hygiene
- pause for human-in-the-loop clarification when confidence is not high enough
- execute branch, commit, and pull request steps only when policy allows it
Many ticket-to-code automation flows are unsafe because they collapse triage, policy, and execution into one step.
Exodia separates those concerns into explicit stages:
- triage
- verification
- execution
- reporting
The goal is not autonomous merge automation. The goal is a controlled local harness for serious engineering workflows.
It is not intended for deployment as a publicly exposed service.
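The stage separation described above can be sketched as a small pipeline in which any stage may halt the run. This is an illustrative sketch only: the stage names mirror this document, but the stage bodies are placeholders, not the actual logic in `src/orchestration/`.

```javascript
// Illustrative pipeline sketch; real orchestration lives in src/orchestration/.
// Each stage returns { ok } and a failing stage halts the run with an audit trail.
const stages = [
  { name: "triage", run: (ticket) => ({ ok: true, notes: `triaged ${ticket.key}` }) },
  { name: "verification", run: () => ({ ok: true }) },
  { name: "execution", run: () => ({ ok: true }) },
  { name: "reporting", run: () => ({ ok: true }) },
];

function runPipeline(ticket) {
  const trail = [];
  for (const stage of stages) {
    const result = stage.run(ticket);
    trail.push({ stage: stage.name, ok: result.ok });
    if (!result.ok) {
      // Halting here is the whole point: no stage may be skipped or collapsed.
      return { status: "halted", at: stage.name, trail };
    }
  }
  return { status: "completed", trail };
}
```

The design choice is that a halt is a normal, reportable outcome, not an error path.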
Clarification requests can be routed to Slack, ticket comments, or both.
Typical flow:
- the triage or verification step finds ambiguity
- the runtime posts a question on Slack and/or on the ticket
- the ticket is marked as waiting for input
- the next run collects replies
- the first valid answer wins
- the run resumes from that answer
- useful answers are captured into ticket memory and semantic memory
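The "first valid answer wins" rule from the list above can be sketched as follows. The reply shape and the validity check are assumptions for illustration, not the actual `src/interaction/` implementation.

```javascript
// Sketch of "first valid answer wins": replies may arrive from Slack and/or
// ticket comments; the earliest non-empty reply is selected deterministically.
function firstValidAnswer(replies) {
  const sorted = [...replies].sort((a, b) => a.timestamp - b.timestamp);
  return (
    sorted.find((r) => typeof r.text === "string" && r.text.trim().length > 0) ??
    null // no valid answer yet: the ticket stays in awaiting_response
  );
}
```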
This behavior is implemented in the interaction layer under src/interaction/.
For a compact example of the full Slack -> answer -> resume -> memory loop, see DEMO.md.
```
Ticket source
  -> Jira adapter (mock | mcp)
  -> TriageAgent
       -> llm-context adapter
       -> ticket memory
       -> llm-memory adapter
       -> optional SQL diagnostics
       -> optional human clarification request
  -> VerificationAgent
       -> payload checks
       -> path policy
       -> command preflight
       -> public hygiene scan
       -> optional human clarification request
  -> ExecutionAgent
       -> bitbucket adapter (mock | mcp)
       -> optional SQL diagnostics
  -> Reports
       -> triage report
       -> execution report
       -> final report
       -> audit trail
```
Main areas:
- src/orchestration/: top-level run orchestration
- src/adapters/: runtime adapters and external integration boundaries
- src/agents/: triage, verification, and execution stages
- src/interaction/: deferred question/answer loop and transport routing
- src/mcp/: MCP bridge and registry/client glue
- src/security/: public hygiene scanning and redaction
- src/logging/: structured run logging
- src/monitoring/: local monitoring over run summaries and JSONL logs
- src/scheduling/: manual-first scheduling profiles with lock protection
- src/reporting/: final report generation
- config/: publishable example configurations plus local setup guidance
Guard rails in the runtime include:
- `allowMerge` blocked by default
- `allowRealPrs` requiring explicit enablement
- trust-level separation between `mock`, `mcp-readonly`, and `mcp-write`
- allowlists for repositories, base branches, commands, and MCP actions
- public-hygiene scanning for sensitive strings and placeholder safety
- redaction across reports, logs, and semantic memory
- deferred human-in-the-loop interaction when confidence is insufficient
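A default-deny policy gate over these guard rails might look like the sketch below. The flag names (`allowMerge`, `allowRealPrs`) and allowlist categories follow this document, but the function shape and field names are assumptions, not the runtime's actual API.

```javascript
// Hedged sketch of a default-deny execution policy gate.
// Every check must pass explicitly; absence of a flag means "denied".
function checkExecutionPolicy(config, request) {
  if (request.action === "merge" && !config.allowMerge) {
    return { allowed: false, reason: "allowMerge is blocked by default" };
  }
  if (request.realPr && !config.allowRealPrs) {
    return { allowed: false, reason: "allowRealPrs requires explicit enablement" };
  }
  if (!config.allowedRepositories.includes(request.repository)) {
    return { allowed: false, reason: `repository not allowlisted: ${request.repository}` };
  }
  if (!config.allowedBaseBranches.includes(request.baseBranch)) {
    return { allowed: false, reason: `base branch not allowlisted: ${request.baseBranch}` };
  }
  return { allowed: true };
}
```

Returning a reason string with every denial keeps the audit trail self-explanatory.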
Requirements:
- Node.js 22+
- local configuration derived from the example files in `config/`
Install:
npm install

Common flows:
node src/cli.js triage --config ./config/harness.config.example.json --dry-run
node src/cli.js run --config ./config/harness.config.example.json --dry-run
node src/cli.js audit --config ./config/harness.config.example.json
node src/cli.js review --config ./config/harness.config.example.json
node src/cli.js questions --config ./config/harness.config.example.json
node src/cli.js monitor --config ./config/harness.config.example.json --limit 20
node src/cli.js schedule-run --config ./config/harness.config.example.json --profile triage

Exodia keeps the agent runtime provider-agnostic.
Current provider order:
- `codex-cli` for low-cost local testing through a subprocess wrapper
- `openai` for the first production-grade API integration
- `claude`, `openrouter`, `ollama`, and `lmstudio` as follow-up providers
The codex-cli provider expects a local command that:
- reads one JSON envelope from stdin
- returns one JSON object on stdout
- uses `EXODIA_AGENT_RUNTIME_PHASE` to decide whether it is handling `analysis`, `audit`, or `implementation`
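A minimal wrapper honoring this contract could look like the sketch below. The response fields (`summary`, `findings`, `patches`) are invented for illustration; the actual contract is defined by scripts/agent-runtime-codex-wrapper.mjs.

```javascript
// Minimal sketch of a codex-cli-compatible wrapper:
// one JSON envelope in on stdin, one JSON object out on stdout.
function buildResponse(envelope, phase) {
  switch (phase) {
    case "analysis":
      return { ok: true, phase, summary: `analyzed ${envelope.payload.ticket.key}` };
    case "audit":
      return { ok: true, phase, findings: [] }; // placeholder field
    case "implementation":
      return { ok: true, phase, patches: [] }; // placeholder field
    default:
      return { ok: false, error: `unknown phase: ${phase}` };
  }
}

// Entry point guarded by an env flag so this module can also be loaded
// without blocking on stdin; a real wrapper would always read stdin.
if (process.env.RUN_WRAPPER === "1") {
  let raw = "";
  process.stdin.on("data", (chunk) => (raw += chunk));
  process.stdin.on("end", () => {
    const envelope = JSON.parse(raw);
    const phase = process.env.EXODIA_AGENT_RUNTIME_PHASE || envelope.phase;
    process.stdout.write(JSON.stringify(buildResponse(envelope, phase)));
  });
}
```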
The runtime passes a payload shaped like this:
```json
{
  "phase": "analysis",
  "provider": "codex-cli",
  "model": "",
  "requireStructuredOutput": true,
  "payload": {
    "ticket": { "key": "GEN-100", "summary": "..." }
  }
}
```

For a local codex-driven setup, configure `agentRuntime.provider = "codex-cli"` in an untracked config and point `agentRuntime.providers["codex-cli"].command` to your wrapper.
This repository includes a ready wrapper at scripts/agent-runtime-codex-wrapper.mjs.
Recommended local wiring:
- `command = "node"`
- `args = ["./scripts/agent-runtime-codex-wrapper.mjs"]`
- `env.EXODIA_CODEX_COMMAND = "codex"`
- optional `env.EXODIA_CODEX_MODEL`
- optional `env.EXODIA_CODEX_USE_OSS = "true"` plus `env.EXODIA_CODEX_LOCAL_PROVIDER = "ollama" | "lmstudio"`
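Put together, that wiring might look like this in an untracked config. All key names below come from the wiring list above; the exact nesting of the provider block is an assumption about the config schema.

```json
{
  "agentRuntime": {
    "provider": "codex-cli",
    "providers": {
      "codex-cli": {
        "command": "node",
        "args": ["./scripts/agent-runtime-codex-wrapper.mjs"],
        "env": {
          "EXODIA_CODEX_COMMAND": "codex",
          "EXODIA_CODEX_USE_OSS": "true",
          "EXODIA_CODEX_LOCAL_PROVIDER": "ollama"
        }
      }
    }
  }
}
```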
For direct API runs, switch the untracked config to `agentRuntime.provider = "openai"` and set the API key only in the launcher session or dashboard, never in repo files.
For local HTTP-compatible runs, you can switch to `agentRuntime.provider = "ollama"` or `agentRuntime.provider = "lmstudio"` in an untracked config:
- `ollama` defaults to `http://127.0.0.1:11434/v1`
- `lmstudio` defaults to `http://127.0.0.1:1234/v1`
- both expect a locally running server and an installed model name in the provider config
- for preliminary local tests, cap `maxTokens` in the provider config so slow local models do not stall the whole run
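A local ollama setup might then look like the fragment below. The base URL and `maxTokens` cap follow the points above; the field names (`baseUrl`, `model`) and the model name are assumptions used for illustration.

```json
{
  "agentRuntime": {
    "provider": "ollama",
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "model": "example-local-model",
        "maxTokens": 1024
      }
    }
  }
}
```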
Publishable example configs:
- config/harness.config.example.json
- config/harness.config.mcp.example.json
- config/harness.config.real.example.json
- config/harness.config.triage.codex-local.example.json
Real values should stay in local untracked files such as:
- config/local/harness.local.json
- config/local/harness.mcp.local.json
- config/local/harness.real.local.json
- config/codex.mcp.local.toml
The repository is explicitly meant to stay free of:
- local paths
- real tenant identifiers
- real repository and branch names
- secret material
- real MCP bridge command lines
Do not use `.env` or `.env.local` files.
Pass credentials only through a PowerShell launcher or the local MCP dashboard.
The harness works best when the incoming work item has a clean, predictable shape.
A compact recommended template lives in ticket-handoff-template.md.
This repository is in active development. The current runtime already demonstrates the intended architecture and safety model, and it is strong enough to show serious engineering decisions around guard rails, memory, MCP integration, and human-in-the-loop recovery.
Built with AI-assisted workflows; the architecture, tradeoffs, integration, review, and validation were directed by the author.