
╔═════════════════════════════════════════════════════════════════╗
║                                                                 ║
║   ██████╗ ██████╗ ██████╗ ██╗██╗      ██████╗ ████████╗███████╗ ║
║  ██╔════╝██╔═══██╗██╔══██╗██║██║     ██╔═══██╗╚══██╔══╝╚══███╔╝ ║
║  ██║     ██║   ██║██████╔╝██║██║     ██║   ██║   ██║     ███╔╝  ║
║  ██║     ██║   ██║██╔═══╝ ██║██║     ██║   ██║   ██║    ███╔╝   ║
║  ╚██████╗╚██████╔╝██║     ██║███████╗╚██████╔╝   ██║   ███████╗ ║
║   ╚═════╝ ╚═════╝ ╚═╝     ╚═╝╚══════╝ ╚═════╝    ╚═╝   ╚══════╝ ║
║                                                                 ║
╚═════════════════════════════════════════════════════════════════╝

Copilotz

The full-stack framework for AI applications.

LLM wrappers give you chat. Copilotz gives you everything else: persistent memory, RAG, tool calling, background jobs, and multi-tenancy — in one framework.

Build AI apps, not AI infrastructure.



The Problem

Building AI features today feels like building websites in 2005.

You start with an LLM wrapper. Then you need memory — so you add Redis. Then RAG — so you add a vector database. Then your tool generates an image — now you need asset storage and a way to pass it back to the LLM. Then background jobs, multi-tenancy, tool calling, handling media, observability... Before you know it, you're maintaining infrastructure instead of building your product.

There's no Rails for AI. No Next.js. Just parts.

The Solution

Copilotz is the full-stack framework for AI applications. Everything you need to ship production AI, in one package:

What you need → what Copilotz gives you:

  • Memory — Knowledge graph that remembers users, conversations, and entities
  • RAG — Document ingestion, chunking, embeddings, and semantic search
  • Skills — SKILL.md-based instructions with progressive disclosure and a bundled native assistant
  • Tools — 27 native tools + OpenAPI integration + MCP support
  • Assets — Automatic extraction, storage, and LLM resolution of images and files
  • Background Jobs — Event queue with persistent workers and custom processors
  • Multi-tenancy — Schema isolation + namespace partitioning
  • Database — PostgreSQL (production) or PGLite (development/embedded)
  • Channels — Web (SSE), WhatsApp, and Zendesk — import and go
  • Streaming — Real-time token streaming with async iterables
  • Collections — Persist application-specific data via Copilotz's native Collections API
  • Usage & Cost — Provider-native token usage tracking plus optional OpenRouter-based cost estimation

One framework. One dependency. Production-ready.


Quick Start

Create a New Project

Scaffold a full Copilotz project with API routes, a React chat UI, and everything wired up:

deno run -Ar jsr:@copilotz/copilotz/create my-app

Then follow the prompts:

cd my-app
# Edit .env with your API keys
deno task dev           # start the API server
deno task dev:web       # start the web UI

This uses the copilotz-starter template — a minimal but complete reference app with threads, knowledge graph, assets, and a chat UI.

Add to an Existing Project

deno add jsr:@copilotz/copilotz

Interactive Mode (Fastest)

Try Copilotz instantly with an interactive chat:

import { createCopilotz } from "@copilotz/copilotz";

const copilotz = await createCopilotz({
  agents: [{
    id: "assistant",
    name: "Assistant",
    role: "assistant",
    instructions: "You are a helpful assistant. Remember what users tell you.",
    llmOptions: { provider: "openai", model: "gpt-4o-mini" },
  }],
  dbConfig: { url: ":memory:" },
});

// Start an interactive REPL — streams responses to stdout
copilotz.start({ banner: "🤖 Chat with your AI! Type 'quit' to exit.\n" });

Run it: OPENAI_API_KEY=your-key deno run --allow-net --allow-env chat.ts

llmOptions is persisted as a safe LLMConfig. If you need to inject secrets or runtime-only provider overrides without persisting them in LLM_CALL events, use security.resolveLLMRuntimeConfig in createCopilotz().
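A sketch of what that split can look like. The resolver's exact signature is an assumption, not documented API; the idea is that the persisted LLMConfig stays secret-free while the resolver attaches runtime-only fields just before each call:

```typescript
// Sketch only — the resolver signature below is an assumption.
// The persisted llmOptions carry no secrets; the resolver injects them at runtime.
const config = {
  agents: [/* ... */],
  dbConfig: { url: ":memory:" },
  security: {
    resolveLLMRuntimeConfig: (llm: { provider: string; model: string }) => ({
      ...llm,
      apiKey: "sk-loaded-from-env", // runtime-only; never written to LLM_CALL events
    }),
  },
};
```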

File-Based Resources

Organize agents, tools, and APIs in a directory structure — no giant config objects:

import { createCopilotz } from "@copilotz/copilotz";

const copilotz = await createCopilotz({
  resources: { path: "./resources" }, // Loads agents/, tools/, apis/ automatically
  dbConfig: { url: Deno.env.get("DATABASE_URL") },
});

Usage and Cost Tracking

Copilotz records provider-native LLM usage when the upstream provider exposes it, and can estimate per-call cost using OpenRouter model pricing.

  • Cost estimation is enabled by default; set llmOptions.estimateCost to false to disable it
  • Use llmOptions.pricingModelId to override the OpenRouter model id when automatic mapping is not enough
  • Cost is only estimated when usage comes from the provider, not from Copilotz's rough fallback token heuristic
  • Admin overview and admin agent summaries aggregate both token and cost totals from persisted llm_usage nodes
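The knobs above live on llmOptions. A minimal sketch — the pricingModelId value here is an illustrative OpenRouter model id, not a required one:

```typescript
// Cost-tracking knobs on llmOptions (estimateCost defaults to true).
const llmOptions = {
  provider: "openai",
  model: "gpt-4o-mini",
  estimateCost: true, // set to false to skip OpenRouter-based cost estimation
  pricingModelId: "openai/gpt-4o-mini", // override when automatic mapping falls short
};
```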

See the copilotz-starter template for a complete example.

Programmatic Mode

For applications, use run() for full control:

import { createCopilotz } from "@copilotz/copilotz";

const copilotz = await createCopilotz({
  agents: [{
    id: "assistant",
    name: "Assistant",
    role: "assistant",
    instructions: "You are a helpful assistant with a great memory.",
    llmOptions: { provider: "openai", model: "gpt-4o-mini" },
  }],
  dbConfig: { url: ":memory:" },
});

// First conversation
const result = await copilotz.run({
  content: "Hi! I'm Alex and I love hiking in the mountains.",
  sender: { type: "user", name: "Alex" },
});
await result.done;

// Later... your AI remembers
const result2 = await copilotz.run({
  content: "What do you know about me?",
  sender: { type: "user", name: "Alex" },
});
await result2.done;
// → "You're Alex, and you love hiking in the mountains!"

await copilotz.shutdown();

Why Copilotz?

Memory That Actually Works

Most AI frameworks give you chat history. Copilotz gives you a knowledge graph — users, conversations, documents, and entities all connected. Your AI doesn't just remember what was said; it understands relationships.

// Entities are extracted automatically
await copilotz.run({ content: "I work at Acme Corp as a senior engineer" });

// Later, your AI knows:
// - User: Alex
// - Organization: Acme Corp
// - Role: Senior Engineer
// - Relationship: Alex works at Acme Corp

Tools That Do Things

27 built-in tools for file operations, HTTP requests, RAG, agent memory, and more. Plus automatic tool generation from OpenAPI specs and MCP servers.

const copilotz = await createCopilotz({
  agents: [{
    // ...
    allowedTools: [
      "read_file",
      "write_file",
      "http_request",
      "search_knowledge",
    ],
  }],
  apis: [{
    id: "github",
    openApiSchema: myOpenApiSchema, // Object or JSON/YAML string
    auth: { type: "bearer", token: Deno.env.get("GITHUB_TOKEN") },
  }],
});

Multi-Tenant From Day One

Schema-level isolation for hard boundaries. Namespace-level isolation for logical partitioning. Your SaaS is ready for customers on day one.

// Each customer gets complete isolation
await copilotz.run(message, {
  schema: "tenant_acme", // PostgreSQL schema
  namespace: "workspace:123", // Logical partition
});

Assets Without the Headache

When your tool generates an image or fetches a file, what happens next? With most frameworks, you're on your own. Copilotz automatically extracts assets from tool outputs, stores them, and resolves them for vision-capable LLMs.

// Your tool just returns base64 data
const generateChart = {
  id: "generate_chart",
  execute: async ({ data }) => ({
    mimeType: "image/png",
    dataBase64: await createChart(data),
  }),
};

// Copilotz automatically:
// 1. Detects the asset in the tool output
// 2. Stores it (filesystem, S3, or memory)
// 3. Replaces it with an asset:// reference
// 4. Resolves it to a data URL for the next LLM call
// 5. Emits an ASSET_CREATED event for your hooks

Need finer control? Agents can opt out of persisting assets they generate via assetOptions.produce.persistGeneratedAssets = false, which also sanitizes inline base64/data URLs returned by their tool calls before persistence.
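The opt-out is a per-agent setting. A sketch using the key path from the note above, with agent fields following the earlier examples:

```typescript
// Opt an agent out of persisting assets its tools generate.
const agent = {
  id: "charting",
  name: "Charting",
  role: "assistant",
  instructions: "Generate charts on demand.",
  assetOptions: { produce: { persistGeneratedAssets: false } },
};
```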

Everything Is a Resource

Agents, tools, processors, LLM providers, embeddings, storage backends — they're all resources loaded through the same system. Use presets/imports to decide what loads, then override anything you need:

const copilotz = await createCopilotz({
  resources: {
    path: "./resources",
    preset: ["core", "code"],
    imports: ["channels.whatsapp", "tools.fetch_asset"],
    filterResources: (resource, type) =>
      !(type === "tool" && resource.id === "persistent_terminal"),
  },
  processors: [{ // custom event processor
    eventType: "NEW_MESSAGE",
    shouldProcess: (event) => event.payload.needsApproval,
    process: async (event, deps) => {
      return { producedEvents: [] };
    },
  }],
});

Production Infrastructure, Not Prototypes

Event-driven architecture with persistent queues. Background workers for heavy processing. Custom processors for your business logic. This is infrastructure you'd build anyway — already built.


What's Included

Skills & Native Assistant

SKILL.md files teach agents how to perform framework tasks and execution workflows. Progressive disclosure keeps prompts lean — only names and descriptions are loaded upfront; full instructions are fetched on-demand. A bundled native assistant uses these skills to help with general work and to build Copilotz projects interactively when needed.
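A minimal SKILL.md sketch. The frontmatter field names are assumptions inferred from "only names and descriptions are loaded upfront"; the tool names come from the native tools listed elsewhere in this README:

```markdown
---
name: ingest-docs
description: Ingest a folder of documents into the RAG pipeline.
---

1. List the files in the target folder with `read_file`.
2. Ingest each document into the knowledge base.
3. Confirm the chunks are searchable via `search_knowledge`.
```

Only the name and description above would land in the base prompt; the numbered steps are fetched when the agent actually invokes the skill.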

Agents

Multi-agent orchestration with persistent targets, @mentions, loop prevention, and inter-agent communication. Agents can remember learnings across conversations with persistent memory.
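A sketch of a two-agent setup. The agent shape follows the earlier createCopilotz examples; the @mention addressing in the comment mirrors the feature named above:

```typescript
// Two agents sharing a thread; one is addressed directly via @mention.
const agents = [
  {
    id: "researcher",
    name: "Researcher",
    role: "assistant",
    instructions: "Dig up facts and cite sources.",
  },
  {
    id: "writer",
    name: "Writer",
    role: "assistant",
    instructions: "Turn research notes into polished prose.",
  },
];

// Later: copilotz.run({ content: "@researcher what changed in Deno 2?", ... })
```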

Collections

Type-safe data storage on top of the knowledge graph with JSON Schema validation.
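The Collections call surface isn't shown in this README, so here is just the schema side — the object below is standard JSON Schema; how you register it (for example via a `collections` config key) is a hypothetical detail:

```typescript
// A JSON Schema describing one collection's records (standard JSON Schema;
// the registration mechanism is not shown here and is an assumption).
const taskSchema = {
  type: "object",
  required: ["title", "status"],
  properties: {
    title: { type: "string" },
    status: { enum: ["open", "done"] },
  },
};
```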

RAG Pipeline

Document ingestion → chunking → embeddings → semantic search. Works out of the box.
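To make the middle of that pipeline concrete, here is a self-contained illustration of the chunking stage — fixed-size windows with overlap so context isn't lost at boundaries. This is a generic sketch, not Copilotz's internal implementation:

```typescript
// Illustrative chunker: fixed-size, overlapping windows over the input text.
function chunk(text: string, size = 20, overlap = 5): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk would then be embedded and indexed for semantic search.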

Channels

Pre-built ingress and egress adapters for Web (SSE), WhatsApp Cloud API, and Zendesk Sunshine. The route model is ingress → runtime → egress, so you can keep same-channel flows or mix transports like /channels/web/to/zendesk.

import { withApp } from "@copilotz/copilotz/server";

const app = withApp(copilotz).app;

await app.handle({
  resource: "channels",
  method: "POST",
  path: ["whatsapp"],
  body,
  headers,
  rawBody,
});

Config defaults to env vars (WHATSAPP_*, ZENDESK_*). Built-in adapters are also exportable directly:

import {
  whatsappEgressAdapter,
  whatsappIngressAdapter,
  zendeskEgressAdapter,
  zendeskIngressAdapter,
} from "@copilotz/copilotz/server/channels";

WhatsApp and Zendesk adapters handle the full lifecycle internally — verify the webhook, parse the payload, run the agent, and push responses back to the platform API.

Streaming

Real-time token streaming with callbacks and async iterables.
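The consumption pattern looks like ordinary `for await` iteration. The mock generator below stands in for a real Copilotz stream, whose exact property name is not shown in this README:

```typescript
// Async-iterable streaming pattern (mock stream in place of a real Copilotz run).
async function* mockTokenStream(): AsyncGenerator<string> {
  for (const token of ["Build ", "AI ", "apps."]) yield token;
}

let reply = "";
for await (const token of mockTokenStream()) {
  reply += token; // e.g. render each token to the UI as it arrives
}
```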

Assets

Automatic extraction and storage of images, files, and media from tool outputs. Seamless resolution for vision LLMs.


Documentation

Getting Started

Core Concepts

  • Agents — Multi-agent configuration and communication
  • Events — Event-driven processing pipeline
  • Tools — Native tools, APIs, and MCP integration

Data Layer

  • Database — PostgreSQL, PGLite, and the knowledge graph
  • Tables Structure — Database schema reference
  • Collections — Type-safe data storage
  • RAG — Document ingestion and semantic search

Resources & Extensibility

Advanced

  • Skills — SKILL.md format, discovery, and the native assistant
  • Configuration — Full configuration reference
  • Assets — File and media storage
  • Server Helpers — Framework-independent handlers and transport routes
  • API Reference — Complete API documentation

Requirements

  • Deno 2.0+
  • PostgreSQL 13+ (production) or PGLite (development/embedded)
  • LLM API key (OpenAI, Anthropic, Gemini, Groq, DeepSeek, or Ollama)

License

MIT — see LICENSE


Stop gluing. Start shipping.
