diff --git a/nginx.conf b/nginx.conf index 91d75be021ee..64cbd13ed8e7 100644 --- a/nginx.conf +++ b/nginx.conf @@ -173,6 +173,7 @@ http { rewrite ^/docs/en/token-api/mcp/cline/$ $scheme://$http_host/docs/en/ai-suite/token-api-mcp/ permanent; rewrite ^/docs/en/token-api/mcp/cline/$ $scheme://$http_host/docs/en/ai-suite/token-api-mcp/ permanent; rewrite ^/docs/en/token-api/endpoint-pricing/$ $scheme://$http_host/docs/en/token-api/endpoints/pricing/ permanent; + rewrite ^/docs/en/ai-suite/token-api-skills/quick-setup$ $scheme://$http_host/docs/en/token-api/skills/ permanent; # Temporary redirects (302) rewrite ^/docs/en/querying/graph-client/$ $scheme://$http_host/docs/en/subgraphs/querying/graph-client/README/ redirect; rewrite ^/docs/en/developing/graph-ts/$ $scheme://$http_host/docs/en/subgraphs/developing/creating/graph-ts/README/ redirect; diff --git a/website/.Rhistory b/website/.Rhistory new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/website/src/pages/en/ai-suite/_meta-titles.json b/website/src/pages/en/ai-suite/_meta-titles.json index 524bebbb3902..c7d94b5ce928 100644 --- a/website/src/pages/en/ai-suite/_meta-titles.json +++ b/website/src/pages/en/ai-suite/_meta-titles.json @@ -2,5 +2,7 @@ "ai-introduction": "The Graph's AI", "subgraph-mcp": "Subgraph MCP", "token-api-mcp": "Token API MCP", - "token-api-skills": "Token API Skills" + "subgraph-skills": "Subgraph Skills", + "token-api-skills": "Token API Skills", + "substreams-skills": "Substreams Skills" } diff --git a/website/src/pages/en/ai-suite/_meta.js b/website/src/pages/en/ai-suite/_meta.js index ec74610b8c37..329c1d0b8faa 100644 --- a/website/src/pages/en/ai-suite/_meta.js +++ b/website/src/pages/en/ai-suite/_meta.js @@ -4,5 +4,7 @@ export default { 'ai-introduction': titles['ai-introduction'], 'subgraph-mcp': titles['subgraph-mcp'], 'token-api-mcp': titles['token-api-mcp'], + 'subgraph-skills': titles['subgraph-skills'], 'token-api-skills': titles['token-api-skills'], + 
'substreams-skills': titles['substreams-skills'], } diff --git a/website/src/pages/en/ai-suite/subgraph-skills.mdx b/website/src/pages/en/ai-suite/subgraph-skills.mdx new file mode 100644 index 000000000000..cd9ec3216ade --- /dev/null +++ b/website/src/pages/en/ai-suite/subgraph-skills.mdx @@ -0,0 +1,119 @@ +--- +title: Agent Skills for Subgraphs +sidebarTitle: Subgraph Skills +--- + +A collection of AI agent skills providing expert knowledge for developing, testing, and deploying subgraphs with The Graph protocol. + +## Overview + +This repository provides subgraph development expertise for AI coding assistants in **two formats**: + +| Format | Location | Use With | +| ---------------------- | ----------- | ------------------- | +| **Claude Code Plugin** | `skills/` | Claude Code CLI | +| **OpenClaw Skills** | `openclaw/` | OpenClaw / Clawdbot | + +Same knowledge, different agent platforms. + +## Skills + +### Subgraph Development + +Core development knowledge including: + +- Schema design and GraphQL types +- Manifest configuration (subgraph.yaml) +- AssemblyScript mapping handlers +- Data source templates +- Contract bindings and calls +- **Subgraph Composition** - Combine multiple subgraphs +- **Common Patterns** - ERC20, DEX, NFT, Lending, Staking, Governance +- Subgraph Uncrashable (safe code generation) +- Deployment workflows + +### Subgraph Optimization + +Performance best practices from The Graph docs: + +- Pruning with indexerHints +- Arrays with @derivedFrom +- Immutable entities and Bytes as IDs +- Avoiding eth_calls +- Timeseries and aggregations +- Grafting for hotfixes + +### Subgraph Testing + +Quality assurance with Matchstick and Subgraph Linter: + +- **Subgraph Linter** - Static analysis to catch bugs before runtime +- **Common Errors** - Troubleshooting guide for indexing issues +- Unit testing setup and patterns with Matchstick +- Mock events and contract calls +- Entity assertions +- Data source mocking +- CI/CD integration + +## Installation 
+ +### Claude Code + +```bash +# Add as a Claude Code plugin +claude plugins add PaulieB14/subgraphs-skills +``` + +### OpenClaw / Clawdbot + +```bash +# Copy skills to OpenClaw directory +cp -r openclaw/subgraph-* ~/.openclaw/skills/ + +# Or via ClawHub (when published) +clawdbot skill install subgraph-dev +clawdbot skill install subgraph-optimization +clawdbot skill install subgraph-testing +``` + +## Usage Examples + +Once installed, the AI assistant will have access to subgraph development expertise: + +**Schema Design:** + +> "Create a schema for tracking DEX swaps with proper relationships" + +**Optimization:** + +> "How do I optimize my subgraph for faster indexing?" + +**Testing:** + +> "Write unit tests for my Transfer event handler" + +## Resources + +- [The Graph Documentation](https://thegraph.com/docs/) +- [Subgraph Best Practices](https://thegraph.com/docs/en/subgraphs/best-practices/pruning/) +- [Subgraph Composition](https://thegraph.com/docs/en/subgraphs/guides/subgraph-composition/) - Combine multiple subgraphs +- [Subgraph Linter](https://thegraph.com/docs/en/subgraphs/developing/subgraph-linter/) - Static analysis tool +- [Subgraph Uncrashable](https://thegraph.com/docs/en/subgraphs/developing/subgraph-uncrashable/) - Safe code generation +- [Matchstick Testing Framework](https://thegraph.com/docs/en/subgraphs/developing/creating/unit-testing-framework/) +- [AssemblyScript API](https://thegraph.com/docs/en/subgraphs/developing/creating/assemblyscript-api/) + +## Platforms + +This skill pack works with: + +| Platform | Description | Link | +| --- | --- | --- | +| **Claude Code** | Anthropic's official CLI for Claude | [claude.ai/claude-code](https://claude.ai/claude-code) | +| **OpenClaw** | Open-source AI agent framework | [github.com/openclaw/openclaw](https://github.com/openclaw/openclaw) | + +## Acknowledgments + +- Built with [Claude](https://claude.ai) (Anthropic's AI assistant) +- Subgraph expertise from [The Graph 
Documentation](https://thegraph.com/docs/) +- Inspired by [AGENTS.md](https://github.com/agentsmd/agents.md) format +- OpenClaw format based on [substreams-skills](https://github.com/streamingfast/substreams-skills) diff --git a/website/src/pages/en/ai-suite/substreams-mcp/search.mdx b/website/src/pages/en/ai-suite/substreams-mcp/search.mdx new file mode 100644 index 000000000000..f3e41a0fc583 --- /dev/null +++ b/website/src/pages/en/ai-suite/substreams-mcp/search.mdx @@ -0,0 +1,138 @@ +--- +title: Substreams Search MCP +sidebarTitle: Substreams Search +--- + +MCP server that lets AI agents search, inspect, and analyze [Substreams](https://substreams.dev) packages — from registry discovery to sink deployment. Supports dual transport: stdio for local clients and SSE/HTTP for remote agents (OpenClaw, custom frameworks). + +## Tools + +### Search Substreams + +`search_substreams`: Search the substreams.dev package registry. + +| Parameter | Type | Default | Description | +| --- | --- | --- | --- | +| `query` | string (required) | — | Search term, e.g. `"solana dex"` or `"uniswap"` | +| `sort` | string | `"most_downloaded"` | `most_downloaded`, `alphabetical`, `most_used`, `last_uploaded` | +| `network` | string | — | Filter by chain: `ethereum`, `solana`, `arbitrum-one`, etc. | +
+Returns package name, URL, creator, network, version, published date, and download count. + +### Inspect Package + +`inspect_package`: Inspect a Substreams package (.spkg) to see its full module graph, protobuf types, and metadata.
+ +| Parameter | Type | Description | +| --------- | ----------------- | ---------------------------- | +| `url` | string (required) | Direct URL to a `.spkg` file | + +Returns: + +- Package metadata (name, version, documentation, network) +- All modules with their kind (map/store/blockIndex), output types, and update policies +- Full DAG: each module's `dependsOn` and `dependedBy` relationships +- Input chain for each module (source blocks, other maps, stores with get/deltas mode, params) +- List of all protobuf output types and proto files +- Mermaid diagram of the module graph + +### List Package Modules + +`list_package_modules`: Lightweight alternative to `inspect_package` — just the module names, types, and inputs/outputs. + +| Parameter | Type | Description | +| --------- | ----------------- | ---------------------------- | +| `url` | string (required) | Direct URL to a `.spkg` file | + +### Get Sink Configuration + +`get_sink_config`: Analyze a package's sink configuration and generate ready-to-run CLI commands. + +| Parameter | Type | Description | +| --------- | ----------------- | ---------------------------- | +| `url` | string (required) | Direct URL to a `.spkg` file | + +Returns one of three results: + +- **`sink_configured`** — Package has an embedded sink config. Extracts the SQL schema (for SQL sinks), identifies the sink module and type, and generates `install`, `setup`, and `run` commands with the correct network endpoint. +- **`no_sink_config_but_compatible_modules_found`** — No embedded config, but modules output sink-compatible types (e.g. `DatabaseChanges`). Identifies them and suggests how to wire up sinking. +- **`no_sink_support`** — No sink-compatible modules. Lists all module output types so you know what custom consumer you'd need. 
+ +## Workflow + +``` +search_substreams("uniswap", network: "polygon") + → find package, get spkg.io URL + +inspect_package("https://spkg.io/creator/package-v1.0.0.spkg") + → see module DAG, output types, what it produces + +get_sink_config("https://spkg.io/creator/package-v1.0.0.spkg") + → get SQL schema + CLI commands to deploy +``` + +## Quick Start (npx) + +No installation needed: + +### Claude Desktop / Cursor / Claude Code + +Add to your MCP config (`claude_desktop_config.json`, `~/.cursor/mcp.json`, or `~/.claude/mcp.json`): + +```json +{ + "mcpServers": { + "substreams-search": { + "command": "npx", + "args": ["substreams-search-mcp"] + } + } +} +``` + +### OpenClaw / Remote Agents (SSE) + +Start the server with the HTTP transport: + +```bash +# Dual transport — stdio + SSE on port 3849 +npx substreams-search-mcp --http + +# SSE only (for remote/server deployments) +npx substreams-search-mcp --http-only + +# Custom port +MCP_HTTP_PORT=4000 npx substreams-search-mcp --http +``` + +Then point your agent at the SSE endpoint: + +```json +{ + "mcpServers": { + "substreams-search": { + "url": "http://localhost:3849/sse" + } + } +} +``` + +### Transport Modes + +| Invocation | Transports | Use case | +| --------------------------------------- | ----------------- | ----------------------------------- | +| `npx substreams-search-mcp` | stdio | Claude Desktop, Cursor, Claude Code | +| `npx substreams-search-mcp --http` | stdio + SSE :3849 | Dual — local + remote agents | +| `npx substreams-search-mcp --http-only` | SSE :3849 | OpenClaw, remote deployments | + +A `/health` endpoint is available at `http://localhost:3849/health` when HTTP transport is active. + +## How it works + +- **Search**: The substreams.dev registry has no public API. This server scrapes the package listing pages, paginates through all results, deduplicates, and returns structured JSON. Multi-word queries search for the first word server-side and filter the rest client-side. 
+- **Inspect**: Uses [`@substreams/core`](https://github.com/substreams-js/substreams-js) to fetch and parse `.spkg` files (protobuf-encoded Substreams packages), extracting module definitions, DAG relationships, and proto type information. +- **Sink config**: Reads the embedded `sinkConfig` (a `google.protobuf.Any` field) from the package, decodes it based on the type URL, and maps networks to Substreams endpoints for correct CLI commands. + +## Acknowledgments + +Thanks to PaulieB14 for creating the [Substreams Search MCP](https://github.com/PaulieB14/substreams-search-mcp). diff --git a/website/src/pages/en/ai-suite/substreams-skills.mdx b/website/src/pages/en/ai-suite/substreams-skills.mdx new file mode 100644 index 000000000000..6f742f896eca --- /dev/null +++ b/website/src/pages/en/ai-suite/substreams-skills.mdx @@ -0,0 +1,89 @@ +--- +title: Agent Skills for Substreams +sidebarTitle: Substreams Skills +--- + +AI coding assistants can be enhanced with specialized Substreams expertise through agent skills. These open-source knowledge packages give AI assistants deep understanding of Substreams development patterns, best practices, and debugging techniques.
+ +## Available Skills + +### Substreams Development (`substreams-dev`) + +Expert knowledge for developing, building, and debugging Substreams projects on any blockchain: + +- Creating and configuring `substreams.yaml` manifests +- Writing efficient Rust modules (map, store, index types) +- Protobuf schema design and code generation +- Performance optimization and avoiding excessive cloning +- Debugging and troubleshooting common issues + +### Substreams SQL (`substreams-sql`) + +Expert knowledge for building SQL database sinks from Substreams data: + +- **Database Changes (CDC)** - Stream individual row changes for real-time consistency +- **Relational Mappings** - Transform data into normalized tables with proper relationships +- **PostgreSQL** - Advanced patterns, indexing strategies, and performance optimization +- **ClickHouse** - Analytics-optimized schemas, materialized views, and time-series patterns +- **Schema Design** - Best practices for blockchain data modeling + +### Substreams Testing (`substreams-testing`) + +Expert knowledge for testing Substreams applications at all levels: + +- **Unit Testing** - Testing individual functions with real blockchain data +- **Integration Testing** - End-to-end workflows with real block processing +- **Performance Testing** - Benchmarking, memory profiling, and production mode validation +- **FireCore Tools** - Using Firehose, StreamingFast API, and testing utilities +- **CI/CD Integration** - Automated testing pipelines and regression detection + +## Installation + +### Claude Code + +Install the plugin from the marketplace: + +```bash +claude plugin marketplace add https://github.com/streamingfast/substreams-skills +``` + +Then enable the skills: + +1. Run `/plugin` to open the plugin manager +2. Go to the **Discover** tab +3. Find and install the `substreams-dev` plugin (which pulls all defined skills automatically) +4. 
Restart Claude instance(s) for skills to be discovered + +After installation, Claude automatically uses Substreams expertise when working on relevant projects. + +**Alternative: Local Development** + +Clone and load directly without installing from the marketplace: + +```bash +git clone https://github.com/streamingfast/substreams-skills.git +claude --plugin-dir ./substreams-skills +``` + +### Cursor + +Clone the repository and add the skill directory path in Cursor settings: + +``` +~/substreams-skills/skills/substreams-dev +``` + +### VS Code + +VS Code 1.107+ supports Claude Skills as an experimental feature: + +1. Enable the experimental feature in settings +2. Add skill paths to your configuration +3. Skills will be available to Claude in VS Code + +See the [VS Code 1.107 release notes](https://code.visualstudio.com/updates/v1_107#_reuse-your-claude-skills-experimental) for details. + +## Resources + +- [Substreams Skills Repository](https://github.com/streamingfast/substreams-skills) +- [Claude Code Plugins Documentation](https://docs.anthropic.com/en/docs/claude-code/plugins) diff --git a/website/src/pages/en/ai-suite/token-api-skills/quick-setup.mdx b/website/src/pages/en/ai-suite/token-api-skills.mdx similarity index 94% rename from website/src/pages/en/ai-suite/token-api-skills/quick-setup.mdx rename to website/src/pages/en/ai-suite/token-api-skills.mdx index b12833b6531a..cfdf825a533d 100644 --- a/website/src/pages/en/ai-suite/token-api-skills/quick-setup.mdx +++ b/website/src/pages/en/ai-suite/token-api-skills.mdx @@ -1,5 +1,6 @@ --- -title: Quick Setup +title: Agent Skills for Token API +sidebarTitle: Token API Skills --- ## Quick Setup diff --git a/website/src/pages/en/ai-suite/token-api-skills/_meta.js b/website/src/pages/en/ai-suite/token-api-skills/_meta.js deleted file mode 100644 index 2bce33552b7c..000000000000 --- a/website/src/pages/en/ai-suite/token-api-skills/_meta.js +++ /dev/null @@ -1,3 +0,0 @@ -export default { - 'quick-setup': '', -} 
diff --git a/website/src/pages/en/subgraphs/guides/_meta.js b/website/src/pages/en/subgraphs/guides/_meta.js index f187cf82b68f..cadb6a5fe392 100644 --- a/website/src/pages/en/subgraphs/guides/_meta.js +++ b/website/src/pages/en/subgraphs/guides/_meta.js @@ -10,5 +10,6 @@ export default { 'secure-api-keys-nextjs': '', polymarket: '', agent0: '', + 'x402-payments': '', 'contract-analyzer': '', } diff --git a/website/src/pages/en/subgraphs/guides/x402-payments.mdx b/website/src/pages/en/subgraphs/guides/x402-payments.mdx new file mode 100644 index 000000000000..d696c104bd58 --- /dev/null +++ b/website/src/pages/en/subgraphs/guides/x402-payments.mdx @@ -0,0 +1,147 @@ +--- +title: Using x402 to Pay for Subgraph Data on The Graph Network +sidebarTitle: x402 Subgraph Payments +--- + +Pay-per-query access to Subgraphs on The Graph Network using the x402 payment protocol. + +The Graph's Subgraph Gateways accept x402 payments for per-query access on The Graph Network. Agents and applications can pay in USDC over HTTP without an API key. + +## Overview + +The Graph's x402 Subgraph endpoints enable: + +- **Per-query access** to any Subgraph published on The Graph Network +- **USDC payments** on Base (mainnet) and Base Sepolia (testnet) +- **No API keys, accounts, or sessions**: payment and access happen in a single HTTP round trip +- **Compatible with any x402 client**, with first-class support via `@graphprotocol/client-x402` + +The existing API-key endpoints continue to work unchanged; x402 is an additional access path under `/api/x402/...`. + +## How It Works + +1. The client sends a GraphQL query to an `/api/x402/...` endpoint. +2. The Gateway responds with `402 Payment Required` and payment requirements (amount, network, asset, recipient). +3. The client signs a USDC payment payload and retries the request with the payment header. +4. The Gateway verifies the payment via a facilitator and returns the query result.
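The four-step exchange above can be sketched as client-side logic. This is a minimal illustration of the protocol shape, not the `@graphprotocol/client-x402` implementation; the `signPayment` helper, the `X-PAYMENT` header name, and the 402 body shape are assumptions based on the x402 spec:

```typescript
// Sketch of the x402 round trip. `signPayment` is a hypothetical helper
// standing in for wallet-side signing of the USDC payment payload.
type SignPayment = (requirements: unknown) => Promise<string>

async function payAndQuery(
  endpoint: string,
  query: string,
  signPayment: SignPayment,
  fetchFn: typeof fetch = fetch,
): Promise<unknown> {
  const body = JSON.stringify({ query })
  const headers: Record<string, string> = { 'Content-Type': 'application/json' }

  // 1. Send the GraphQL query without payment.
  let res = await fetchFn(endpoint, { method: 'POST', headers, body })

  if (res.status === 402) {
    // 2. Read the payment requirements from the 402 response.
    const requirements = await res.json()
    // 3. Sign a payment payload and retry with the payment header attached.
    const payment = await signPayment(requirements)
    res = await fetchFn(endpoint, {
      method: 'POST',
      headers: { ...headers, 'X-PAYMENT': payment },
      body,
    })
  }

  // 4. The Gateway verifies the payment and returns the query result.
  return res.json()
}
```

In practice the official client wraps this handshake for you; the sketch only shows why a single logical query can involve two HTTP requests.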
+ +## Network Environments + +| Environment | Base URL | Payment Network | USDC Token Address | +| --- | --- | --- | --- | +| Mainnet | `https://gateway.thegraph.com` | Base | `0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913` | +| Testnet | `https://testnet.gateway.thegraph.com` | Base Sepolia | `0x036CbD53842c5426634e7929541eC2318f3dCF7e` | + +## Options for Accessing the API + +### Option 1: API Key (Best for Humans) + +Get an API key from [Subgraph Studio](https://thegraph.com/studio) and include it in requests. + +**Endpoints:** + +- `POST /api/subgraphs/id/{subgraph_id}` +- `POST /api/deployments/id/{deployment_id}` + +**Header:** `Authorization: Bearer {api_key}` + +### Option 2: x402 Payment (Best for Agents) + +Pay per query with USDC. No API key required. + +**Endpoints:** + +- `POST /api/x402/subgraphs/id/{subgraph_id}` +- `POST /api/x402/deployments/id/{deployment_id}` + +## When to Use x402 + +x402 is well suited to: + +- **Autonomous agents** and short-lived processes that can't store long-term credentials +- **Per-query workloads** where pre-purchased credits don't fit the access pattern +- **Integrations that prefer HTTP-native payment** over account creation and key management + +For sustained, high-volume application use, the existing API-key flow remains the recommended path. + +## Minimal Examples + +### Using API Key + +```bash +curl -X POST https://gateway.thegraph.com/api/subgraphs/id/5zvR82QoaXYFyDEKLZ9t6v9adgnptxYpKpSbxtgVENFV \ + -H "Authorization: Bearer YOUR_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{"query": "{ tokens(first: 5) { symbol } }"}' +``` + +### Using x402 Payment + +Any x402 tooling that supports the `exact` payment scheme will work with the gateway's x402 endpoints. We recommend using the official Graph x402 client: + +```bash +npm install @graphprotocol/client-x402 +``` + +#### Option A: Command Line + +```bash +export X402_PRIVATE_KEY=0xabc123...
+ +npx graphclient-x402 "{ pairs(first: 5) { id } }" \ + --endpoint https://gateway.thegraph.com/api/x402/subgraphs/id/ \ + --chain base +``` + +#### Option B: Programmatic + +```typescript +import { createGraphQuery } from '@graphprotocol/client-x402' + +const query = createGraphQuery({ + endpoint: 'https://gateway.thegraph.com/api/x402/subgraphs/id/', + chain: 'base', +}) + +const result = await query('{ pairs(first: 5) { id } }') +``` + +#### Option C: Typed SDK (full type safety) + +```bash +npm install @graphprotocol/client-cli @graphprotocol/client-x402 +``` + +Configure `.graphclientrc.yml`: + +```yaml +customFetch: '@graphprotocol/client-x402' + +sources: + - name: uniswap + handler: + graphql: + endpoint: https://gateway.thegraph.com/api/x402/subgraphs/id/ + +documents: + - ./src/queries/*.graphql +``` + +Build and use: + +```bash +export X402_PRIVATE_KEY=0xabc123... +export X402_CHAIN=base +npx graphclient build +``` + +```typescript +import { execute, GetPairsDocument } from './.graphclient' + +const result = await execute(GetPairsDocument, { first: 5 }) +``` + +### Environment Variables + +- `X402_PRIVATE_KEY`: Wallet private key for payment signing +- `X402_CHAIN`: `base` (mainnet) or `base-sepolia` (testnet) diff --git a/website/src/pages/en/token-api/guides/_meta.js b/website/src/pages/en/token-api/guides/_meta.js index a3bef99e3420..d247cd6ba936 100644 --- a/website/src/pages/en/token-api/guides/_meta.js +++ b/website/src/pages/en/token-api/guides/_meta.js @@ -1,3 +1,4 @@ export default { gpt: '', + polymarket: '', } diff --git a/website/src/pages/en/token-api/guides/polymarket.mdx b/website/src/pages/en/token-api/guides/polymarket.mdx new file mode 100644 index 000000000000..4f790ad5f656 --- /dev/null +++ b/website/src/pages/en/token-api/guides/polymarket.mdx @@ -0,0 +1,71 @@ +--- +title: Polymarket Data Overview +sidebarTitle: Polymarket Data +--- + +The Graph's Token API provides a complete set of REST endpoints for querying Polymarket's on-chain 
prediction market data. These endpoints are purpose-built for **research, analytics, and historical analysis** — covering markets, platform-wide aggregates, and individual user performance. + +## Why Use The Graph for Polymarket Data + +Polymarket already exposes its own API for real-time order flow and WebSocket feeds optimized for low-latency trading. The Graph's Polymarket endpoints serve a different purpose: they give researchers, analysts, and builders structured access to **aggregated, historical, and cross-market data** that Polymarket's native API doesn't surface. + +### What Users Get with The Graph's Token API + +- **Platform-wide aggregates:** Daily or weekly volume, open interest, fees, and split/merge activity rolled up across every Polymarket market. Track the platform's growth trajectory, not just individual markets. +- **User leaderboards and PnL breakdowns:** Paginated rankings by volume, realized PnL, unrealized PnL, or trade count. Drill into any address to see per-token position history, cost basis, and entry prices. +- **Market-level open interest time-series:** Track collateral flows (splits, merges, redemptions) over time for any market. Understand conviction shifts beyond price alone. +- **On-chain activity feeds:** Chronological trade, split, merge, and redemption logs with transaction hashes and block numbers for full auditability. +- **OHLCV price history:** Standard candlestick data for any outcome token, ready for charting and backtesting. + +> **A note on latency:** If you need sub-second order book data or execution, Polymarket's native WebSocket endpoints are the right tool. The Graph's Polymarket endpoints are optimized for structured queries over historical and aggregated data, which feeds dashboards, research reports, and strategy backtests. + +## Getting Started + +All Polymarket endpoints are available under the Token API base URL: + +``` +https://token-api.thegraph.com/v1/polymarket/ +``` + +You'll need a Token API key. 
See the [Quick Start guide](/token-api/quick-start/) to create one. + +- [Market Endpoints](/token-api/polymarket-markets/markets/) +- [Platform Endpoints](/token-api/polymarket-platform/platform/) +- [User Endpoints](/token-api/polymarket-users/users/) + +You can also set up the [Token API MCP](/ai-suite/token-api-mcp/introduction/) to query Polymarket in your AI assistant. + +### Polymarket Endpoints at a Glance + +| Group | Endpoint | What it returns | +| --- | --- | --- | +| **Markets** | `/v1/polymarket/markets` | Market metadata (question, outcomes, token IDs, volume, status) | +| **Markets** | `/v1/polymarket/markets/ohlcv` | OHLCV candlestick data per outcome token | +| **Markets** | `/v1/polymarket/markets/open-interest` | Open interest time-series (splits, merges, redemptions) | +| **Markets** | `/v1/polymarket/markets/activity` | On-chain trade and position activity feed | +| **Markets** | `/v1/polymarket/markets/positions` | All user positions for a given outcome token (leaderboard) | +| **Platform** | `/v1/polymarket/platform` | Platform-wide volume, OI, fees, and trade aggregates | +| **Users** | `/v1/polymarket/users` | User stats and leaderboard (volume, PnL, trade counts) | +| **Users** | `/v1/polymarket/users/positions` | Per-user position breakdown with cost basis and PnL | + +## Recommended Use Cases + +### 1. Portfolio review and self-analysis + +Use the User Lookup and User Positions endpoints to audit your own Polymarket account. See total PnL (realized and unrealized), review every position's cost basis and entry price, and identify which markets drove gains or losses. This is the fastest path to a personal Polymarket PnL statement. + +### 2. Leaderboard research and strategy mirroring + +Use User Lookup without an address to pull the global leaderboard ranked by PnL, volume, or trade count.
Identify top-performing wallets, then drill into their positions via User Positions to study what they're buying, their sizing patterns, and where they're concentrated. Combine with Market OHLCV to backtest whether mirroring their entries would have been profitable. + +### 3. Market-level conviction analysis + +Use Market Open Interest alongside OHLCV to separate price moves from real capital commitment. A price spike with flat open interest is noise. A price move backed by rising OI (new splits) signals genuine conviction. Market Activity gives you the raw on-chain logs to verify. + +### 4. Platform health and trend monitoring + +Use Platform Aggregates to track Polymarket's overall trajectory, including daily volume trends, total open interest growth, fee revenue, and the split-to-merge ratio. This is essential context for anyone writing about prediction markets, building analytics dashboards, or evaluating Polymarket as a platform. + +### 5. AI-assisted research with MCP tools + +Pair these endpoints with The Graph's Token API MCP server to let AI agents query Polymarket data conversationally. Ask questions like "Who are the top 10 traders by PnL this week?" or "What was the open interest trend on the 2026 election market?" and get structured answers grounded in on-chain data.
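As a minimal sketch of what a query against these endpoints looks like, the snippet below builds a request URL against the Token API base URL from "Getting Started" and attaches the API key as a bearer token. The query-parameter names (`orderBy`, `limit`) are illustrative assumptions, not confirmed parameter names; check the endpoint references linked above for the exact ones.

```typescript
// Build and issue a Polymarket Token API request. The query-parameter
// names used here are illustrative assumptions.
const BASE_URL = 'https://token-api.thegraph.com/v1/polymarket'

function buildPolymarketUrl(path: string, params: Record<string, string | number>): string {
  const url = new URL(`${BASE_URL}/${path}`)
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value))
  }
  return url.toString()
}

async function fetchLeaderboard(apiKey: string): Promise<unknown> {
  // Hypothetical parameters: rank users by PnL, return the top 10.
  const url = buildPolymarketUrl('users', { orderBy: 'pnl', limit: 10 })
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  })
  if (!res.ok) throw new Error(`Token API request failed: ${res.status}`)
  return res.json()
}
```

The same URL-building pattern applies to every endpoint in the table above; only the path segment and parameters change.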