## Problem
When deploying the server remotely with `--transport streamable-http` (e.g. on Google Cloud Run, Hetzner, AWS), there is no authentication layer. Anyone who discovers the URL has full access to the LinkedIn scraping tools.
The current workaround is security-by-obscurity: secret URL paths, IP whitelisting, or a reverse proxy with basic auth. None of these integrate with MCP clients natively.
Meanwhile, the [MCP spec (2025-06-18)](https://modelcontextprotocol.io/docs/tutorials/security/authorization) mandates OAuth 2.1 for remote servers, and clients like claude.ai, Claude Desktop, and Claude Mobile all support OAuth-based custom connectors with Dynamic Client Registration (DCR).
## Use Case
As a user deploying this server on Cloud Run alongside other MCP services, I want to add it as a custom connector in claude.ai via Settings → Connectors → Add custom connector — the same way verified connectors work. Currently this only works authless, which means the endpoint is unprotected.
## Proposed Solution
Add optional OAuth 2.1 support behind a CLI flag (e.g. `--auth oauth`), keeping the current authless mode as the default for local/stdio usage.
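As a sketch of the CLI surface (flag names, choices, and defaults below are illustrative, not a committed API):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical flags; names and defaults are a sketch, not the final API.
    parser = argparse.ArgumentParser(prog="linkedin-mcp-server")
    parser.add_argument(
        "--transport",
        choices=["stdio", "streamable-http"],
        default="stdio",
        help="MCP transport to serve",
    )
    parser.add_argument(
        "--auth",
        choices=["none", "oauth"],
        default="none",  # authless stays the default for local/stdio usage
        help="Authentication mode for remote deployments",
    )
    return parser

args = build_parser().parse_args(["--transport", "streamable-http", "--auth", "oauth"])
# args.auth == "oauth"; omitting the flag keeps auth == "none"
```

Keeping `none` as the default means existing local and stdio setups behave exactly as today.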
### Minimal viable scope
Following the [MCP auth spec](https://modelcontextprotocol.io/docs/tutorials/security/authorization) and [claude.ai connector requirements](https://support.claude.com/en/articles/11503834-building-custom-connectors-via-remote-mcp-servers):
- Protected Resource Metadata at `/.well-known/oauth-protected-resource`
- Authorization Server Metadata at `/.well-known/oauth-authorization-server`
- Dynamic Client Registration (DCR) endpoint — required by claude.ai for custom connectors
- Authorization code flow with PKCE — standard OAuth 2.1
- Token endpoint with access token issuance
- 401 → discovery flow on unauthenticated requests
- Bearer token validation on the `/mcp` endpoint
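As a sketch of what the two discovery documents might contain (field names follow RFC 8414 and RFC 9728; the issuer URL and endpoint paths are placeholders, not decided):

```python
# Sketch of the two discovery documents an MCP client fetches during the
# 401 -> discovery flow. ISSUER and the endpoint paths are placeholders.
ISSUER = "https://mcp.example.com"

def protected_resource_metadata() -> dict:
    # Served at /.well-known/oauth-protected-resource (RFC 9728)
    return {
        "resource": f"{ISSUER}/mcp",
        "authorization_servers": [ISSUER],
        "bearer_methods_supported": ["header"],
    }

def authorization_server_metadata() -> dict:
    # Served at /.well-known/oauth-authorization-server (RFC 8414)
    return {
        "issuer": ISSUER,
        "authorization_endpoint": f"{ISSUER}/authorize",
        "token_endpoint": f"{ISSUER}/token",
        "registration_endpoint": f"{ISSUER}/register",  # DCR endpoint
        "response_types_supported": ["code"],
        "grant_types_supported": ["authorization_code"],
        "code_challenge_methods_supported": ["S256"],  # PKCE
    }
```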
Claude.ai's OAuth callback URL is `https://claude.ai/api/mcp/auth_callback`.
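On the server side, the PKCE leg reduces to one check at the token endpoint: the `code_verifier` must hash (S256) to the `code_challenge` stored from the authorization request. A minimal sketch, using the test vector from RFC 7636, Appendix B:

```python
import base64
import hashlib

def s256_challenge(code_verifier: str) -> str:
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), unpadded (RFC 7636)
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_pkce(code_verifier: str, stored_challenge: str) -> bool:
    return s256_challenge(code_verifier) == stored_challenge

# Test vector from RFC 7636, Appendix B:
verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
assert s256_challenge(verifier) == "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM"
```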
### Implementation options (non-exhaustive)
- **Built-in lightweight OAuth server** — embed a minimal OAuth provider in the FastMCP/Uvicorn process. Single-user scenario: one pre-configured user, tokens stored in memory or SQLite. Keeps the "single Docker container" deployment model.
- **External auth server delegation** — support pointing to an external OAuth provider (Keycloak, Auth0, Authelia) via config. The server acts as a Resource Server only and validates tokens via introspection or JWKS. More complex, but standard.
- **FastMCP native auth** — if/when [FastMCP adds OAuth support upstream](https://github.com/modelcontextprotocol/python-sdk), leverage that directly.
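For the built-in single-user option, token issuance and validation can stay tiny. A sketch (the class and names are hypothetical; a real version would persist to SQLite rather than memory if restarts matter):

```python
import secrets
import time

class TokenStore:
    """Minimal in-memory access-token store for the single-user scenario."""

    def __init__(self, ttl_seconds: int = 3600):
        self._ttl = ttl_seconds
        self._tokens: dict[str, float] = {}  # token -> expiry (monotonic clock)

    def issue(self) -> str:
        token = secrets.token_urlsafe(32)  # opaque, unguessable bearer token
        self._tokens[token] = time.monotonic() + self._ttl
        return token

    def validate(self, token: str) -> bool:
        expiry = self._tokens.get(token)
        if expiry is None or time.monotonic() > expiry:
            self._tokens.pop(token, None)  # lazily drop expired entries
            return False
        return True

store = TokenStore()
access_token = store.issue()
# store.validate(access_token) -> True; unknown or expired tokens -> False
```

The Bearer-validation middleware on `/mcp` would then be a single `store.validate(...)` call on the `Authorization` header.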
## What NOT to change
- `--transport stdio` (local) stays unaffected — no auth needed
- `LINKEDIN_COOKIE` management is orthogonal and stays as-is
- `containerConcurrency: 1` and session serialization unchanged
## Alternatives Considered
- **Bearer token via header** — simpler, but not MCP-spec compliant; claude.ai doesn't support static Bearer tokens for custom connectors
- **mTLS** — not supported by claude.ai connectors
- **API key in URL path** — the current workaround; not discoverable by MCP clients, no standard revocation
## Additional Context
### Checklist (per CONTRIBUTING.md)

Code:

- [ ] New auth module (e.g. `auth/`)
- [ ] CLI flags: `--auth oauth`, `--oauth-issuer`, etc.
- [ ] Wire auth into `create_mcp_server()` in `server.py`

Tests:

Docs:

- [ ] `docs/docker-hub.md`
- [ ] `manifest.json`