docs(local-models): add Rapid-MLX (Apple Silicon) provider page#1722

Open
raullenchai wants to merge 1 commit into openinterpreter:main from raullenchai:docs/rapid-mlx-local-provider

@raullenchai

Summary

Adds a new local-model docs page for Rapid-MLX,
an OpenAI-compatible inference server for Apple Silicon built on Apple's
MLX framework, with streaming and native tool calling.

Why

The current local-model pages cover Ollama, LM Studio, llamafile, and
Jan.ai, but none of them are MLX-native. Rapid-MLX fills this gap for
Apple Silicon users — it's pip-installable, exposes the standard
http://localhost:8000/v1 chat completions API, and works with the
existing --api_base flag, so no Open Interpreter code changes are
needed.
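
The setup the new page documents would look roughly like this. This is a sketch, not taken from the PR's diff: the `rapid-mlx` package name is stated in the description as pip-installable, but the exact launch command and the model identifier below are assumptions to be checked against the Rapid-MLX README.

```shell
# Install the MLX-native server (package name per the PR description)
pip install rapid-mlx

# Start the server; the exact subcommand is an assumption — consult the
# Rapid-MLX docs. It should expose an OpenAI-compatible chat completions
# API at http://localhost:8000/v1.
rapid-mlx serve

# Point Open Interpreter at the local endpoint using the existing flags;
# the model name here is a placeholder, not a real Rapid-MLX model ID.
interpreter --api_base http://localhost:8000/v1 --model openai/local-model
```

Because the server speaks the standard OpenAI chat completions protocol, only the `--api_base` (and model) flags change; no Open Interpreter code is involved.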

Files

  • New: docs/language-models/local-models/rapid-mlx.mdx (modeled on
    lm-studio.mdx)
  • Updated: docs/mint.json (registers the new page in the
    Local Providers nav group, between LM Studio and Custom Endpoint)
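
The `mint.json` change presumably amounts to one new entry in the `Local Providers` navigation group, something like the sketch below. The group name and the `rapid-mlx` path come from this description; the neighboring page slugs (`lm-studio`, `custom-endpoint`) are assumptions based on the stated ordering.

```json
{
  "group": "Local Providers",
  "pages": [
    "language-models/local-models/lm-studio",
    "language-models/local-models/rapid-mlx",
    "language-models/local-models/custom-endpoint"
  ]
}
```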

Scope

Docs only. No code, no API changes.

