Description
Langflow already ships with vLLM and vLLM Embeddings components (`src/lfx/src/lfx/components/vllm/`), and the vLLM icon is already registered in the frontend icon system. However, vLLM is not included in the Model Providers list (Settings → Model Providers), which means users cannot configure a vLLM server centrally and must manually enter the API base URL in every vLLM component instance.
Current Behavior
- Model Providers list includes: OpenAI, Anthropic, Google Generative AI, Ollama, Groq, Azure OpenAI, IBM WatsonX
- vLLM components exist but require manual per-component configuration
- Users cannot select vLLM models from the Language Model / Agent component dropdown
Expected Behavior
- vLLM should appear in Settings → Model Providers alongside Ollama (both are self-hosted, OpenAI-compatible servers)
- Users configure `VLLM_API_BASE` (required) and optionally `VLLM_API_KEY` once in settings
- vLLM models are dynamically discovered from the server (like Ollama) and appear in the Language Model / Agent dropdowns
- No new LangChain dependency needed — vLLM uses the OpenAI-compatible API (`ChatOpenAI`)
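The dynamic discovery step above boils down to parsing the OpenAI-style model list a vLLM server returns. A minimal sketch, assuming the standard `GET /v1/models` response shape; the helper name and sample model ID are illustrative, not from the Langflow codebase:

```python
def list_vllm_models(payload: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style GET /v1/models response."""
    return [entry["id"] for entry in payload.get("data", [])]

# Sample payload in the shape vLLM's OpenAI-compatible endpoint returns.
sample = {
    "object": "list",
    "data": [{"id": "meta-llama/Llama-3.1-8B-Instruct", "object": "model"}],
}
print(list_vllm_models(sample))  # → ['meta-llama/Llama-3.1-8B-Instruct']
```

This mirrors how Ollama models are discovered at runtime: the dropdown is populated from whatever the configured server reports, so no static model list needs maintaining.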
Why vLLM?
vLLM is one of the most popular open-source LLM serving frameworks, widely used in enterprise and research environments for self-hosted model inference. It provides an OpenAI-compatible API endpoint, making integration straightforward. Given that Langflow already has vLLM components and icons, adding it to Model Providers is a natural and low-effort improvement.
Implementation Notes
- Add `"vLLM"` to `LIVE_MODEL_PROVIDERS` (dynamic model discovery, like Ollama)
- Add a vLLM entry to `MODEL_PROVIDER_METADATA` with `VLLM_API_BASE` and `VLLM_API_KEY` variables
- Add vLLM server validation in `validate_model_provider_key()` — a simple `GET /models` health check
- The vLLM icon (`VllmIcon`) is already registered in `eagerIconImports.ts`
- vLLM uses the `ChatOpenAI` class (already imported), so no new dependency is needed
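For illustration, the new provider entry might look roughly like the sketch below. The actual schema of `MODEL_PROVIDER_METADATA` is defined in `model_metadata.py` and may differ, so the field names here are assumptions:

```python
# Hypothetical sketch of a vLLM provider entry; the real
# MODEL_PROVIDER_METADATA schema in model_metadata.py may differ.
VLLM_PROVIDER_ENTRY = {
    "provider": "vLLM",
    "icon": "Vllm",
    "variables": {
        "VLLM_API_BASE": {"required": True},   # server URL, configured once
        "VLLM_API_KEY": {"required": False},   # optional; vLLM can run keyless
    },
    "live_discovery": True,  # models fetched from the server, like Ollama
}

required = [
    name
    for name, meta in VLLM_PROVIDER_ENTRY["variables"].items()
    if meta["required"]
]
print(required)  # → ['VLLM_API_BASE']
```

The key design point is that only `VLLM_API_BASE` is mandatory: a local vLLM server started without `--api-key` accepts unauthenticated requests.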
Files to Modify
- `src/lfx/src/lfx/base/models/model_metadata.py` — Add to `LIVE_MODEL_PROVIDERS` and `MODEL_PROVIDER_METADATA`
- `src/lfx/src/lfx/base/models/unified_models/credentials.py` — Add vLLM validation logic