feat: add vLLM as a default Model Provider #12771

@YeonghyeonKO

Description

Langflow already ships with vLLM and vLLM Embeddings components (src/lfx/src/lfx/components/vllm/), and the vLLM icon is registered in the frontend icon system. However, vLLM is not included in the Model Providers list (Settings → Model Providers), so users cannot configure a vLLM server centrally and must enter the API base URL manually in every vLLM component instance.

Current Behavior

  • Model Providers list includes: OpenAI, Anthropic, Google Generative AI, Ollama, Groq, Azure OpenAI, IBM WatsonX
  • vLLM components exist but require manual per-component configuration
  • Users cannot select vLLM models from the Language Model / Agent component dropdown

Expected Behavior

  • vLLM should appear in Settings → Model Providers alongside Ollama (both are self-hosted, OpenAI-compatible servers)
  • Users configure VLLM_API_BASE (required) and optionally VLLM_API_KEY once in settings
  • vLLM models are dynamically discovered from the server (like Ollama) and appear in the Language Model / Agent dropdowns
  • No new LangChain dependency needed — vLLM uses the OpenAI-compatible API (ChatOpenAI)
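Because vLLM speaks the OpenAI API, the provider entry is mostly metadata. A rough sketch of what such an entry could look like (the field names below are hypothetical, not Langflow's actual MODEL_PROVIDER_METADATA schema):

```python
# Hypothetical shape for a vLLM provider entry; the real schema in
# model_metadata.py may use different keys or dataclasses.
VLLM_PROVIDER = {
    "name": "vLLM",
    "icon": "VllmIcon",  # already registered in eagerIconImports.ts
    "variables": {
        "VLLM_API_BASE": {"required": True},   # e.g. http://localhost:8000/v1
        "VLLM_API_KEY": {"required": False},   # vLLM servers often run without auth
    },
    "live_discovery": True,  # models listed from the server at runtime, like Ollama
}
```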

Why vLLM?

vLLM is one of the most popular open-source LLM serving frameworks, widely used in enterprise and research environments for self-hosted model inference. It provides an OpenAI-compatible API endpoint, making integration straightforward. Given that Langflow already has vLLM components and icons, adding it to Model Providers is a natural and low-effort improvement.

Implementation Notes

  • Add "vLLM" to LIVE_MODEL_PROVIDERS (dynamic model discovery, like Ollama)
  • Add vLLM entry to MODEL_PROVIDER_METADATA with VLLM_API_BASE and VLLM_API_KEY variables
  • Add vLLM server validation in validate_model_provider_key() — simple GET /models health check
  • vLLM icon (VllmIcon) is already registered in eagerIconImports.ts
  • vLLM uses ChatOpenAI class (already imported), so no new dependency

Files to Modify

  1. src/lfx/src/lfx/base/models/model_metadata.py — Add to LIVE_MODEL_PROVIDERS and MODEL_PROVIDER_METADATA
  2. src/lfx/src/lfx/base/models/unified_models/credentials.py — Add vLLM validation logic
