feat: add AI / Local LLM scan category #14

@bandrel

Summary

Add a new AI / Local LLM scan category covering default ports used by popular local AI inference servers. These services are increasingly deployed on developer workstations and internal networks—often with no authentication—making them high-value targets during assessments.

Proposed ports

| Service | Port(s) | Protocol |
| --- | --- | --- |
| Ollama | 11434 | HTTP |
| LM Studio | 1234 | HTTP |
| llama.cpp server | 8080 | HTTP |
| text-generation-webui (Gradio UI) | 7860 | HTTP |
| text-generation-webui (API) | 5000 | HTTP |
| vLLM | 8000 | HTTP |
| Jan | 1337 | HTTP |
| Open WebUI | 8080 / 3000 | HTTP |
| KoboldCpp | 5001 | HTTP |
| Tabby | 8080 | HTTP |

Proposed changes

  • SERVICE_CATEGORIES — new AI / Local LLM key with all ports above
  • EXTERNAL_SENSITIVE_PORTS — add 11434, 1234, 7860, 5001, and 1337 at HIGH severity (ports already covered by Web findings rules are skipped)
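A minimal sketch of the two additions above. The actual shapes of `SERVICE_CATEGORIES` and `EXTERNAL_SENSITIVE_PORTS` in the codebase are assumptions here (category name → port list, and port → severity string); the port numbers and severities come from the table and bullets above.

```python
# Sketch only: real structures in the scanner may differ.
# Ports for the new category, taken from the table above.
AI_LOCAL_LLM_PORTS = [11434, 1234, 8080, 7860, 5000, 8000, 1337, 3000, 5001]

SERVICE_CATEGORIES = {
    # ...existing categories elided...
    "AI / Local LLM": AI_LOCAL_LLM_PORTS,
}

# Only ports not already covered by existing Web findings rules are added,
# each at HIGH severity (so 8080, 8000, 5000, 3000 are skipped).
EXTERNAL_SENSITIVE_PORTS = {
    # ...existing entries elided...
    11434: "HIGH",  # Ollama
    1234: "HIGH",   # LM Studio
    7860: "HIGH",   # text-generation-webui (Gradio UI)
    5001: "HIGH",   # KoboldCpp
    1337: "HIGH",   # Jan
}
```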

Usage

```json
{ "scan_categories": ["AI / Local LLM"] }
```
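Once the category exists, reporting could map each open port back to the local-LLM services that default to it. The helper below is purely illustrative (it is not proposed in this issue); the port-to-service mapping is lifted directly from the table above.

```python
# Hypothetical reverse lookup from the proposed port table; note that a
# single port (e.g. 8080) can correspond to several candidate services.
LLM_DEFAULT_PORTS: dict[int, list[str]] = {
    11434: ["Ollama"],
    1234: ["LM Studio"],
    8080: ["llama.cpp server", "Open WebUI", "Tabby"],
    7860: ["text-generation-webui (Gradio UI)"],
    5000: ["text-generation-webui (API)"],
    8000: ["vLLM"],
    1337: ["Jan"],
    3000: ["Open WebUI"],
    5001: ["KoboldCpp"],
}

def candidate_llm_services(port: int) -> list[str]:
    """Return the local-LLM services that commonly default to this port."""
    return LLM_DEFAULT_PORTS.get(port, [])
```

For example, `candidate_llm_services(11434)` returns `["Ollama"]`, while a port outside the category (say 22) returns an empty list.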
