## Summary
Add a new AI / Local LLM scan category covering default ports used by popular local AI inference servers. These services are increasingly deployed on developer workstations and internal networks—often with no authentication—making them high-value targets during assessments.
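The no-authentication point is easy to demonstrate against Ollama: its `/api/tags` endpoint lists installed models and requires no credentials by default. A minimal probe sketch (the helper name and return convention are illustrative, not part of this proposal):

```python
import json
from urllib import request, error

def check_ollama(host: str, port: int = 11434, timeout: float = 3.0):
    """Return the model names an Ollama instance exposes, or None
    if nothing answers. /api/tags needs no authentication by default."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
            return [m["name"] for m in data.get("models", [])]
    except (error.URLError, OSError):
        return None  # closed port, timeout, or not an Ollama server
```

A non-`None` result is a strong finding on its own: the endpoint reveals which models are installed before any further interaction.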
## Proposed ports
| Service | Port(s) | Protocol |
| --- | --- | --- |
| Ollama | 11434 | HTTP |
| LM Studio | 1234 | HTTP |
| llama.cpp server | 8080 | HTTP |
| text-generation-webui (Gradio UI) | 7860 | HTTP |
| text-generation-webui (API) | 5000 | HTTP |
| vLLM | 8000 | HTTP |
| Jan | 1337 | HTTP |
| Open WebUI | 8080 / 3000 | HTTP |
| KoboldCpp | 5001 | HTTP |
| Tabby | 8080 | HTTP |
## Proposed changes

- `SERVICE_CATEGORIES` — new `AI / Local LLM` key with all ports above
- `EXTERNAL_SENSITIVE_PORTS` — add 11434, 1234, 7860, 5001, and 1337 at HIGH severity (ports already covered by Web findings rules are skipped)
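The two changes can be sketched as follows. `AI_LOCAL_LLM_PORTS` is a hypothetical helper mapping, and the actual shapes of `SERVICE_CATEGORIES` and `EXTERNAL_SENSITIVE_PORTS` in the codebase may differ (e.g. they may carry richer metadata than shown here):

```python
# Hypothetical helper: default port -> service name(s), per the table above.
AI_LOCAL_LLM_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
    8080: "llama.cpp server / Open WebUI / Tabby",
    7860: "text-generation-webui (Gradio UI)",
    5000: "text-generation-webui (API)",
    8000: "vLLM",
    1337: "Jan",
    3000: "Open WebUI",
    5001: "KoboldCpp",
}

SERVICE_CATEGORIES = {
    # ...existing categories unchanged...
    "AI / Local LLM": sorted(AI_LOCAL_LLM_PORTS),
}

# Only the ports not already flagged by the Web findings rules
# (e.g. 8080, 8000, 3000, 5000) get a new HIGH-severity entry.
EXTERNAL_SENSITIVE_PORTS = {
    # ...existing entries unchanged...
    **{port: "HIGH" for port in (11434, 1234, 7860, 5001, 1337)},
}
```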
## Usage

```json
{ "scan_categories": ["AI / Local LLM"] }
```