feature: DBA Dash AI Assistant — Multi-provider LLM orchestration, SQL diagnostic tools, Windows service hosting & GUI integration (#1849)
Conversation
force-pushed from ccd550e to 920d8ac
Thank you very much! I'm definitely newer to this and thought I had everything sanitized, but ran Copilot to help track through and it found the entries. Thankfully there were no secrets in there. I have force-pushed a sanitized update. Would you mind running the check again?
force-pushed from b5ae0bb to 3f3420d
force-pushed from 7536057 to 96e8161
I can't quite figure out how to get all of this removed. I have sanitized everything I can and added the files to .gitignore, but I still see the commit with the information. If you have any advice I will gladly take it; otherwise I suppose I should close the PR, wipe my fork, and start over clean.
force-pushed from 7cee6a7 to b0a2c8b
I just figured out how to do a squash commit and force push; hopefully that does it.
force-pushed from 94408bc to 69fc7f3
I made some fixes to the way the service for the AI agent is created, and tried to more closely align the GUI elements in the config tool and the main GUI's new "AI Assistant" tab. From my testing on a machine, it all works well together. In a future iteration I would probably make the example categories and questions table-driven from the database, with an initial set populated, so they aren't hard-coded.
force-pushed from 69fc7f3 to 414800c
force-pushed from 414800c to eacbe33
Closed this because I spent a lot of time on a rewrite to align with the existing styling, add table-driven example questions for the AI, etc. I have a new branch and will create a PR from it.
Summary
This PR introduces the DBA Dash AI Assistant — a self-hosted AI microservice that sits alongside DBA Dash and gives on-call DBAs a single natural-language interface for triaging SQL Server environments. It runs 10 structured SQL diagnostic tools against the DBA Dash repository database, ranks and scores the evidence, and uses an LLM (Azure OpenAI or Anthropic Claude) to produce concise, operator-ready markdown responses with immediate actions, watch list items, root-cause analysis, and a confidence rating.
No data ever leaves your environment unless you have configured an LLM endpoint — and even without one, the structured tool results are still returned to the GUI.
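To make the flow concrete, here is a toy Python sketch of the ask pipeline (route → execute → rank → score). The component names mirror the classes described under Architecture below, but the keyword routing, canned findings, and thresholds are all invented for illustration; this is not the shipped C# implementation.

```python
# Toy sketch of the ask pipeline: route -> execute -> rank -> score.
# All routing rules, findings, and thresholds are invented for illustration.

def route_intent(question):
    """Pick tools by keyword, loosely mimicking AiIntentRouter."""
    routes = {"blocking": "BlockingSummaryTool",
              "deadlock": "DeadlocksSummaryTool",
              "backup": "BackupsRiskSummaryTool"}
    tools = [t for kw, t in routes.items() if kw in question.lower()]
    return tools or ["ActiveAlertsSummaryTool"]  # default tool

def execute(tools):
    """Stand-in for SqlToolExecutor: returns fake findings with severities."""
    fake = {"BlockingSummaryTool": [{"finding": "head blocker on PROD1", "severity": 3}],
            "ActiveAlertsSummaryTool": [{"finding": "2 open alerts", "severity": 2}]}
    return [f for t in tools for f in fake.get(t, [])]

def rank(findings):
    """AiEvidenceRanker stand-in: highest severity first."""
    return sorted(findings, key=lambda f: f["severity"], reverse=True)

def confidence(findings):
    """AiConfidenceScorer stand-in: more corroborating evidence = higher confidence."""
    return "High" if len(findings) >= 3 else "Medium" if findings else "Low"

evidence = rank(execute(route_intent("why is there blocking?")))
print(evidence[0]["finding"], confidence(evidence))  # head blocker on PROD1 Medium
```

In the real service, the ranked evidence JSON is what gets handed to the LLM along with the system prompt; without a configured provider, it is returned to the GUI as-is.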
Architecture
The feature is split cleanly across two parts: the new DBADashAI service and integration changes in the existing DBA Dash GUI and ServiceConfig tools.
Request flow
A request to `POST /api/ai/ask` on `http://localhost:5055` flows through:
1. `AiIntentRouter` selects one or more SQL tools based on the question.
2. `SqlToolExecutor` runs parameterized queries against the DBA Dash repository DB.
3. `AiEvidenceRanker` scores and orders findings by severity/relevance.
4. `AiConfidenceScorer` produces a High/Medium/Low confidence rating.
5. `OpenCodeChatClient` calls the provider with a DBA-specialist system prompt + evidence JSON.

New Project: DBADashAI
A new ASP.NET Core (.NET 10) project that runs as a Windows service (via `UseWindowsService()`) or as a console application for local development.

SQL Diagnostic Tools (10 total)
All tools implement `IAiTool` and query the DBA Dash repository database:
- `ActiveAlertsSummaryTool`
- `AgentJobAlertsTool`
- `BlockingSummaryTool`
- `DeadlocksSummaryTool`
- `SlowQueriesSummaryTool`
- `WaitsSummaryTool`
- `BackupsRiskSummaryTool`
- `DrivesRiskSummaryTool`
- `ConfigDriftSummaryTool`

All of them are executed through `SqlToolExecutor`.

API Endpoints
- `GET /api/ai/health` returns `{ status: "ok", utc: ... }`
- `GET /api/ai/diagnostics`
- `GET /api/ai/test-anthropic`
- `GET /api/ai/tools`
- `POST /api/ai/ask`
- `POST /api/ai/proactive-digest`
- `POST /api/ai/feedback`
- `GET /api/ai/feedback`

LLM System Prompt
The DBA-specialist system prompt (`OpenCodeChatClient.cs`) instructs the LLM to produce concise, operator-ready markdown responses.

Output format enforced:
⚠️ Watch List (Next 1–4 Hours)
🔴 Immediate Actions Required
📋 Housekeeping (Next 24 Hours)
Root Cause Analysis
Prioritized Action Plan
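Taken together, a response following the enforced format would be shaped like this (a skeleton only; the bullet contents and the trailing confidence line are placeholders, not a real sample):

```markdown
⚠️ Watch List (Next 1–4 Hours)
- …

🔴 Immediate Actions Required
- …

📋 Housekeeping (Next 24 Hours)
- …

Root Cause Analysis
…

Prioritized Action Plan
1. …

Confidence: Medium
```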
LLM Provider Support
Three providers are supported. They are tried in fallback order unless `AI:Provider` forces a specific one.

1. Azure OpenAI ✅ Recommended
Requires an Azure OpenAI resource deployed via Azure AI Foundry with a chat-capable model.
Required settings:
```json
{
  "AzureOpenAI": {
    "Endpoint": "https://<resource>.openai.azure.com",
    "ApiKey": "",
    "Deployment": "",
    "ApiVersion": "2024-02-15-preview"
  }
}
```
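The service treats this provider as usable only when its settings are complete. A minimal Python sketch of that check (illustrative; the real validation lives in the C# service):

```python
def azure_openai_configured(settings):
    """Azure OpenAI counts as configured only if Endpoint, ApiKey,
    and Deployment are all non-empty."""
    required = ("Endpoint", "ApiKey", "Deployment")
    return all(settings.get(k, "").strip() for k in required)

print(azure_openai_configured({"Endpoint": "https://x.openai.azure.com",
                               "ApiKey": "k", "Deployment": "gpt-4o"}))          # True
print(azure_openai_configured({"Endpoint": "", "ApiKey": "k", "Deployment": "d"}))  # False
```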
All three of `Endpoint`, `ApiKey`, and `Deployment` must be non-empty.

Azure AI Foundry setup checklist:
1. Add the endpoint, key, and deployment name to `appsettings.json` under `AzureOpenAI`.
2. Set `AI:Provider` to `AzureOpenAI`.
3. Run `DBADashAI.dll` and verify via `GET /api/ai/health`.

2. Anthropic Claude (direct or via Azure Foundry)
Supports both the native Anthropic API and Azure AI Foundry proxied Anthropic deployments.
Required settings:
```json
{
  "AI": { "Provider": "Anthropic" },
  "Anthropic": {
    "BaseUrl": "https://api.anthropic.com",
    "ApiKey": "",
    "Model": "",
    "Version": "2023-06-01",
    "MaxTokens": "1024"
  }
}
```
For the Azure AI Foundry Anthropic proxy, use the base URL without `/v1/messages`:

```json
{
  "Anthropic": {
    "BaseUrl": "https://<resource>.services.ai.azure.com/anthropic/",
    "ApiKey": "",
    "Model": ""
  }
}
```
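Depending on the endpoint, the auth header differs. A Python sketch of a hostname-based check (illustrative only; the shipped logic lives in the C# chat client):

```python
from urllib.parse import urlparse

def anthropic_auth_header(base_url, api_key):
    """Foundry-proxied endpoints (*.services.ai.azure.com) expect 'api-key';
    the native Anthropic API expects 'x-api-key'."""
    host = urlparse(base_url).hostname or ""
    name = "api-key" if host.endswith(".services.ai.azure.com") else "x-api-key"
    return {name: api_key}

print(anthropic_auth_header("https://api.anthropic.com", "k"))                       # {'x-api-key': 'k'}
print(anthropic_auth_header("https://myres.services.ai.azure.com/anthropic/", "k"))  # {'api-key': 'k'}
```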
The client automatically detects Foundry endpoints (`.services.ai.azure.com`) and switches the auth header from `x-api-key` to `api-key`. A live connectivity test endpoint is available: `GET /api/ai/test-anthropic`.

3. OpenCode (optional)
A self-hosted or third-party OpenAI-compatible endpoint.
```json
{
  "OpenCode": {
    "BaseUrl": "http://localhost:11434/v1",
    "ApiKey": "optional",
    "Model": "llama3"
  }
}
```
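Since OpenCode targets any OpenAI-compatible endpoint, requests follow the standard chat-completions shape. A hypothetical payload POSTed to `{BaseUrl}/chat/completions` (the system prompt content here is a placeholder):

```json
{
  "model": "llama3",
  "messages": [
    { "role": "system", "content": "You are a DBA-specialist assistant..." },
    { "role": "user", "content": "Summarize current blocking." }
  ]
}
```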
Provider selection and fallback
`AI:Provider` controls which provider is used. Leave it empty for auto-fallback:

```json
{ "AI": { "Provider": "" } }
```
Fallback order (first fully-configured provider wins): Azure OpenAI → Anthropic → OpenCode.
If no provider is configured, tool results are returned without an LLM summary (the structured evidence is still shown in the GUI JSON panel).
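The selection rule (forced provider if `AI:Provider` is set, otherwise the first fully-configured provider in fallback order, otherwise tools-only mode) could be sketched as follows. The provider names come from the config sections above, but the per-provider required-key lists are assumptions for illustration, not the service's exact rules:

```python
FALLBACK_ORDER = ["AzureOpenAI", "Anthropic", "OpenCode"]
REQUIRED_KEYS = {
    # Assumed required keys per provider (only AzureOpenAI's are documented).
    "AzureOpenAI": ("Endpoint", "ApiKey", "Deployment"),
    "Anthropic": ("BaseUrl", "ApiKey", "Model"),
    "OpenCode": ("BaseUrl", "Model"),  # ApiKey is optional for OpenCode
}

def select_provider(config):
    """Return the forced provider if AI:Provider is set, else the first
    fully-configured provider in fallback order, else None (tools-only mode)."""
    forced = config.get("AI", {}).get("Provider", "")
    if forced:
        return forced
    for name in FALLBACK_ORDER:
        section = config.get(name, {})
        if all(section.get(k, "").strip() for k in REQUIRED_KEYS[name]):
            return name
    return None  # no LLM: structured tool results are still returned

cfg = {"AI": {"Provider": ""},
       "Anthropic": {"BaseUrl": "https://api.anthropic.com", "ApiKey": "k", "Model": "claude"}}
print(select_provider(cfg))  # Anthropic (the AzureOpenAI section is missing/empty)
```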
Full `appsettings.json` template:

```json
{
  "ConnectionStrings": {
    "Repository": "Server=<server>;Database=DBADash;Integrated Security=true;Encrypt=true;TrustServerCertificate=true;"
  },
  "AI": { "Provider": "", "FeedbackStorePath": "", "RunbookBaseUrl": "" },
  "AzureOpenAI": { "Endpoint": "", "ApiKey": "", "Deployment": "", "ApiVersion": "2024-02-15-preview" },
  "Anthropic": { "BaseUrl": "", "ApiKey": "", "Model": "", "Version": "2023-06-01", "MaxTokens": "1024" },
  "OpenCode": { "BaseUrl": "", "ApiKey": "", "Model": "" }
}
```
DBADashGUI Changes
New AI Assistant Tab (`DBADashGUI/AI/AIAssistantControl.cs`) with built-in example question categories covering alerts, blocking, deadlocks, agent jobs, backups, drives, waits, config drift, slow queries, and proactive digests.

New AI configuration keys in `DBADashGUI/App.config`.

ServiceConfig Changes
New AI Service Tab (`DBADashServiceConfig/ServiceConfig.cs`):
- Manage the `DBADashAI` Windows service directly from the ServiceConfig UI — no need to open an admin command prompt.
- An application manifest (`app.manifest`) was added so the ServiceConfig tool requests elevation automatically — required for Windows service management.

Documentation
- `Docs/AI-Assistant.md` — 300+ line end-to-end guide covering the `appsettings.json` configuration reference for all three providers and the `App.config` GUI settings reference.
- `README.md` — updated with an AI Assistant section.

Security Considerations
- `GET /api/ai/diagnostics` masks key values as `***set***`.
- `Password`, `Pwd`, and `ApiKey` values are masked via regex.
- Don't commit `appsettings.json` with real keys — use environment-specific config or environment variables in production.

Files Changed
43 files changed | 3,256 insertions(+) | 9 deletions(-)

New project:
- `DBADashAI/` ← entire project (ASP.NET Core, .NET 10, Windows service)

Modified:
- `DBADashGUI/AI/AIAssistantControl.cs` ← new AI tab control
- `DBADashGUI/App.config` ← AI config keys
- `DBADashGUI/Main.cs` ← tab registration
- `DBADashServiceConfig/ServiceConfig.cs` ← AI service management tab
- `DBADashServiceConfig/app.manifest` ← admin elevation request
- `DBADashServiceConfig/*.csproj` ← dependency updates

Docs:
- `Docs/AI-Assistant.md` ← new, full setup guide
- `README.md` ← AI Assistant section added
Testing
A real sample response from a live environment is included in `DBADashAI/ChatFeedback.txt` — the question "Summarize current DBA risks for the next 24 hours" produced a full multi-instance analysis covering AG health events, SQL Server restarts, blocking, and acknowledgement hygiene, with a confidence rating of Medium.

To test locally:
1. Set `ConnectionStrings:Repository` in `DBADashAI/appsettings.json`.
2. Run `dotnet DBADashAI.dll --urls http://localhost:5055`.
3. `GET http://localhost:5055/api/ai/health` → expect `{ "status": "ok" }`.
4. `GET http://localhost:5055/api/ai/diagnostics` → verify provider config.
5. Launch `DBADash.exe` → open the AI Assistant tab → ask a question.
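The health check can be scripted. A small stdlib-only Python smoke check (it assumes the service is listening on `localhost:5055` and returns `None` when the service is not reachable):

```python
import json
import urllib.request
import urllib.error

def check_health(base_url="http://localhost:5055"):
    """GET /api/ai/health and return the reported status, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/ai/health", timeout=5) as resp:
            return json.load(resp).get("status")
    except (urllib.error.URLError, OSError, ValueError):
        return None  # service not running, or response was not JSON

status = check_health()
print("healthy" if status == "ok" else "service unavailable")
```

Running it with the service stopped prints "service unavailable"; with the service up it should report healthy.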