Frequently Asked Questions
Blazor Data Orchestrator is an open-source distributed job orchestration platform built on .NET Aspire and Blazor Server. It lets you create, schedule, and run automated jobs written in C# or Python through a web-based interface with an in-browser Monaco code editor and AI-assisted development. It fills the gap between simple scheduled tasks and enterprise-grade orchestration platforms.
Azure Functions with Timer Triggers work well for basic scheduled tasks, but they break down when you need job grouping, parameterization, execution history, horizontal scaling across multiple queues, and a unified management UI. Blazor Data Orchestrator provides all of these on top of Azure services you already know — without the operational overhead of enterprise platforms like Azure Data Factory or Apache Airflow.
Azure Data Factory and Apache Airflow are powerful platforms, but they carry significant operational overhead and learning curves that many teams cannot justify for internal automation workloads. Blazor Data Orchestrator is a lightweight, self-hosted alternative built on .NET Aspire and Azure Storage — you can clone the repo and deploy to Azure Container Apps with a single azd up command.
- .NET 10 with Blazor Server for the web UI
- .NET Aspire for service orchestration
- Azure Storage (Blob, Queue, Table) for packages, messaging, and logs
- SQL Server for job metadata and configuration
- Monaco Editor for in-browser code editing
- Roslyn for C# compilation
- Radzen for UI components
No. Aspire automatically starts an Azurite container (Azure Storage emulator) and a SQL Server container for local development. No Azure subscription is required to develop and test locally.
Run `azd up` from the repository root. This single command provisions all required Azure resources (Azure SQL, Storage Account, Container Registry, Container Apps Environment), builds and containerizes all services, and deploys them to Azure Container Apps. See the Deployment guide for details.
- Ensure Docker Desktop (or Podman) is running.
- Verify the .NET 10 SDK is installed by running `dotnet --version`.
- Run `dotnet workload restore` from the solution root.
- Check that ports 14330, 10000, 10001, and 10002 are not in use by other applications.
- Review the terminal output from `aspire run` for error messages.
No. The legacy `aspire` workload is obsolete. Run `dotnet workload restore` from the solution root; this restores only the workloads the solution requires.
Yes. Update the connection string in the Web, Scheduler, and Agent appsettings.json files to point to your SQL Server instance. The Install Wizard will create the required database schema on first launch.
Clear your browser cache and navigate to the root URL of the web application. The wizard appears when the application detects that the database schema has not been created.
Add dependencies in the `.nuspec` file within the Code Tab editor:

```xml
<dependencies>
  <dependency id="Newtonsoft.Json" version="13.0.3" />
</dependencies>
```

Alternatively, use CS-Script syntax at the top of your `.cs` file:

```csharp
//css_nuget Newtonsoft.Json
```

Dependencies are resolved automatically via `dotnet restore` at compilation and execution time.
For C# jobs, the entry point is the `BlazorDataOrchestratorJob.ExecuteJob()` static method in `main.cs`.
For Python jobs, the entry point is the `execute_job()` function in `main.py`.
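A minimal Python job might look like the sketch below. The function name `execute_job` comes from the answer above, but the exact signature (including any parameters the Agent passes in) is an assumption; check the platform's job templates for the real one.

```python
# main.py -- minimal skeleton for a Python job.
# The parameterless signature and the integer return value are
# assumptions; consult the platform's job templates for specifics.
def execute_job():
    # Job logic goes here; anything printed is captured in the job logs.
    print("Hello from Blazor Data Orchestrator")
    return 0
```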
Yes. In the online editor, you can add additional .cs or .py files alongside main.cs/main.py. All files are packaged together in the .nupkg.
Jobs can include `appsettings.json`, `appsettingsProduction.json`, and `appsettingsStaging.json`. The Agent loads the appropriate file based on the job's `JobEnvironment` setting and merges in connection strings from its own configuration.
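The layering described above can be sketched as follows. The file names come from the answer; the shallow top-level merge is an assumption, since the Agent may merge more deeply and also injects connection strings from its own configuration.

```python
import json
import os

def load_job_settings(environment: str, folder: str = ".") -> dict:
    """Sketch of the config layering: start from appsettings.json,
    then overlay the environment-specific file so its values win.
    A shallow top-level merge is an assumption about the Agent's
    actual behavior."""
    settings: dict = {}
    for name in ("appsettings.json", f"appsettings{environment}.json"):
        path = os.path.join(folder, name)
        if os.path.exists(path):
            with open(path) as f:
                settings.update(json.load(f))
    return settings
```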
Click Run Job Now on the Details tab or Code tab of the Job Details dialog. This compiles (if in Code Edit mode), packages, uploads, and queues the job for immediate execution.
- Open the Job Details dialog.
- Navigate to the Schedules tab.
- Click Add Schedule.
- Configure days of the week, start/stop time, and run interval.
- Enable the schedule.
Yes. Enable the webhook on the Webhook tab in Job Details. The displayed URL (`/webhook/{GUID}`) accepts HTTP GET and POST requests. Query parameters are forwarded to the job execution context.
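For example, an external system could trigger a job with a plain HTTP GET. The `/webhook/{GUID}` URL shape comes from the Job Details dialog; the helper below is a generic stdlib sketch, not part of the platform.

```python
from urllib import parse, request

def trigger_job(webhook_url: str, **params) -> int:
    """Fire a job via its webhook with an HTTP GET. Query parameters
    are forwarded into the job's execution context. Returns the HTTP
    status code."""
    url = webhook_url
    if params:
        url += "?" + parse.urlencode(params)
    with request.urlopen(url) as resp:
        return resp.status
```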
Deploy multiple Agent instances or replicas. Each agent monitors a specific queue (configured via the `QueueName` setting). You can:
- Deploy multiple replicas of the same agent for horizontal scaling on a single queue.
- Deploy separate agents with different `QueueName` values to create dedicated processing pools.
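For instance, a dedicated agent's `appsettings.json` might pin it to its own queue. The `QueueName` key comes from the text above; the value shown is illustrative.

```json
{
  "QueueName": "reporting-jobs"
}
```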
Job execution logs are stored in Azure Table Storage in the `JobLogs` table. You can view them in the Logs tab of the Job Details dialog.
The compilation error dialog shows the file name, line number, and error description. Common issues:
- Missing NuGet dependency: add it to the `.nuspec` file.
- Incorrect class or method name: the entry point must be `BlazorDataOrchestratorJob.ExecuteJob()`.
- Syntax errors: check the line number referenced in the error message.
- Check the queue name in the agent's `appsettings.json`: it must match the queue assigned to the job.
- Verify the Azurite or Azure Storage Queue service is running.
- Check the agent logs in the Aspire dashboard.
- Ensure the job is enabled and has been queued (visible in the home page table).
This typically indicates an agent crash during execution. While a job runs, the agent renews the queue message's visibility timeout every 3 minutes, which normally keeps the message hidden from other agents. If the agent process terminates unexpectedly, renewal stops, the visibility timeout expires (default: 5 minutes), and the message becomes visible again, so another agent picks it up and reprocesses the job.
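The renewal pattern described above can be sketched generically: a background timer keeps renewing ownership while the handler runs, and stops the moment the process dies. This is an illustrative stdlib sketch, not the Agent's actual code or the Azure Storage SDK API.

```python
import threading

def run_with_visibility_renewal(work, renew, interval_seconds: float):
    """Run `work()` while calling `renew()` every `interval_seconds`
    on a background thread. A healthy process keeps renewing and
    retains the message; if the process dies, renewal stops and the
    message reappears on the queue for another agent."""
    stop = threading.Event()

    def renewer():
        while not stop.wait(interval_seconds):
            renew()

    t = threading.Thread(target=renewer, daemon=True)
    t.start()
    try:
        return work()
    finally:
        stop.set()
        t.join()
```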
- Navigate to Administration > Settings.
- Select your AI provider.
- Enter your API key and endpoint.
- Select a model (e.g., `gpt-4`).
- The AI button then appears in the Code Tab editor toolbar.