```powershell
# From a clean directory (Windows PowerShell)
python -m venv .venv; .\.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install aetherra  # (Placeholder: once published to PyPI)
```

Until the PyPI publish lands, clone the repo instead:
```powershell
git clone https://github.com/AetherraLabs/Aetherra.git
cd "Aetherra"
python -m venv .venv; .\.venv\Scripts\Activate.ps1
pip install -r requirements.txt  # or: pip install -e .
```

Enable the AI API and start the Hub compatibility server:

```powershell
$env:AETHERRA_AI_API_ENABLED='1'; $env:AETHERRA_AI_API_STREAM='1'; python -m aetherra_hub.compat
```

In another shell:
```powershell
Invoke-RestMethod -Method Post -Uri 'http://localhost:3001/api/lyrixa/chat' -ContentType 'application/json' -Body '{"message":"hello"}'
```

Alternatively, build the Docker image:

```powershell
docker build -t aetherra-dev .
```
```powershell
docker run -p 3001:3001 -e AETHERRA_AI_API_ENABLED=1 -e AETHERRA_AI_API_STREAM=1 aetherra-dev python -m aetherra_hub.compat
```

| File | Purpose |
|---|---|
| `workflows/parallel_workflow_demo.aether` | Demonstrates a parallel execution chain. |
| `workflows/on_error_chain_demo.aether` | Error handling / fallback demonstration. |
| `workflows/plugin_chain_demo.aether` | Plugin chain execution showcase. |
| `ai_os_test.aether` | End‑to‑end OS capability script. |
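The chat endpoint exercised with `Invoke-RestMethod` earlier can also be driven from Python. Below is a minimal standard-library sketch; the route and port mirror the PowerShell example, but `build_chat_request` and `send_chat` are illustrative helpers, not part of Aetherra:

```python
import json
import urllib.request

def build_chat_request(message, base_url="http://localhost:3001"):
    """Build (but do not send) a POST request for /api/lyrixa/chat."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/lyrixa/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_chat(message, base_url="http://localhost:3001", timeout=10):
    """Send the request; requires the Hub compat server to be running."""
    req = build_chat_request(message, base_url)
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Inspect the request without a live server:
req = build_chat_request("hello")
```

With the Hub running, `send_chat("hello")` returns the decoded JSON reply.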
Run a workflow:

```powershell
python aether.py workflows/parallel_workflow_demo.aether
```

| Flag | Effect | Default |
|---|---|---|
| `AETHERRA_AI_API_ENABLED` | Enables chat endpoints | 0 |
| `AETHERRA_AI_API_STREAM` | Enables streaming endpoint | 0 |
| `AETHERRA_TRAINER_ENABLED` | Enables trainer job/eval routes | 0 |
| `AETHERRA_QUIET` | Suppresses verbose logs | 0 |
| `AETHERRA_MEMORY_STORM` | Enables STORM memory features | 0 |
| `AETHERRA_STORM_SHADOW_MODE` | STORM runs in shadow mode (metrics only) | 1 |
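All of these flags follow the same 0/1 convention, so a launcher can gate features with one small helper. A sketch (the `flag_enabled` helper is illustrative, not Aetherra's API):

```python
import os

def flag_enabled(name: str, default: str = "0") -> bool:
    """A flag is on only when its value is exactly '1', matching the table above."""
    return os.environ.get(name, default) == "1"

# Simulate the documented defaults: clear two flags, set one explicitly.
os.environ.pop("AETHERRA_AI_API_ENABLED", None)
os.environ.pop("AETHERRA_STORM_SHADOW_MODE", None)
os.environ["AETHERRA_MEMORY_STORM"] = "1"

storm_on = flag_enabled("AETHERRA_MEMORY_STORM")             # explicitly enabled
shadow_on = flag_enabled("AETHERRA_STORM_SHADOW_MODE", "1")  # on via default 1
chat_on = flag_enabled("AETHERRA_AI_API_ENABLED")            # off via default 0
```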
Capability tests:

```powershell
pytest -q -o addopts= tests/capabilities
```

Diagnostics (non-fatal warnings are tolerated in some external modes):

```powershell
python tools/lyrixa_diagnostics.py
```

Cleanup:

```powershell
deactivate  # if a venv is active
Remove-Item -Recurse -Force .venv
```

- Explore metrics at `http://localhost:3001/metrics`
- Inspect `BETA_READINESS_REPORT.md`
- Sign workflows:

```powershell
python tools/sign_aether.py workflows/parallel_workflow_demo.aether
```
Recommended safe production validation path: run STORM in shadow mode, where it collects metrics while still returning baseline results to callers.
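The shadow-mode idea can be summarized in a few lines of Python. This is a pattern sketch only, not Aetherra's internal implementation: the experimental STORM path runs and is measured, but callers always receive the baseline answer, and STORM failures never propagate.

```python
import time

def shadow_query(query, baseline_fn, storm_fn, metrics):
    """Serve from baseline_fn; run storm_fn purely for metrics collection."""
    result = baseline_fn(query)
    try:
        start = time.perf_counter()
        storm_result = storm_fn(query)
        metrics.append({
            "query": query,
            "latency_s": time.perf_counter() - start,
            "agreement": storm_result == result,
        })
    except Exception as exc:  # shadow failures are recorded, never raised
        metrics.append({"query": query, "error": repr(exc)})
    return result  # the caller only ever sees the baseline result

metrics = []
answer = shadow_query("q1", lambda q: "baseline", lambda q: "baseline", metrics)
```

Agreement and latency accumulate in `metrics` while production traffic stays on the baseline path.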
```powershell
# 1) Start the Hub (recommended)
python tools/run_hub_ai_api.py --port 3001

# 2) In another shell, enable STORM shadow mode and start the OS
$env:AETHERRA_MEMORY_STORM='1'; $env:AETHERRA_STORM_SHADOW_MODE='1'; python aetherra_os_launcher.py --mode full -v
```

On boot you should see a STORM status line in the logs similar to:

```text
[STORM:POST-BOOT] enabled=1 shadow_mode=1 backend=ot:earthmover tt_rank_cap=128
```
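For scripted health checks, that status line is straightforward to parse. A sketch (the key=value format is assumed from the single example above; adjust if your build logs differ):

```python
import re

def parse_storm_status(line):
    """Return the key=value pairs from a [STORM:POST-BOOT] log line, or None."""
    if not line.startswith("[STORM:POST-BOOT]"):
        return None
    return dict(re.findall(r"(\w+)=(\S+)", line))

status = parse_storm_status(
    "[STORM:POST-BOOT] enabled=1 shadow_mode=1 backend=ot:earthmover tt_rank_cap=128"
)
```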