MIDAS is an open platform for governing execution authority at decision surfaces across agents, AI systems, and enterprise workflows.
A long-form article introducing the Twin Test: a practical standard for high-stakes machine learning where models must show nearest "twin" examples, neighborhood tightness, mixed-vs-homogeneous evidence, and "no reliable twins" abstention. Argues that similarity and evidence packets beat probability scores for trust and safety.
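A minimal sketch of what a twin evidence packet could look like under the four requirements named above, using scikit-learn nearest neighbors over labeled historical cases. The function name, the distance cutoff, and the packet fields are illustrative assumptions, not the article's specification.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twin_evidence(x, X_train, y_train, k=5, max_radius=1.0):
    """Return nearest 'twins', neighborhood tightness, and label mix for x.

    Assumes integer-coded labels in y_train. Abstains when the closest
    historical case is farther than max_radius (no reliable twins).
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    dist, idx = nn.kneighbors(x.reshape(1, -1))
    dist, idx = dist[0], idx[0]
    if dist.min() > max_radius:          # no reliable twins: abstain
        return {"decision": "abstain", "reason": "no reliable twins"}
    labels = y_train[idx]
    return {
        "decision": "answer",
        "twins": idx.tolist(),           # indices of nearest comparable cases
        "tightness": float(dist.mean()), # smaller = denser neighborhood
        "label_mix": {int(l): int((labels == l).sum()) for l in set(labels)},
        "homogeneous": bool(len(set(labels)) == 1),
    }
```

The packet, rather than a bare probability, is what gets shown to the reviewer: who the twins are, how tight the neighborhood is, and whether their outcomes agree.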
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs. decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the "Warning Card" template, so that ML preserves human agency while staying useful under uncertainty.
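A sketch of how a "Warning Card" might be represented as a data structure, assuming it carries a coarse regime, the levers a reviewer can act on, and a reversal path. Field names and example values are guesses at the template, not the article's actual schema.

```python
from dataclasses import dataclass

@dataclass
class WarningCard:
    regime: str          # coarse regime, not a decimal score
    evidence: list[str]  # what triggered the warning
    levers: list[str]    # actions a human can take, including dismissal
    reversible: bool     # alerts can be withdrawn, never enforced
    audit_id: str        # link into the audit trail

card = WarningCard(
    regime="elevated",
    evidence=["volume 3x weekly baseline", "new counterparty"],
    levers=["hold for review", "request documentation", "dismiss"],
    reversible=True,
    audit_id="wc-2024-0001",
)
```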
A long-form article reframing abstention (reject option / selective prediction) as product design, not model weakness. Covers coverage as a KPI, calibration as a prerequisite, threshold selection under review capacity and risk, queue/UX design for human-in-the-loop workflows, and anti-patterns that break safety in production.
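A sketch of one way threshold selection under review capacity could work, assuming calibrated confidence scores on a validation set: pick the highest confidence threshold whose abstained cases still fit the human review queue. The function and grid are illustrative, not taken from the article.

```python
import numpy as np

def pick_threshold(confidences, review_capacity, grid=None):
    """Highest abstention threshold whose rejected cases fit the review queue."""
    grid = np.linspace(0.5, 0.99, 50) if grid is None else grid
    for t in sorted(grid, reverse=True):       # start from the safest threshold
        abstained = int((confidences < t).sum())
        if abstained <= review_capacity:       # queue can absorb the abstentions
            coverage = 1 - abstained / len(confidences)  # coverage as a KPI
            return float(t), coverage
    return None, 0.0
```

Because higher thresholds abstain more, iterating downward returns the most conservative threshold the review team can actually absorb, which is the capacity constraint the article describes.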
AI/ML Engineer — Decision Ops • LLM Observability • GenAI/RAG Systems
Deterministic governance system for AI-driven marketing that separates diagnostics, human reasoning, and execution into strictly controlled layers.
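A minimal sketch of the three-layer separation described: diagnostics may only read, reasoning may only propose, and execution runs only explicitly approved proposals. The layer interfaces and example actions are illustrative assumptions.

```python
def diagnose(metrics: dict) -> list[str]:
    """Diagnostics layer: read-only observations, no side effects."""
    return [k for k, v in metrics.items() if v < 0.9]

def propose(findings: list[str]) -> list[dict]:
    """Reasoning layer: turns findings into proposals; a human approves them."""
    return [{"action": "pause_campaign", "target": f, "approved": False}
            for f in findings]

def execute(proposals: list[dict]) -> list[str]:
    """Execution layer: acts only on proposals a human has approved."""
    return [p["target"] for p in proposals if p["approved"]]

findings = diagnose({"campaign_a": 0.95, "campaign_b": 0.7})
plans = propose(findings)
plans[0]["approved"] = True          # human decision, outside the system
print(execute(plans))                # ['campaign_b']
```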
Event-driven NLP governance architecture using FastStream, Redpanda, and PostgreSQL with auditability, human-in-the-loop control, and ethical safeguards.
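A minimal sketch of the event-driven pattern named above: a FastStream consumer on a Redpanda (Kafka-compatible) topic that records every decision in PostgreSQL for audit before publishing it downstream. Topic names, the connection string, and the table schema are assumptions, not the project's actual configuration.

```python
import json
import asyncpg
from faststream import FastStream
from faststream.kafka import KafkaBroker

broker = KafkaBroker("localhost:9092")   # Redpanda speaks the Kafka protocol
app = FastStream(broker)

@broker.subscriber("nlp.requests")
@broker.publisher("nlp.decisions")
async def govern(message: dict) -> dict:
    # Route ambiguous items to a human instead of acting autonomously.
    decision = {"text": message["text"], "action": "route_to_human"}
    # Persist the decision before it leaves the system (audit trail).
    conn = await asyncpg.connect("postgresql://localhost/audit")
    await conn.execute(
        "INSERT INTO decisions (payload) VALUES ($1)", json.dumps(decision)
    )
    await conn.close()
    return decision
```

A per-message connection keeps the sketch self-contained; a real deployment would hold a connection pool.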
Stock redistribution and fairness-based transfer recommendations (Excel/VBA prototype).
Turn-based control architectures
A governed system for translating applied AI research into auditable, decision-ready artifacts.
Research repository by Xufen Tu exploring human judgment, decision architecture, and responsibility structures in complex AI-mediated systems.
Defines the Selection Layer — the decision system through which AI models determine visibility, inclusion, and recommendation.
Control-plane architecture for AI & agentic systems: governance as admission control, decision admissibility, and audit-grade evidence.
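A sketch of governance as admission control: a proposed agent action is checked against declared policies before it may execute, and every verdict is recorded as timestamped evidence. The policy set and record shape are illustrative assumptions.

```python
from datetime import datetime, timezone

# Declarative policies: each is a named predicate over a proposed action.
POLICIES = [
    ("budget", lambda a: a.get("cost_usd", 0) <= 100),
    ("scope",  lambda a: a.get("resource", "").startswith("sandbox/")),
]

def admit(action: dict) -> dict:
    """Decide admissibility and emit an audit-grade verdict record."""
    failures = [name for name, check in POLICIES if not check(action)]
    return {
        "admissible": not failures,
        "failed_policies": failures,
        "checked_at": datetime.now(timezone.utc).isoformat(),  # evidence
        "action": action,
    }

print(admit({"resource": "sandbox/db", "cost_usd": 12}))
```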
CFS (Cognitive Flow System) — a causal influence framework for modeling how decisions emerge in complex systems through structured causal constraints. DOIs: https://doi.org/10.5281/zenodo.19142077 and https://doi.org/10.5281/zenodo.19103972
Open-source framework for Decision Traces in complex decision systems, providing a verifiable audit trail for observable, explainable, and humane choices in software and agentic engineering.
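A minimal sketch of a verifiable decision trace, assuming each entry commits to its predecessor with a hash chain so that tampering is detectable. The entry fields are illustrative, not the framework's actual format.

```python
import hashlib
import json

def append_trace(trace: list, decision: dict) -> list:
    """Append a decision whose hash commits to the previous entry."""
    prev_hash = trace[-1]["hash"] if trace else "genesis"
    body = {"decision": decision, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    trace.append({**body, "hash": digest})
    return trace

def verify(trace: list) -> bool:
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "genesis"
    for entry in trace:
        body = {"decision": entry["decision"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```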
SCE Core is a research prototype for state-evolution computation: a system that models data as evolving states under constraints, enabling explainable reasoning, stability-based selection, and adaptive decision systems.
AI-assisted evidence review workflow for regulated financial services, featuring structured claim extraction, evidence sufficiency scoring, contradiction detection, audit-ready traceability, human-in-the-loop review routing, and evaluation-driven safeguards.
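A sketch of sufficiency-based review routing, assuming extracted claims carry counts of supporting and contradicting evidence items. The thresholds, field names, and routing tiers are illustrative, not the workflow's actual parameters.

```python
def route_claim(claim: dict, min_support: int = 2) -> str:
    """Route an extracted claim by evidence sufficiency and contradictions."""
    supports = claim.get("supporting", 0)
    contradictions = claim.get("contradicting", 0)
    if contradictions > 0:
        return "escalate"        # contradiction detected: senior review
    if supports >= min_support:
        return "auto_accept"     # sufficient evidence, spot-checked later
    return "human_review"        # insufficient evidence: route to a reviewer

assert route_claim({"supporting": 3, "contradicting": 0}) == "auto_accept"
assert route_claim({"supporting": 1, "contradicting": 1}) == "escalate"
```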
Optimization-driven product selection for commercial buying decisions under budget and business constraints.
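A sketch of budget-constrained selection framed as a 0/1 knapsack, assuming each product has a cost and a business value. This is one standard formulation of the problem, not necessarily the repository's exact model.

```python
def select_products(products, budget):
    """products: list of (name, cost, value); returns (best_value, chosen)."""
    best = {0: (0, [])}                       # spent -> (value, chosen names)
    for name, cost, value in products:
        # Snapshot states so each product is used at most once.
        for spent, (v, chosen) in sorted(best.items(), reverse=True):
            s = spent + cost
            if s <= budget and (s not in best or best[s][0] < v + value):
                best[s] = (v + value, chosen + [name])
    return max(best.values())

print(select_products([("A", 40, 7), ("B", 30, 5), ("C", 50, 8)], budget=80))
# (13, ['B', 'C'])
```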
Dietary Destabilization Triangle assessment tool
AI-powered logistics decision system with operational risk modeling and interactive Streamlit dashboard
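A minimal sketch of an interactive risk view in Streamlit, assuming a simple delay-risk score from two operational inputs. The scoring rule is a placeholder for illustration, not the system's actual risk model.

```python
import streamlit as st

st.title("Shipment risk")
distance_km = st.slider("Route distance (km)", 50, 2000, 400)
weather = st.selectbox("Weather", ["clear", "rain", "storm"])

# Placeholder score: longer routes and worse weather raise delay risk.
risk = min(1.0, distance_km / 2000 + {"clear": 0.0, "rain": 0.2, "storm": 0.5}[weather])
st.metric("Delay risk", f"{risk:.0%}")
if risk > 0.6:
    st.warning("High risk: consider rerouting or buffering the delivery window.")
```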