Inspiration
As AI systems increasingly write and modify production code, we realized that software change itself has become a governance problem, not just an engineering one. Most CI pipelines rely on deterministic rules: a change either passes or fails, with no room for judgment. That works until policies conflict, intent is ambiguous, or risk tradeoffs require human discretion.
Beyond structural changes, we observed that many production incidents are caused not by broken types, but by subtle changes in execution flow — logic reordered, conditions widened, or control paths altered in ways that are technically valid but behaviorally unsafe.
In real organizations, these moments are resolved by humans — through precedent, context, and authority. We were inspired to build a system where AI doesn’t replace that process, but knows when it must stop and defer to human authority.
What it does
Dotto is an AI-powered change-control governor for modern software delivery.
It analyzes TypeScript schemas and change artifacts to detect breaking changes and semantic drift, compute downstream blast radius, and evaluate changes against organizational policy.
When deterministic rules are sufficient, changes proceed automatically. When they are not — due to policy conflicts, intent ambiguity, precedent uncertainty, or high-risk tradeoffs — Dotto halts deployment and requires a human ruling.
Every human decision is recorded as binding precedent, enabling future governance decisions to reason from historical context rather than starting from scratch, using deterministic similarity matching rather than embeddings.
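To make "deterministic similarity matching rather than embeddings" concrete, here is a minimal sketch of how precedent lookup could work. All names (`Precedent`, `bestPrecedent`, the feature fields) are illustrative assumptions, not Dotto's actual schema; the point is that scoring uses discrete set overlap, so the same inputs always produce the same match.

```typescript
// Hypothetical precedent record; field names are assumptions for illustration.
interface Precedent {
  id: string;
  changeKind: string;          // e.g. "field-removed", "condition-widened"
  affectedModules: string[];   // modules touched by the governed change
  ruling: "approve" | "reject";
}

// Deterministic Jaccard similarity over module sets (no embeddings involved).
function jaccard(a: string[], b: string[]): number {
  const setA = new Set(a);
  const setB = new Set(b);
  const intersection = [...setA].filter((x) => setB.has(x)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Return the most similar past ruling, or undefined if nothing clears the bar,
// in which case the change would escalate to a human.
function bestPrecedent(
  current: { changeKind: string; affectedModules: string[] },
  history: Precedent[],
  threshold = 0.5
): Precedent | undefined {
  let best: { p: Precedent; score: number } | undefined;
  for (const p of history) {
    if (p.changeKind !== current.changeKind) continue; // require exact change kind
    const score = jaccard(p.affectedModules, current.affectedModules);
    if (score >= threshold && (!best || score > best.score)) best = { p, score };
  }
  return best?.p;
}
```

Because every step is deterministic, an audit can replay a governance decision and get byte-identical results, which embedding-based retrieval cannot guarantee.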
How we built it
We built Dotto as a layered governance system:
Deterministic analysis scans TypeScript schemas and change artifacts to:
- build dependency graphs
- detect breaking changes and semantic drift
- compute downstream impact
These analyses are materialized into structured artifacts (e.g. `graph.json`, `drift.json`, `impact.json`, `intent.json`).

Gemini 3 reasons over these structured artifacts rather than raw source files, focusing on judgment under uncertainty:
- whether declared intent matches observed change
- whether policy boundaries are crossed
- whether sufficient precedent exists to automate a decision
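As a rough sketch of the split described above, the artifact shapes below show what the deterministic layer might emit and what the model consumes. All field names here are assumptions for illustration, not the actual contents of `drift.json` or `intent.json`; the simple `intentMismatch` check shows one gate the deterministic layer can run before involving Gemini at all.

```typescript
// Illustrative artifact shapes; field names are assumptions, not Dotto's schema.
interface DriftArtifact {
  kind: "breaking" | "semantic";
  symbol: string;   // e.g. a schema field whose type or meaning changed
  before: string;
  after: string;
}

interface ImpactArtifact {
  changedModule: string;
  downstream: string[];   // modules reachable in the dependency graph
  blastRadius: number;    // count of affected downstream modules
}

interface IntentArtifact {
  declaredIntent: "non-breaking" | "breaking" | "behavioral";
}

// Deterministic pre-check: a change declared non-breaking that exhibits
// breaking drift is an intent mismatch and escalates without model input.
function intentMismatch(intent: IntentArtifact, drift: DriftArtifact[]): boolean {
  return (
    intent.declaredIntent === "non-breaking" &&
    drift.some((d) => d.kind === "breaking")
  );
}
```

Feeding the model these compact, typed documents instead of raw source keeps its job narrow: judging whether the facts already established deterministically add up to an automatable decision.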
For this hackathon, we demonstrate Dotto through an interactive analysis interface using real governance artifacts; CI/CD integration is a designed extension rather than a fully packaged component.
Human overrides are captured in `decisions.json` and stored as binding precedent, forming an organizational memory that Gemini reasons from in subsequent runs.

Human decisions produce cryptographically signed authorization receipts, which are enforced by the deployment gate and recorded as immutable evidence.
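A signed receipt could look like the sketch below, assuming an HMAC-based scheme over Node's built-in `crypto` module; Dotto's actual signing mechanism and receipt fields may differ. The deployment gate recomputes the signature and refuses to proceed when it does not match.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical receipt shape; fields are assumptions for illustration.
interface Receipt {
  decisionId: string;
  approver: string;
  issuedAt: string;   // ISO timestamp
  signature: string;  // hex-encoded HMAC-SHA256 over the other fields
}

function canonical(r: Omit<Receipt, "signature">): string {
  // Fixed field order so signing and verification hash identical bytes.
  return `${r.decisionId}|${r.approver}|${r.issuedAt}`;
}

function sign(payload: Omit<Receipt, "signature">, secret: string): Receipt {
  const signature = createHmac("sha256", secret)
    .update(canonical(payload))
    .digest("hex");
  return { ...payload, signature };
}

function verify(receipt: Receipt, secret: string): boolean {
  const expected = createHmac("sha256", secret)
    .update(canonical(receipt))
    .digest("hex");
  if (expected.length !== receipt.signature.length) return false;
  // Constant-time comparison to avoid leaking signature prefixes.
  return timingSafeEqual(Buffer.from(expected), Buffer.from(receipt.signature));
}
```

Because verification is pure recomputation, the gate needs no database round-trip to decide whether a ruling is genuine; tampering with any receipt field invalidates the signature.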
This architecture avoids hallucination risk while preserving nuanced reasoning where deterministic systems fail.
Challenges we ran into
The hardest challenge was defining the boundary of AI authority.
We had to be extremely disciplined about what Gemini is allowed to decide and, more importantly, when it must refuse to decide. This is hardest in cases where structural changes appear safe, but behavior may diverge from past precedent in ways that cannot be deterministically verified.
Designing a system where escalation is a correct and expected outcome — not a failure — required rethinking conventional AI workflows.
Another challenge was making governance legible in seconds. Judges and users needed to instantly understand why automation stopped, whether due to policy, intent mismatch, or flow-level behavioral uncertainty, without reading a whitepaper.
Accomplishments that we're proud of
- Treating “no automated path forward exists” as a first-class, intentional outcome.
- Governing not just what changed, but how the system now behaves.
- Designing a UI that communicates authority, evidence, and final human control immediately.
- Using Gemini 3 specifically for judgment under uncertainty, not deterministic analysis.
- Creating an auditable decision trail where human rulings become enforceable precedent.
- Enforcing governance through signed receipts rather than advisory logs or explanations.
What we learned
We learned that trust in AI systems doesn’t come from confidence — it comes from restraint.
The most important capability of a governance AI is not deciding more, but knowing when it is not allowed to decide. This is especially critical for changes that preserve types and interfaces while introducing behavioral uncertainty.
Treating human authority as sovereign makes the system safer, more credible, and more deployable in real organizations.
What's next for Dotto
Next, we plan to expand Dotto’s governance model to:
- behavioral and flow-level change analysis across functions and services
- parameter- and threshold-level change governance
- AI-generated code changes with mandatory human ratification
- additional schema formats (including OpenAPI)
- regulated domains where auditability and human accountability are mandatory
Our goal is to make governed change the default — enabling teams to move fast without sacrificing safety, accountability, or institutional memory.
Built With
- chatgpt
- claude
- gemini-3-api
- netlify
- node.js
- react
- render
- typescript
- vite
- windsurf