Inspiration

AI systems are no longer passive — they are beginning to take actions in the real world.

But there is a fundamental gap:

We have built intelligence — but not control.

Most systems treat authorization as a static gate. Once access is granted, actions are implicitly trusted. This model breaks down when AI operates continuously, makes probabilistic decisions, and interacts with high-risk systems.

We set out to answer a deeper question:

“Can we design a system where every AI action is continuously justified — not just permitted?”

This led to the core philosophy behind Vergil:

$$ \boxed{ Trust(action) = Confidence \times Context^{-1} \times Control } $$

Where:

  • Confidence → how certain the AI is
  • Context → how risky or sensitive the situation is
  • Control → human and system-enforced constraints

If trust falls below a threshold, the action must escalate.
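The equation above can be sketched in a few lines. This is an illustrative reading of the formula, not Vergil's actual API; the function names and the 0.0–1.0 normalization are assumptions.

```python
# Minimal sketch of Trust(action) = Confidence * Context^-1 * Control.
# Names and scales are illustrative assumptions, not Vergil's real interface.

def trust_score(confidence: float, context_risk: float, control: float) -> float:
    """confidence:   how certain the AI is (0.0-1.0)
    context_risk: how risky or sensitive the situation is (> 0; higher = riskier)
    control:      strength of human/system-enforced constraints (0.0-1.0)
    """
    # Context enters inversely: riskier situations drive trust down.
    return confidence * (1.0 / context_risk) * control

def must_escalate(score: float, tau: float = 0.5) -> bool:
    # If trust falls below the threshold tau, the action must escalate.
    return score < tau
```

For example, a 0.9-confidence action in a 1.5-risk context with full control yields a trust score of 0.6, which clears a threshold of 0.5 and would not escalate.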


What it does

Vergil is a Zero-Trust Authorization Engine for AI systems that dynamically controls how actions are executed based on real-time conditions.

Instead of a single permission model, Vergil introduces graduated execution tiers:

  • 🟢 Autonomous Mode — safe, high-confidence actions execute instantly
  • 🟡 Delayed Mode — actions are queued with a cancellation window
  • 🟠 Verified Mode — explicit human approval required
  • 🔴 Consensus Mode — multi-party authorization for critical operations

Every action flows through a decision boundary:

$$ Execute \;\; \text{iff} \;\; Trust(action) \geq \tau $$

This transforms authorization from a binary system into a continuous trust evaluation.
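The four tiers and the decision boundary can be combined into a single routing step: a continuous trust score is bucketed into a graduated execution mode. The threshold values below are illustrative assumptions, not Vergil's actual configuration.

```python
# Sketch of the decision boundary: a continuous trust score maps onto the
# four graduated execution tiers. Threshold values are illustrative.
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = "autonomous"  # safe, high-confidence: execute instantly
    DELAYED = "delayed"        # queued with a cancellation window
    VERIFIED = "verified"      # explicit human approval required
    CONSENSUS = "consensus"    # multi-party authorization

def route(trust: float) -> Tier:
    # Lower trust -> higher control tier.
    if trust >= 0.8:
        return Tier.AUTONOMOUS
    if trust >= 0.5:
        return Tier.DELAYED
    if trust >= 0.2:
        return Tier.VERIFIED
    return Tier.CONSENSUS
```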


How we built it

Vergil is designed as an agentic middleware layer between AI intent and real-world execution.

Execution Pipeline

$$ Intent \rightarrow Trust \; Evaluation \rightarrow Tier \; Routing \rightarrow Secure \; Execution $$


Core System Design

1. Trust Engine
Computes real-time trust score using:

  • AI confidence
  • Action severity
  • Contextual signals

2. Tiered Execution System

  • Tier 1: Immediate execution (low risk)
  • Tier 2: Redis-based delayed execution (TTL + cancel window)
  • Tier 3: Auth0 step-up authentication (human approval)
  • Tier 4: Multi-party quorum (distributed trust)
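Tier 2's TTL-plus-cancel-window pattern can be sketched as follows. This is an in-memory stand-in for the Redis-backed queue: the production version would use Redis key expiry, while here `time.monotonic()` plays that role so the mechanics are visible. All class and method names are hypothetical.

```python
# In-memory stand-in for the Redis-based delayed-execution queue (Tier 2).
# The real system would set a key with a TTL in Redis; time.monotonic()
# stands in for key expiry here. Names are illustrative.
import time

class DelayQueue:
    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.pending = {}  # action_id -> (window_expires_at, action)

    def enqueue(self, action_id: str, action) -> None:
        # Start the cancellation window for this action.
        self.pending[action_id] = (time.monotonic() + self.window, action)

    def cancel(self, action_id: str) -> bool:
        # A human or policy can cancel any time before the window closes.
        return self.pending.pop(action_id, None) is not None

    def pop_ready(self) -> list:
        # Actions whose cancel window has elapsed are released for execution.
        now = time.monotonic()
        ready = [aid for aid, (exp, _) in self.pending.items() if exp <= now]
        return [self.pending.pop(aid)[1] for aid in ready]
```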

3. Security & Identity Layer

  • Auth0 OIDC for identity verification
  • Token Vault for scoped, revocable credentials

Tech Stack (Production Ready)

  • Backend: FastAPI (Python)
  • State Layer: Redis (Upstash)
  • Deployment: Google Cloud Run
  • Security: Auth0 OIDC + Token Vault

Dynamic Routing Function

$$ Tier = f(Trust(action)) $$

Lower trust → higher control tier.


Challenges we ran into

1. Modeling Trust Quantitatively

Translating abstract concepts like confidence and risk into a computable system required designing a normalized trust function.


2. Time-Based Execution

Implementing delayed execution with cancellation windows required reliable state transitions and TTL-based scheduling.


3. Distributed Approval Systems

Ensuring quorum-based approvals were atomic and tamper-resistant required careful concurrency control.
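The atomicity problem can be illustrated with a small quorum counter guarded by a lock, so that concurrent approvals cannot double-count or race past the threshold. This is a single-process sketch of the concurrency-control idea; the distributed version, and all names here, are assumptions.

```python
# Sketch of an atomic quorum check (Tier 4). A lock guards the approval set
# so concurrent approvals cannot double-count or both trigger execution.
# Single-process illustration only; names are hypothetical.
import threading

class QuorumApproval:
    def __init__(self, required: int):
        self.required = required
        self.approvers: set[str] = set()
        self._lock = threading.Lock()
        self.executed = False

    def approve(self, approver_id: str) -> bool:
        """Record one approval; return True iff this call reaches quorum."""
        with self._lock:
            if self.executed:
                return False  # quorum already reached; replays are rejected
            # Set semantics: the same approver cannot count twice.
            self.approvers.add(approver_id)
            if len(self.approvers) >= self.required:
                self.executed = True  # exactly one caller triggers execution
                return True
            return False
```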


4. Avoiding Over-Engineering

We had to balance flexibility against simplicity, keeping the system powerful without letting it become opaque.


Accomplishments that we're proud of

  • 🚀 Created a continuous trust-based authorization model
  • 🧠 Designed a signature equation defining AI trust boundaries
  • 🕵️ Built a multi-tier execution system with real-time control
  • 👥 Implemented multi-party consensus for critical actions
  • 🔐 Integrated Auth0 as a true enforcement layer, not just authentication

We demonstrated that:

$$ Control \Rightarrow Predictability \Rightarrow Trust $$


What we learned

  • Authorization must evolve from static permissions to dynamic evaluation
  • AI confidence alone is insufficient without contextual awareness
  • Human trust scales better through distribution (quorum) than centralization
  • Systems must be explainable to be adoptable

A key takeaway:

$$ Intelligence - Governance = Uncontrolled \; Risk $$


What's next for Vergil

Vergil is designed to grow into a foundation for AI governance infrastructure.

Future directions:

  • 🧬 Multi-agent ecosystems with isolated trust domains
  • 🏥 Healthcare systems requiring layered authorization
  • 💰 Financial operations with enforced consensus
  • 🌐 SDK for integrating trust-based execution into any AI system

Long-term vision:

$$ AI \rightarrow Governed \rightarrow Trusted \rightarrow Autonomous \; at \; Scale $$

Vergil aims to ensure that as AI systems grow more powerful, they also become accountable, controllable, and safe.


Blog Post

While building Vergil, the biggest shift was not technical — it was conceptual.

Initially, we approached the system like a traditional backend problem: define APIs, manage permissions, and enforce access control. But as soon as we introduced autonomous decision-making, the entire model broke down. Static permissions were simply not expressive enough to handle real-world uncertainty.

The turning point came when we reframed the problem around trust instead of permission. Instead of asking “Is this allowed?”, we started asking “Should this happen right now, under these conditions?” That single shift changed the entire architecture.

Designing the trust function was one of the most challenging parts. It required combining subjective AI confidence with objective system risk in a way that could be computed, compared, and acted upon in real time. Once that was in place, the tiered system emerged naturally — low-risk actions flowed freely, while high-risk actions required progressively stronger guarantees.

Another major learning was around human involvement. Traditional systems rely on a single approval step, but that creates a single point of failure. By introducing quorum-based approval, we distributed trust across multiple actors, making the system significantly more resilient.

Finally, integrating Auth0’s Token Vault as an enforcement layer — rather than just an authentication provider — allowed us to anchor the entire system in verifiable identity and scoped access.

Vergil ultimately represents a shift from permission-based systems to trust-governed systems, where every action is evaluated, justified, and controlled before it is executed.
