Inspiration

In the modern cloud landscape, AI Agents are our greatest force multipliers—but they are also our greatest security liabilities. The current paradigm often requires granting an LLM permanent, high-privilege credentials to private infrastructure. This creates a catastrophic blast radius if the agent is compromised or hallucinates. We built Aegis Sentinel to solve this "Agentic Access Problem."

What it does

Aegis Sentinel is an autonomous cloud security auditor that identifies vulnerabilities like exposed S3 buckets or hardcoded keys. It evaluates risk using the following Zero-Trust logic:

$$\text{Remediation}(v) = \begin{cases} \text{BLOCK}, & \text{if } \text{Auth}(u) = 0 \\ \text{EXECUTE}(v, t_{vault}), & \text{if } \text{Auth}(u) = 1 \end{cases}$$

The AI agent can "see" threats, but it is strictly blocked from "touching" them until a human administrator completes a Step-Up Authentication challenge via Auth0. Only then is a scoped token released from the Auth0 Token Vault to execute the fix.
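The gate above can be sketched in a few lines of Node.js. This is a minimal illustration, not the actual Aegis Sentinel code: the function, vault, and field names are ours, and the toy vault stands in for the Auth0 Token Vault.

```javascript
// Illustrative sketch of the Zero-Trust gate: the agent's proposed fix
// only executes when the session carries a verified step-up (MFA) claim.
function remediate(vulnerability, session, vault) {
  // Auth(u) = 0: no completed step-up challenge -> hard block.
  if (!session.stepUpVerified) {
    return { action: 'BLOCK', reason: 'step-up authentication required' };
  }
  // Auth(u) = 1: release a scoped token from the vault and execute the
  // fix with it; the LLM itself never sees the raw credential.
  const token = vault.issueScopedToken(vulnerability.resource);
  return { action: 'EXECUTE', vulnerability: vulnerability.id, token };
}

// Toy vault standing in for the Auth0 Token Vault.
const vault = {
  issueScopedToken: (resource) => `scoped-token-for-${resource}`,
};

const finding = { id: 'S3-001', resource: 'public-bucket' };
console.log(remediate(finding, { stepUpVerified: false }, vault).action); // BLOCK
console.log(remediate(finding, { stepUpVerified: true }, vault).action);  // EXECUTE
```

The key design point is that the token is minted per-remediation and scoped to one resource, so even an approved action carries a minimal blast radius.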

How we built it

  • Intelligence Layer: Powered by Gemini 1.5 Flash for high-speed threat analysis and blast-radius evaluation.
  • Identity & Vaulting: Auth0 for AI Agents handles the entire session lifecycle, including the secure storage and release of GitHub PATs via the Token Vault.
  • Backend: A Node.js/Express orchestration layer that manages the handshake between Gemini and the Auth0 gateway.
  • UI: A premium, Glassmorphic Dashboard built with vanilla HTML/CSS to ensure a lightweight and visually stunning experience.
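To make the "blast-radius evaluation" step concrete, here is a hedged sketch of the kind of scoring heuristic the intelligence layer could apply; the weights, field names, and formula are illustrative assumptions, not the project's actual Gemini-driven logic.

```javascript
// Illustrative blast-radius scoring: riskier findings surface first.
const WEIGHTS = { 'public-s3-bucket': 8, 'hardcoded-key': 9, 'open-port': 5 };

function blastRadius(finding) {
  const base = WEIGHTS[finding.type] ?? 1;
  // Production resources and widely-depended-on assets amplify the radius.
  const envFactor = finding.env === 'prod' ? 2 : 1;
  const scopeFactor = 1 + (finding.dependents ?? 0) / 10;
  return base * envFactor * scopeFactor;
}

const findings = [
  { id: 'F1', type: 'hardcoded-key', env: 'prod', dependents: 5 },
  { id: 'F2', type: 'open-port', env: 'dev', dependents: 0 },
];
findings.sort((a, b) => blastRadius(b) - blastRadius(a));
console.log(findings.map((f) => f.id)); // riskiest first
```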

Challenges we ran into

The primary hurdle was state synchronization. We had to "pause" the AI's internal state machine while the user was redirected to the Auth0 domain for MFA, ensuring that the context wasn't lost when the user returned to the dashboard to authorize the remediation.

Accomplishments that we're proud of

  • Successfully implemented a true Zero-Trust boundary where the LLM never sees the raw API keys it uses to fix vulnerabilities.
  • Created a real-time Terminal Audit Log that provides an immutable trail of who authorized what AI action and when.
  • Developed a high-fidelity UI that proves security tools can have a "premium" enterprise feel without using heavy frameworks.

What we learned

We learned that Identity is the new perimeter for AI. By focusing on Asynchronous Authorization, we found a way to harness the speed of autonomous AI remediation without sacrificing the safety of human-in-the-loop boundaries.

What's next for Aegis Sentinel

We plan to expand the Auth0 Token Vault integration to support multi-cloud environments (AWS/Azure/GCP) and implement Fine-Grained Authorization (FGA) to restrict agents to specific resource-level permissions based on the user's Auth0 organizational role.
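The planned FGA model boils down to relationship tuples of (user, relation, object) that gate which resources an agent may touch on a user's behalf. The toy store below mimics the check an FGA service such as Auth0 FGA / OpenFGA would perform; the tuples, relation names, and helper functions are all hypothetical.

```javascript
// Illustrative relationship-tuple store: user#relation@object.
const tuples = new Set([
  'user:alice#admin@bucket:prod-logs',
  'user:bob#auditor@bucket:prod-logs',
]);

function check(user, relation, object) {
  return tuples.has(`${user}#${relation}@${object}`);
}

// The agent inherits only the authorizing user's resource-level rights:
// remediation requires the admin relation, not merely read access.
function canRemediate(user, resource) {
  return check(user, 'admin', resource);
}

console.log(canRemediate('user:alice', 'bucket:prod-logs')); // true
console.log(canRemediate('user:bob', 'bucket:prod-logs'));   // false (auditor only)
```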

✍️ Bonus Blog Post: The Road to Zero-Trust AI

Building Aegis Sentinel wasn't just a technical exercise; it was an exploration of a fundamental shift in how we trust autonomous systems. When we first sat down to tackle the "Authorized to Act" challenge, our biggest hurdle wasn't the AI—it was the blast radius. We realized that the more capable an AI Agent becomes, the more dangerous it is to hold permanent, "hot" credentials.

Our technical journey reached a turning point when we integrated the Auth0 Token Vault. The "Aha!" moment came when we realized we could separate the agent's intelligence from its authority. By vaulting our GitHub PATs, we ensured that the Gemini agent could propose a code fix, but it was effectively "armless" until a human verified their identity via a Step-Up challenge. This "Asynchronous Authorization" flow was the most difficult part to synchronize, as it required maintaining the AI's auditing context while the user was redirected to the Auth0 Universal Login domain.

We are incredibly proud of the Glassmorphic UI we built to visualize this flow. Seeing the "Token Released" log hit the terminal after a successful MFA check feels like the future of secure DevOps. We’ve moved from a world of "Permanent Access" to "Just-in-Time Authority." This project proved to us that secure AI remediation isn't a tradeoff between speed and safety—it’s about having a vault that only opens when the right person is in the room.

Built With

auth0 · css3 · express.js · gemini · html5 · javascript · node.js
