Inspiration

AI agents are becoming increasingly powerful, but current authorization systems are still static and scope-based. This creates a gap: agents can execute actions that technically fit within their permissions but do not align with the user’s true intent.

We wanted to answer a simple but critical question: should an AI agent be allowed to perform a given action at all?


What it does

Vulcan Guard is an intent-aware authorization layer for AI agents.

Instead of blindly issuing access tokens, it evaluates the intent and scope of each request and produces one of three outcomes:

  • ALLOW → Token is issued and the protected API is called
  • STEP-UP → Additional user confirmation is required
  • BLOCK → Request is rejected, no token issued

This ensures that token acquisition itself is gated by intelligent decision-making.
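The three outcomes can be sketched as a small gate that runs before any token is minted. This is an illustrative sketch, not the actual Vulcan Guard API; the function and field names are assumptions.

```javascript
// Illustrative three-outcome gate; names are hypothetical, not Vulcan Guard's API.
const Decision = Object.freeze({ ALLOW: "ALLOW", STEP_UP: "STEP-UP", BLOCK: "BLOCK" });

// Act on a decision *before* any token exists: only ALLOW ever mints one.
function gateTokenRequest(decision, issueToken, askUser) {
  switch (decision) {
    case Decision.ALLOW:
      return { token: issueToken(), status: "issued" };          // token issued, API call proceeds
    case Decision.STEP_UP:
      return { token: null, status: "confirmation-required", prompt: askUser() };
    case Decision.BLOCK:
    default:
      return { token: null, status: "rejected" };                // no token leaves the system
  }
}
```

The key property is that on STEP-UP and BLOCK the token-issuing callback is never invoked, so a rejected request cannot leak credentials.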


How we built it

We built a lightweight decision layer using Node.js and Express, integrated with Auth0 for token management.

The system analyzes incoming action requests and classifies them based on:

  • intent clarity
  • scope of access
  • ambiguity or contradiction

The decision result determines whether a token is issued or not.
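One way to sketch a classifier over those three criteria is a short rule chain. The rules, scope names, and negation patterns below are assumptions for illustration, not the deployed decision logic.

```javascript
// Illustrative rule-based classifier; the real decision model may differ.
const BROAD_SCOPES = new Set(["*", "admin", "export:all"]);   // hypothetical scope names
const NEGATION = /\b(do not|don't|never)\b/;                  // crude negation cues

function classifyRequest({ action, scope }) {
  const text = String(action || "").trim().toLowerCase();
  if (!text) return "BLOCK";                      // no discernible intent
  if (NEGATION.test(text)) return "BLOCK";        // stated intent contradicts the action
  if (BROAD_SCOPES.has(scope)) return "STEP-UP";  // broad access: confirm with the user
  return "ALLOW";                                 // clear, narrow request
}
```

In practice these checks would be ordered from most to least restrictive, so an ambiguous or contradictory request can never be promoted to ALLOW by a later rule.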

The application is deployed on Render and provides a live interactive demo.
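The request flow can be sketched as an Express-style handler that decides first and mints a token only on ALLOW. Express and the Auth0 call are stubbed out here; a real integration would use the Auth0 SDK, and all names are illustrative.

```javascript
// Sketch of the flow: decide first, mint a token only on ALLOW.
// `decide` and `mintToken` are hypothetical dependencies (Auth0 is stubbed).
function makeAuthorizeHandler(decide, mintToken) {
  return function authorize(req, res) {
    const decision = decide(req.body);            // "ALLOW" | "STEP-UP" | "BLOCK"
    if (decision === "ALLOW") {
      return res.status(200).json({ decision, token: mintToken(req.body) });
    }
    if (decision === "STEP-UP") {
      return res.status(401).json({ decision, token: null, next: "user-confirmation" });
    }
    return res.status(403).json({ decision, token: null });
  };
}
// Illustrative wiring:
// app.post("/authorize", express.json(), makeAuthorizeHandler(decide, mintToken));
```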


Challenges we ran into

One of the main challenges was designing a system that can distinguish between:

  • safe, narrow actions
  • ambiguous requests
  • overly broad or risky operations

Handling edge cases like negated instructions (e.g. “do not read”) required careful consideration to avoid unsafe authorization decisions.
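To illustrate the pitfall: a naive keyword matcher sees "read" inside "do not read" and authorizes it. A hedged sketch of the contrast, with negation cues that are assumptions (a production system would need real parsing):

```javascript
// Naive matcher: wrongly authorizes "do not read my files" because it contains "read".
function naiveAllowsRead(instruction) {
  return instruction.toLowerCase().includes("read");
}

// Negation-aware check: a verb preceded by a negation cue is treated as forbidden.
// The cue list and the 0-2 intervening words are illustrative assumptions.
function negationAwareAllowsRead(instruction) {
  const text = instruction.toLowerCase();
  if (!text.includes("read")) return false;
  return !/\b(do not|don't|never|avoid)\s+(\w+\s+){0,2}read\b/.test(text);
}
```

Failing closed on any detected negation is the safer default here: a false BLOCK costs a confirmation round-trip, while a false ALLOW performs the exact action the user prohibited.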


Accomplishments that we're proud of

We successfully built a working intent-aware authorization layer and deployed it as a live demo.

We demonstrated that token acquisition can be dynamically controlled based on intent rather than static permissions.


What we learned

We learned that authorization for AI agents cannot rely solely on static permissions.

Security needs to move toward intent-aware and context-sensitive decision making, especially as agents become more autonomous.


What's next for Vulcan Guard

We plan to extend the decision engine with deeper semantic analysis and adaptive risk scoring.

Future versions could integrate with enterprise systems and provide real-time policy learning based on user behavior.


Bonus Blog Post

During the development of Vulcan Guard, we explored the limitations of traditional authorization systems in the context of AI agents.

Most systems rely on static permissions and predefined scopes. However, AI agents introduce a new challenge: they can generate actions dynamically, sometimes in ways that do not align with the user’s true intent.

We focused on designing a system that evaluates intent before granting access. One key challenge was handling ambiguous or negated instructions, such as “do not read” or overly broad requests like “export all data.”

Through iterative testing, we developed a decision model that categorizes actions into three outcomes: allow, step-up, and block.

This project demonstrates a shift toward intent-aware security, where access control is not only about permissions, but also about understanding the context and meaning behind each request.

Built With

  • Node.js
  • Express
  • Auth0
  • Render
