Inspiration
I kept thinking about how AI agents are getting more powerful, but there's no real control over what they do. They can read emails, send replies, and trigger actions, yet there's no proper system to check whether an action is actually safe. That felt risky to me, so I wanted to build something that sits between the AI and the real world and decides what should be allowed and what shouldn't.
What it does
Guardian Agent is a control layer for AI agents. It analyzes actions such as reading emails or sending replies, assigns each one a risk level, and then decides whether to allow it, block it, or ask for user approval. High-risk actions require step-up authentication through Auth0, so nothing dangerous happens without the user verifying it.
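The allow/block/approve decision described above can be sketched as a small policy function. This is a minimal illustration, not Guardian Agent's actual code; the names (`POLICY`, `decideAction`) and the specific risk tiers are assumptions.

```javascript
// Hypothetical risk-to-verdict policy: names and tiers are illustrative only.
const POLICY = {
  low: "allow",       // e.g. reading an email
  medium: "approve",  // e.g. replying internally — needs user confirmation
  high: "step_up",    // e.g. sending externally — needs Auth0 step-up auth
};

function decideAction(action) {
  // Fail closed: anything with an unknown risk level is blocked.
  const verdict = POLICY[action.risk] ?? "block";
  return { action: action.name, risk: action.risk, verdict };
}
```

Failing closed on unrecognized risk levels is the key design choice here: a control layer should never default to executing an action it cannot classify.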
How I built it
I built the backend with Node.js and Express and connected it to APIs for agent logs and decision handling. The frontend is a dashboard built with Tailwind that shows inbox analysis, risk levels, and pending actions. Auth0 handles authentication and step-up verification, and an AI model classifies actions and generates the reasoning behind each decision.
Challenges I ran into
One big challenge was making the system feel real instead of relying on static data. Handling authentication flows properly, especially the step-up logic, was also confusing at first. Another issue was keeping the UI understandable, because a lot is happening at once: risk scoring, permissions, and logs.
Accomplishments that I'm proud of
The biggest thing is that the full flow actually works end to end, from analyzing an email to deciding on an action to enforcing approval with Auth0. The dashboard also clearly shows what's happening, which makes the project easy to demo and understand.
What I learned
I learned that building AI systems is not just about intelligence but about control and safety. I also got better at structuring backend routes and handling authentication properly, and I saw how much UI clarity matters when explaining complex systems.
What's next for Guardian Agent
I want to make it work with real-time integrations like Gmail instead of demo data, and to improve the risk model with better AI reasoning. I'm also planning to extend it to other use cases, such as file access and automation tools.
Bonus Blog Post
Building Guardian Agent made me realize something uncomfortable about AI systems. We focus so much on what AI can do, but hardly anyone thinks about what it should be allowed to do.

At first, I wanted to create a smart email assistant that could read messages and suggest actions. That part was easy. The real challenge came when I considered execution. If an AI can send emails, forward invoices, or trigger workflows, what stops it from making a costly or permanent mistake? That's when my idea changed. Instead of building a smarter agent, I focused on creating a control layer around it.

The main technical challenge was designing the risk classification and decision flow. It wasn't just about labeling actions as low or high risk; I also needed to explain why. I had to make the system clear so a user could trust it. Integrating Auth0 for step-up authentication was another crucial piece, especially for high-risk actions like sending external emails.

One thing I didn't expect was how important the user interface would be. Clearly showing AI reasoning, confidence scores, and action states turned out to be just as vital as the backend logic.

In the end, Guardian Agent is less about automation and more about controlled independence. It's a step toward AI systems that don't just act fast but also act responsibly.
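Since a model's output can be malformed or overconfident, the classification step benefits from a normalization layer before anything is shown or executed. This is a sketch under assumptions: the field names (`risk`, `reasoning`, `confidence`) and the 0.7 confidence threshold are mine, not Guardian Agent's actual schema.

```javascript
// Normalize a model's classification before display or execution.
// Field names and the 0.7 threshold are assumptions for illustration.
function normalizeClassification(raw) {
  const risk = ["low", "medium", "high"].includes(raw.risk) ? raw.risk : "high";
  // Clamp confidence into [0, 1]; non-numeric values become 0.
  const confidence = Math.min(Math.max(Number(raw.confidence) || 0, 0), 1);
  return {
    risk,
    confidence,
    reasoning: raw.reasoning || "No reasoning provided.",
    // Anything above low risk, or any low-confidence call, needs a human.
    needsApproval: risk !== "low" || confidence < 0.7,
  };
}
```

Treating an unparseable or unfamiliar risk label as "high" is the same fail-closed principle as in the policy layer: the model's uncertainty becomes the user's decision, not the system's.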
Built With
- AI inference (GPT-based classification)
- Auth0 (authentication and step-up verification)
- Express.js
- GitHub
- Gmail API (integration)
- JavaScript
- n8n (workflow automation)
- Node.js
- RBAC/ABAC policy layer
- Real-time agent logging system
- REST APIs
- Tailwind CSS