About the project
Most AI agents today feel too powerful in the wrong way: they can do useful things, but they often require broad, long-lived access to a user’s accounts. That tradeoff never felt right to me. I wanted to build an agent that could be genuinely helpful without becoming a black box with permanent access to someone’s digital life.
That idea became Sanctum: an AI agent that can read from tools like Gmail, GitHub, and Notion, but does so through Auth0 Token Vault instead of directly handling user credentials. The core principle behind the project is simple: the agent cannot directly access tokens. It only requests scoped access at the moment it needs to perform a specific action, and Auth0 handles the hard parts of identity, consent, and token lifecycle management.
What inspired me
I was inspired by the gap between what AI agents can do and what users are actually willing to trust them with. People want assistants that can summarize emails, inspect issues, draft replies, and help with workflows, but they also want clear boundaries.
I kept coming back to three questions:
- How do we make sure the agent never becomes the owner of a user’s credentials?
- How do we make permissions understandable instead of invisible?
- How do we stop high-risk actions from happening silently?
Those questions shaped the whole project.
What I built
Sanctum is a secure, user-controlled personal agent with three core guarantees:
- The agent cannot directly access tokens or store credentials itself.
- The user can see, understand, and revoke permissions clearly from the dashboard.
- Risky actions pause for approval and re-authentication before they are executed.
The app lets users connect services like Gmail, GitHub, and Notion through Auth0. Once connected, Sanctum can retrieve data using Token Vault-backed access, index that data into a private per-user retrieval store, and answer questions grounded in the user’s own context.
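The user-isolated part of that retrieval store can be sketched as follows. This is a minimal illustration, not Sanctum’s actual implementation: vectors are plain number arrays, and the embedding step (which Sanctum does locally) is assumed to have already happened.

```typescript
// Minimal sketch of a user-isolated retrieval store. Each user gets a
// fully separate index, so one user's data is never searched when
// answering another user's question.

type IndexedChunk = { text: string; vec: number[] };

const stores = new Map<string, IndexedChunk[]>();

function indexChunk(userId: string, chunk: IndexedChunk): void {
  const store = stores.get(userId) ?? [];
  store.push(chunk);
  stores.set(userId, store);
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the top-k most similar chunks for this user only.
function retrieve(userId: string, queryVec: number[], k: number): string[] {
  const store = stores.get(userId) ?? [];
  return store
    .map((c) => ({ text: c.text, score: cosine(c.vec, queryVec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.text);
}
```

The key design choice is that isolation lives in the data structure itself: there is no query path that can cross user boundaries.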
For actions that could change data, like sending an email or posting a GitHub comment, Sanctum does not act immediately. Instead, it stages the action, shows the user exactly what is about to happen, and requires explicit approval. If the session is no longer fresh, the user must re-authenticate before the action can continue.
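The stage-then-approve flow can be sketched like this. Names such as `StagedAction` and the five-minute freshness window are illustrative assumptions, not Sanctum’s real code or policy.

```typescript
// Sketch of the staged-approval flow for write actions: the agent can
// only stage; execution requires explicit approval on a fresh session.

type StagedAction = {
  id: string;
  description: string; // shown to the user verbatim before approval
  status: "pending" | "approved" | "rejected";
};

const MAX_SESSION_AGE_MS = 5 * 60 * 1000; // illustrative: re-auth after 5 min

const staged = new Map<string, StagedAction>();

// The agent never executes directly; it only stages.
function stageAction(id: string, description: string): StagedAction {
  const action: StagedAction = { id, description, status: "pending" };
  staged.set(id, action);
  return action;
}

// Approval requires a fresh session; otherwise the caller must
// re-authenticate (step-up) and try again.
function approveAction(id: string, sessionAgeMs: number): StagedAction {
  const action = staged.get(id);
  if (!action || action.status !== "pending") {
    throw new Error("no pending action with that id");
  }
  if (sessionAgeMs > MAX_SESSION_AGE_MS) {
    throw new Error("session not fresh: step-up authentication required");
  }
  action.status = "approved";
  return action;
}
```

Keeping the staged action as data, with a human-readable description, is what makes “shows the user exactly what is about to happen” possible in the UI.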
How I built it
I built the project with:
- Next.js 16
- Auth0 for AI Agents (Token Vault)
- Anthropic Claude for reasoning
- Local embeddings with @xenova/transformers
- A private per-user retrieval layer
- A permission dashboard and approval workflow for agent actions
The architecture is centered around Auth0:
- The user signs in and connects external services.
- Auth0 stores federated tokens in Token Vault.
- Sanctum requests a scoped token only when it needs to read or act.
- Retrieved data is indexed in a user-isolated store.
- The chat agent uses that context to answer questions.
- Write actions are staged and gated behind approval plus step-up authentication.
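The “scoped token only at the moment of use” step above can be sketched as follows. `TokenVault` here is a hypothetical interface standing in for the Auth0-backed exchange; the method and connection names are illustrative and do not come from the actual Auth0 SDK.

```typescript
// Sketch: the agent holds a reference to a vault abstraction, never to
// tokens. A token is requested per call, with the narrowest scope that
// works, used immediately, and never stored by the agent.

interface TokenVault {
  getScopedToken(
    userId: string,
    connection: string,
    scopes: string[]
  ): Promise<string>;
}

async function readGmailMessages(
  vault: TokenVault,
  userId: string
): Promise<string> {
  // Hypothetical connection and scope names for illustration only.
  const token = await vault.getScopedToken(userId, "google-oauth2", [
    "gmail.readonly",
  ]);
  // The token lives only in this call frame.
  return `fetched with short-lived token (${token.length} chars)`;
}
```

Because the token never escapes the call that uses it, revoking the grant in Auth0 immediately cuts off the agent: there is no cached credential to fall back on.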
In a way, the project is about reducing the agent’s authority surface. Instead of assuming “agent = trusted operator,” I designed around:
$$ \text{Agent power} = \text{minimum necessary scope} + \text{explicit user consent} $$
Challenges I faced
The biggest challenge was making the security model feel real, not just described in documentation.
A few specific challenges:
Designing around token absence
I had to build the flow so the app worked even though the agent never owns credentials directly. That meant thinking carefully about when tokens are requested, how failures are handled, and how to keep the UX smooth.
Making permissions visible to users
It’s easy to say “user-controlled,” but harder to present permissions in a way that users can actually understand. I wanted connected services, revocation, and action boundaries to be obvious in the interface.
Separating read and write trust levels
Reading context is one kind of permission. Acting on behalf of a user is another. A major challenge was creating a staged approval flow where risky actions pause for review and can require fresh authentication before execution.
Keeping the project production-aware
For a hackathon prototype, it is tempting to stop at a demo. I wanted the project to reflect real product thinking around auditability, revocation, isolation, and explicit action boundaries.
What I learned
This project taught me that building secure AI agents is not just about model quality. It is about identity, consent, and control.
I learned that the most important part of an agent system may not be the model prompt or tool call. It may be the layer that answers:
- Who granted this permission?
- What exactly is the agent allowed to do?
- Can the user revoke that access immediately?
- Does this action deserve another confirmation step?
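Those four questions suggest a concrete data shape. The field names below are an assumption for this sketch, not Sanctum’s schema, but they show how each question maps to a field or a check.

```typescript
// Illustrative grant record: who granted it (the user, at grantedAt),
// what exactly is allowed (scopes), and whether it has been revoked.

type Grant = {
  userId: string;
  connection: string; // e.g. "github"
  scopes: string[];   // what exactly the agent may do
  grantedAt: number;  // when the user granted it
  revoked: boolean;   // revocation takes effect immediately
};

const grants: Grant[] = [];

function grantAccess(
  userId: string,
  connection: string,
  scopes: string[]
): Grant {
  const g: Grant = {
    userId,
    connection,
    scopes,
    grantedAt: Date.now(),
    revoked: false,
  };
  grants.push(g);
  return g;
}

function revoke(userId: string, connection: string): void {
  for (const g of grants) {
    if (g.userId === userId && g.connection === connection) g.revoked = true;
  }
}

// Every tool call checks the grant at the moment of use, so revocation
// is immediate rather than waiting for a cached token to expire.
function isAllowed(
  userId: string,
  connection: string,
  scope: string
): boolean {
  return grants.some(
    (g) =>
      g.userId === userId &&
      g.connection === connection &&
      !g.revoked &&
      g.scopes.includes(scope)
  );
}
```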
I also learned how powerful Auth0 Token Vault is as a primitive for agent systems. It changes the relationship between the app and the credential: the app no longer has to be the place where trust accumulates.
Why this matters
Sanctum is my attempt to show that agentic software does not have to choose between usefulness and safety. An agent can be powerful because it has boundaries, not in spite of them.
That is the future I wanted to prototype: an agent that is authorized to act, but only within lines the user can see, understand, and control.
Bonus Blog Post
Building Sanctum taught me that the hardest part of agentic AI is not the model, it is trust. It is easy to make an assistant that can summarize data or draft an action. It is much harder to build one that can act on a user’s behalf without turning into a security risk. That challenge is what pushed me toward Auth0 for AI Agents and Token Vault.
My goal with Sanctum was to build an AI agent that could read from connected services like Gmail and GitHub, but never directly own long-lived user credentials. Instead of storing provider tokens inside the app, I used Token Vault as the secure boundary. The agent requests access only when it needs to fetch data or prepare an action. That design ended up shaping the whole product: connected accounts, per-user indexing, approval-gated write actions, and revocation all became first-class features rather than afterthoughts.
The most interesting technical hurdle was understanding the difference between ordinary social login and Connected Accounts for Token Vault. At first, I treated provider connections like normal identities, which caused confusing issues around token retrieval and account state. Once I aligned the project with Auth0’s Connected Accounts flow, the architecture became much clearer. I also ran into real-world issues that made the security story stronger: handling provider-specific refresh-token behavior, making step-up authentication work correctly for risky actions, and avoiding misleading UI states when indexing and chat ran across separate serverless instances.
What I’m most proud of is that Sanctum does not present security as invisible plumbing. The user can see what is connected, revoke access, index only their own data, and explicitly approve high-risk actions. Token Vault was not just a backend integration in this project. It became the foundation for a more honest and controllable agent experience.