Inspiration
When you hook an AI agent up to your Google Calendar or GitHub, it gets full access. There's no way to tell it "you can read my calendar but don't you dare create events" or "list my repos but stay away from my pull requests." I wanted to fix that. Users should have real, enforceable control over what agents do on their behalf.
What it does
AgentGate lets users write Permission Contracts, which are behavioral agreements that an AI agent has to follow at runtime.
You configure permissions through a visual Contract Builder with three levels per action: Allowed, Requires Approval, or Denied. Once you sign, the contract gets hashed with SHA-256 and versioned. From that point on, every single tool call the agent tries to make passes through a Contract Guard that checks it against your signed contract before anything executes. If something's blocked, it gets stopped right there and logged to a live audit trail.
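The actual contract format isn't shown here, but the signing step could be sketched like this (the Contract shape, rule names, and three-level Decision type are illustrative, not AgentGate's real types):

```typescript
import { createHash } from "node:crypto";

// Illustrative contract shape: one decision per tool action.
type Decision = "allowed" | "requires_approval" | "denied";

interface Contract {
  version: number;
  rules: Record<string, Decision>;
}

// Hash the serialized contract so any edit produces a new signature,
// letting the guard verify it is enforcing the version the user signed.
function signContract(contract: Contract): string {
  return createHash("sha256").update(JSON.stringify(contract)).digest("hex");
}

const v1: Contract = {
  version: 1,
  rules: {
    "calendar.read": "allowed",
    "calendar.create": "denied",
    "github.comment": "requires_approval",
  },
};

const signature = signContract(v1); // 64-char hex digest
```

Because the hash covers the whole serialized contract, bumping the version or flipping a single rule yields a different signature.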
Here's what that looks like in practice with the default contract:
- "What's on my calendar today?" works fine: the agent fetches your events
- "Create a meeting for tomorrow" gets blocked, and a red violation card shows up in the chat
- "Post a comment on PR #23" triggers the approval flow: the agent shows you a preview and waits for permission
- "Merge PR #21" is flat-out denied, blocked and logged
How we built it
Auth0 for AI Agents handles authentication and Token Vault manages the OAuth tokens for Google Calendar and GitHub. On top of that sits the Contract Guard middleware, which wraps every tool with enforcement logic before the LLM can execute anything. The chat layer uses Vercel AI SDK v6 with streaming and tool call interception. The frontend is Next.js 16 with App Router, React 19, and Tailwind CSS v4. Contracts are signed with SHA-256 and stored per user with versioning and full audit logging.
The key architectural decision was making sure the contract check happens before any API call. The agent never gets a chance to break the rules because the guard replaces the tool's execute function with enforcement logic while keeping the schema intact so the model still knows what tools exist.
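That wrapping pattern can be sketched in a few lines (names and types are illustrative, not the actual AgentGate or AI SDK interfaces, and real tool execution would be async):

```typescript
type Decision = "allowed" | "requires_approval" | "denied";

interface Tool {
  description: string; // stays visible to the model
  execute: (input: unknown) => unknown;
}

interface Violation {
  violation: true;
  tool: string;
  decision: Decision;
}

// Wrap a tool so the contract decision is checked before the real
// execute runs. Only execute is replaced; the description (and, in
// the real system, the input schema) is left intact so the model
// still knows the tool exists.
function guardTool(
  name: string,
  tool: Tool,
  decide: (toolName: string) => Decision,
): Tool {
  return {
    ...tool,
    execute: (input: unknown) => {
      const decision = decide(name);
      if (decision === "allowed") return tool.execute(input);
      // Denied or approval-required: return a structured violation
      // object instead of throwing, so the streaming response survives.
      const result: Violation = { violation: true, tool: name, decision };
      return result;
    },
  };
}
```

In the real system the "requires_approval" branch would pause for user confirmation rather than just reporting the decision; this sketch collapses both non-allowed paths into a violation result.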
Challenges we ran into
AI SDK v6 had breaking changes from v5 that weren't well documented. parameters became inputSchema, maxSteps got removed entirely, and the tool result structure changed. Figuring that out ate a lot of time.
Next.js 16 renamed middleware.ts to proxy.ts with a different export name. The Auth0 SDK docs still referenced the old convention so we had to dig through the Next.js source docs to figure out what changed.
The hardest part was designing the Contract Guard to intercept tool calls without breaking the streaming response flow. You can't just throw an error when a tool is denied because that kills the whole stream. Instead we had to replace the execute function per tool so denied calls return structured violation objects that the chat UI can render inline.
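Assuming a violation object shaped like the one the guard returns, the chat UI's rendering branch reduces to something like this (hypothetical types, not the actual component code):

```typescript
// Hypothetical per-tool-call result union the chat UI receives.
type GuardResult =
  | { kind: "ok"; data: unknown }
  | { kind: "violation"; decision: "denied" | "requires_approval"; tool: string };

// Pick the inline card to render: red for denied, amber for
// approval-required, no card for a normal tool result.
function cardFor(result: GuardResult): "red" | "amber" | null {
  if (result.kind !== "violation") return null;
  return result.decision === "denied" ? "red" : "amber";
}
```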
Accomplishments we're proud of
The enforcement layer works without any changes to the AI SDK or Auth0 SDK. It's a pure middleware pattern that any developer could drop into their own project.
Contract violations show up as inline cards in the chat (red for denied, amber for needs approval) instead of error modals, which makes the whole thing feel like the agent is actually respecting your boundaries rather than crashing.
The audit dashboard polls and updates live so you can watch every guard decision as it happens. And adding a new service is just a matter of adding rules to the contract type and tool mappings to the guard.
What we learned
Auth0's Token Vault is solid for managing third-party OAuth tokens. But the real gap in AI agent security isn't authentication, it's behavioral authorization. Users need to define what an agent can do, not just which services it can access. OAuth scopes say "this app can access your calendar." Permission Contracts say "this agent can read your calendar but cannot create events." That's a different problem and nobody was solving it.
What's next for AgentGate
- Persistent contract storage backed by a real database
- Multi-party contracts where team admins set org-wide policies
- Contract templates for common setups (read-only analyst, full access developer, etc.)
- A real approval flow with user confirmation before execution
- Publishing the Contract Guard as a standalone middleware package
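As a sketch of what one of those templates might look like, here is a hypothetical "read-only analyst" preset (action names and the Decision type are illustrative):

```typescript
type Decision = "allowed" | "requires_approval" | "denied";

// Hypothetical read-only analyst template: read/list actions allowed,
// every write-style action denied outright.
const readOnlyAnalyst: Record<string, Decision> = {
  "calendar.read": "allowed",
  "calendar.create": "denied",
  "calendar.update": "denied",
  "github.listRepos": "allowed",
  "github.comment": "denied",
  "github.merge": "denied",
};
```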
Bonus Blog Post
Building AgentGate made me rethink what "authorization" actually means when an AI agent is involved. Going in, I figured the hard part would be OAuth plumbing: getting tokens, managing scopes, handling refreshes. Auth0's Token Vault took care of all that in about 20 lines of config. The actual hard problem turned out to be something I hadn't seen anyone tackle yet: letting users define behavioral rules and having those rules enforced in real time.
The thing is, scope-based authorization and behavioral authorization are two completely different problems. OAuth scopes tell you "this app can access your calendar." A Permission Contract tells you "this agent can read your calendar but cannot create events, and has to ask before modifying anything." That distinction matters a lot more with AI agents than it does with traditional apps, because agents make autonomous decisions about which tools to call based on conversation context.
The trickiest technical problem was intercepting tool calls inside the Vercel AI SDK's streaming pipeline without breaking the response. The Contract Guard had to wrap every tool before the LLM even saw it, swapping out the execute function with enforcement logic while leaving the schema alone so the model still knew what tools were available. Getting that working with AI SDK v6 (which had minimal migration docs and several breaking changes from v5) took a bunch of iterations.
What caught me off guard was how natural the UX felt once everything clicked. Violations show up as colored cards right in the chat thread, red for denied, amber for needs approval, and the agent acknowledges them in its response. It doesn't feel like something broke. It feels like the agent is actually listening to the rules you set. I think that's the experience every AI agent should aim for.
Built With
- auth0-for-ai-agents
- auth0-token-vault
- next.js
- openai
- react
- tailwind
- typescript
- vercel-ai-sdk