Inspiration
We kept seeing the same tension: AI agents only feel useful when they can reach real systems (email, repos, chat), but putting OAuth tokens next to models, tool traces, or browser storage scales badly. One bad prompt or log line should not mean “the agent effectively owns your GitHub.”
We wanted something that felt fair to users: delegation with boundaries, not all-or-nothing API access. Auth0 already solves human identity and provider consent well; the missing piece for us was a runtime gate that says “this call is allowed for this mission, right now, under these policies” before any provider API runs. That became Tether.
What it does
Tether is a mission-scoped authorization and execution layer for agents.
- Users sign in with Auth0 and connect GitHub, Gmail / Google Calendar, and Slack through Auth0-driven OAuth. Provider tokens are exchanged and refreshed server-side, encrypted at rest, and never handed to the agent.
- Users describe work in natural language; the app proposes a mission manifest (what the mission will and will not do), with a skeptical intent audit pass.
- The user approves the mission (including from a mobile-friendly flow). While a mission is active, agents call MCP or REST (`agent-action`) with the user's Auth0 JWT and a mission id only.
- Every action is checked against the mission, global policies, and step-up re-authentication for configured high-risk tools (for example, destructive or bulk operations). Outcomes land in an execution ledger so allows and denials are visible.
In short: the agent asks; Tether decides; secrets stay on the server.
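To make the "agent asks, Tether decides" rule concrete, here is a minimal sketch of a deny-by-default mission gate. The type and field names (`allowedTools`, `highRiskTools`, and so on) are illustrative, not Tether's actual schema:

```typescript
// Hypothetical sketch of a deny-by-default mission gate (names are illustrative).
type Mission = {
  status: "draft" | "active" | "completed" | "revoked";
  allowedTools: string[];   // tools this mission may invoke
  highRiskTools: string[];  // tools that additionally require step-up
};

type Decision =
  | { allow: true; stepUpRequired: boolean }
  | { allow: false; reason: string };

function authorizeToolCall(mission: Mission, tool: string): Decision {
  // Deny unless the mission is currently active...
  if (mission.status !== "active") {
    return { allow: false, reason: `mission is ${mission.status}` };
  }
  // ...and the tool is explicitly in scope for this mission.
  if (!mission.allowedTools.includes(tool)) {
    return { allow: false, reason: `tool ${tool} not in mission scope` };
  }
  // High-risk tools still pass, but force step-up re-authentication first.
  return { allow: true, stepUpRequired: mission.highRiskTools.includes(tool) };
}
```

Every decision, allow or deny, would then be appended to the execution ledger so the user can audit what the agent attempted.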
How we built it
- Frontend: React (Vite), Auth0 via `@auth0/auth0-react`, Supabase client with the Auth0 access token for RLS-aligned data access.
- Backend: Supabase Edge Functions (Deno) for the sensitive paths: `auth0-token-vault` (connect, reauth, callback, token exchange with Auth0), `agent-action` and `mcp-server` for tool execution, plus manifest/policy/nudges generation, mission approval, step-up helpers, and user settings.
- Data: PostgreSQL with RLS, encrypted rows for connected-account secrets, missions, policies, and execution history.
- Auth model: JWT verification against Auth0 JWKS inside functions; provider access tokens refreshed through Auth0's token endpoint when needed (`oauth-token` helper).
We documented the Token Vault–aligned flow for judges in `judges-token-vault-proof.md`.
Challenges we ran into
- Split-brain auth: The SPA uses Auth0, not Supabase Auth, so every Edge Function had to agree on issuer, audience, and what "authenticated" means for Realtime and RLS. Misaligned `VITE_AUTH0_AUDIENCE` and `AUTH0_AUDIENCE` secrets caused confusing 401s until we treated it as one checklist.
- OAuth callback ergonomics: Provider linking returns through a Supabase function URL; we added safe `returnPath` handling so users land back on the screen they started from (mission detail, approval, accounts).
- Demo vs. production: Judges need a crisp video, but live AI and live APIs can flake. Demo mode keeps real Auth0 and blocking behavior while stubbing selected provider results so the story stays reliable.
- Explaining “Token Vault”: The product story and the exact Auth0 dashboard knobs have to stay aligned so reviewers see delegated OAuth and vaulting, not a generic “we use Auth0 login” app.
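The `returnPath` handling mentioned above amounts to allow-listing redirect targets after the OAuth callback. A minimal sketch (route names are hypothetical, not Tether's actual routes):

```typescript
// Illustrative allow-list validation for a post-OAuth returnPath.
// Route prefixes below are hypothetical examples.
const ALLOWED_RETURN_PREFIXES = ["/dashboard", "/missions", "/accounts", "/approve"];

export function safeReturnPath(raw: string | null, fallback = "/dashboard"): string {
  if (!raw) return fallback;
  // Reject absolute URLs, protocol-relative tricks ("//evil.com"), and
  // backslash variants so the redirect can never leave the app.
  if (!raw.startsWith("/") || raw.startsWith("//") || raw.includes("\\")) {
    return fallback;
  }
  // Only allow paths under a known in-app prefix.
  const ok = ALLOWED_RETURN_PREFIXES.some(
    (p) => raw === p || raw.startsWith(p + "/") || raw.startsWith(p + "?"),
  );
  return ok ? raw : fallback;
}
```

The open-redirect checks matter here because the callback URL is attacker-influencable; anything not provably in-app falls back to a safe default.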
Accomplishments that we're proud of
- A coherent security story end to end: missions, policies, ledger, step-up, and no provider tokens in the agent path.
- MCP plus REST so both “chat agent” and scripted clients can use the same enforcement layer.
- Operator-grade UX for a hackathon scope: dashboard, connected accounts, mission lifecycle, settings, and a path to approve from mobile.
- `docs/judges-token-vault-proof.md`: a single place that maps claims to files and flows so technical judges can verify quickly.
What we learned
- Connect-time OAuth is not enough for agentic systems. Users need to see what runs when, and systems need deny-by-default execution checks on every tool call.
- Auth0 is strongest when it owns identity and delegated provider access, and the app owns authorization semantics (missions, policies, audit). Mixing those concerns makes demos brittle.
- Good operator docs (env checklists, smoke tests, judge proof) are as important as features when strangers have to run your stack.
What's next for Tether
- More integrations and action definitions with the same mission + policy pattern.
- Tighter admin / enterprise stories: orgs, shared policies, export of audit logs.
- Deeper Auth0 for AI Agents alignment where it improves custody and refresh (always documented honestly next to the code).
- Hardening pass on rate limits, anomaly signals, and recovery flows when refresh or step-up fails.
Bonus Blog Post
Token Vault & agent authorization
I kept running into the same uncomfortable gap when I thought about “real” AI agents: they only become useful when they can touch your tools (email, repos, Slack), but I never wanted those OAuth tokens anywhere near a model context, a log line, or a browser bundle I don’t fully control. So I built Tether around a simple rule I actually believe: the agent can ask; the platform decides; the secrets stay server-side.
For me, the Token Vault angle isn’t buzzword bingo. It’s the shape of the problem Auth0 is trying to solve. I send people through normal OAuth consent via Auth0 for GitHub, Google (Gmail and Calendar), and Slack. The messy part (codes, refresh tokens, rotation) gets handled behind the SPA. I encrypt what has to sit in my database and only decrypt it inside Edge Functions when a mission and policy say the action is allowed. When I demo MCP or a REST client, I’m not handing anyone a Gmail token. I’m handing the system a user JWT and a mission id, and either the call goes out or it gets blocked and logged. That felt like the honest version of “authorized to act.”
What I’m proudest of is how it changes the story for the person using it. They see what the mission will and won’t do, they approve on purpose, and if something is scary (delete repo, bulk export), they hit step-up instead of the model “just trying.” I’m not claiming we solved agent safety in one repo, but I am arguing for a pattern I’d actually ship: identity and consent live where humans already trust them, and the agent never becomes the custodian of third-party keys.
If this resonates with the Auth0 community, I hope it’s as a practical reference: Token Vault thinking, for me, means custody and refresh belong in the identity layer, and every tool call still needs runtime authorization, not a one-time “connect and pray.”
Optional: timing intuition (LaTeX)
We treat mission enforcement as time-bounded: a tool call is only eligible while the mission is active and unexpired. You can think of eligibility in a simple window:
$$ \text{eligible}(t) = \mathbb{1}[\,\text{status} = \text{active}\,] \cdot \mathbb{1}[\,t_{\text{start}} \le t \le t_{\text{end}}\,] $$
where $\mathbb{1}[\cdot]$ is the indicator function: $1$ if the condition holds, $0$ otherwise. Tether adds policies, scopes, and step-up on top of that baseline.
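The window above translates almost line-for-line into code. A sketch, with timestamps as epoch milliseconds and field names chosen for illustration:

```typescript
// Direct translation of the eligibility window:
// eligible(t) = 1[status = active] · 1[tStart <= t <= tEnd]
type MissionWindow = { status: string; tStart: number; tEnd: number };

function eligible(m: MissionWindow, t: number): boolean {
  return m.status === "active" && m.tStart <= t && t <= m.tEnd;
}
```

In practice this baseline check runs first, and only then do the policy, scope, and step-up checks apply.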
Built With
- auth0
- edge-functions
- github-api
- google-gmail-calendar-api
- mcp
- postgresql
- react
- slack-api
- supabase
- tailwind-css
- tokenvault
- typescript
- vite