Inspiration

AI agents can already read repositories, send emails, and modify production systems, but most of them still rely on long-lived API keys hidden in .env files. That means one misconfigured agent can act with far more power than it should, and teams often have no clear record of what happened, why it happened, or whether a human truly approved it.

We wanted to build the missing safety layer: an authorization gateway that sits between the agent and real-world APIs, forces every action through policy, uses Auth0 to issue delegated credentials only when needed, and keeps a human in the loop for high-risk actions.

What it does

OpenClaw Airlock is an authorization gateway for AI agents. Instead of letting the agent call GitHub, Gmail, or Slack directly, the agent submits a structured intent such as github_merge_pr. The Airlock classifies the intent into one of four tiers:

  • GREEN: safe reads, auto-executed
  • AMBER: low-risk writes, auto-executed with full audit
  • RED: high-impact writes, paused until a human re-authenticates and approves
  • BLOCKED: permanently denied
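The tier lookup can be sketched roughly like this. The intent names and rules below are illustrative stand-ins, not our production policy table; the one behavior worth noting is that unknown intents fail closed:

```typescript
// Hypothetical sketch of the Airlock's risk-tier lookup.
type Tier = "GREEN" | "AMBER" | "RED" | "BLOCKED";

const POLICY: Record<string, Tier> = {
  github_read_issue: "GREEN",    // safe read: auto-executed
  github_add_label: "AMBER",     // low-risk write: auto-executed, fully audited
  github_merge_pr: "RED",        // high-impact write: paused for human approval
  github_delete_repo: "BLOCKED", // permanently denied
};

// Anything not on the allowlist is blocked by default.
function classify(intent: string): Tier {
  return POLICY[intent] ?? "BLOCKED";
}
```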

For allowed actions, OpenClaw Airlock uses Auth0 Token Vault to exchange the user's Auth0 token for a short-lived provider token. That delegated token is used for exactly one allowlisted API call and then discarded immediately.
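The execute-once-then-discard pattern looks roughly like the sketch below. The function names (`executeOnce`, `getToken`) are illustrative; the point is that the delegated token only ever exists inside one function scope and is dropped in a `finally` block, so it cannot be persisted or reused:

```typescript
// Illustrative one-shot executor: the delegated provider token is fetched
// just-in-time, used for exactly one allowlisted API call, and discarded.
type ProviderCall = (token: string) => Promise<unknown>;

async function executeOnce(
  getToken: () => Promise<string>, // e.g. the Token Vault exchange
  call: ProviderCall               // e.g. one allowlisted GitHub request
): Promise<unknown> {
  let token: string | null = await getToken(); // issued at the last moment
  try {
    return await call(token); // exactly one call
  } finally {
    token = null; // discarded immediately, never stored
  }
}
```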

A real-time Authorization X-Ray dashboard shows the whole chain as it happens: the matched policy rule, approval state, scopes, token lifecycle, and append-only audit log. In our demo, the system reads GitHub issues, adds labels, pauses a pull request merge for Auth0 step-up approval, and permanently blocks destructive actions like delete_repo.

How we built it

We built the backend with Node.js, Express, Prisma, PostgreSQL, and Socket.io. The dashboard uses Next.js 14, Tailwind CSS, Framer Motion, and Auth0. The agent is a TypeScript LangChain runner with tool definitions that generate intents instead of direct API calls.
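Independent of the exact LangChain version, the tool pattern reduces to handlers that return structured intents instead of performing side effects. The `AirlockIntent` shape and `mergePrTool` name below are illustrative, not our exact schema:

```typescript
// Sketch: a tool handler emits an intent for the Airlock to judge,
// rather than calling the GitHub API itself.
interface AirlockIntent {
  action: string;                  // e.g. "github_merge_pr"
  params: Record<string, unknown>; // validated again server-side
}

function mergePrTool(owner: string, repo: string, pr: number): AirlockIntent {
  return {
    action: "github_merge_pr",
    params: { owner, repo, pr }, // the Airlock decides if and how this runs
  };
}
```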

Auth0 is the core of the trust model. We use Universal Login for user identity, Token Vault for delegated provider credentials, CIBA for async approvals, and a custom API audience for Airlock JWT verification. The result is an end-to-end chain of trust from user authentication to one-time API execution.
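For the custom API audience, the claim-level check is conceptually the sketch below. Real verification also covers the signature, issuer, and expiry against Auth0's JWKS (an SDK handles that); this only illustrates the audience gate, and the claim shape is a simplification:

```typescript
// Illustrative check that a decoded Airlock JWT was minted for our API
// audience and has not expired. Signature/issuer checks are assumed done
// upstream by a JWT library.
interface AirlockClaims {
  aud: string | string[];
  exp: number; // seconds since epoch
}

function isForAirlock(
  claims: AirlockClaims,
  audience: string,
  nowSec: number = Date.now() / 1000
): boolean {
  const auds = Array.isArray(claims.aud) ? claims.aud : [claims.aud];
  return auds.includes(audience) && claims.exp > nowSec;
}
```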

Challenges we ran into

The hardest part was getting the Auth0 Token Vault federated connection exchange exactly right. The subject_token, subject_token_type, client configuration, and connection name all had to line up correctly, and we needed the exchange to work with live GitHub actions rather than mocked tokens.
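The shape of that exchange request is roughly the following. The specific grant_type and subject_token_type URNs, and whether the subject token is an access or refresh token, depend on the tenant's Token Vault configuration, so treat the string constants here as placeholders to verify against the Auth0 documentation rather than as the exact values we shipped:

```typescript
// Sketch of the form body POSTed to Auth0's /oauth/token endpoint for the
// federated connection (Token Vault) exchange. URN constants are placeholders.
function buildExchangeBody(opts: {
  clientId: string;
  clientSecret: string;
  subjectToken: string; // the user's Auth0 token
  connection: string;   // must match the connection name exactly, e.g. "github"
}): URLSearchParams {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:token-exchange",
    subject_token: opts.subjectToken,
    subject_token_type: "urn:ietf:params:oauth:token-type:access_token",
    client_id: opts.clientId,
    client_secret: opts.clientSecret,
    connection: opts.connection,
  });
}
```

Getting all five fields to line up at once, against live GitHub rather than mocks, was where most of the debugging time went.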

Another major challenge was step-up approval for RED-tier actions. We wanted a real re-authentication event, not just a confirmation dialog. That meant making sure a stale cached session could never be reused for a protected merge, and only a freshly issued Auth0 token from prompt=login could unlock the action.
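The freshness gate reduces to a timestamp comparison, sketched below under the assumption that the ID token carries an auth_time claim (in OIDC this is guaranteed when max_age is requested). Forcing prompt=login on the approval redirect is what makes auth_time move forward, so a session cached from before the pause can never pass:

```typescript
// Illustrative freshness gate for RED-tier approvals: the re-authentication
// recorded in auth_time must postdate the moment the action was paused.
function isFreshApproval(authTimeSec: number, pausedAtMs: number): boolean {
  return authTimeSec * 1000 >= pausedAtMs;
}
```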

Accomplishments that we're proud of

We built a full working chain: agent -> Airlock -> Auth0 Token Vault -> GitHub, with no long-lived GitHub token stored in the application layer. We also built a live dashboard that exposes every authorization decision in real time, including token requested, token exchanged, action executed, and token cleared.

Most importantly, we proved the model on real GitHub actions. The demo does not fake success states. It actually reads issues, adds labels, pauses merges for human approval, and blocks destructive actions before any token is exchanged.

What we learned

OAuth scopes alone are not enough for AI agents. Even if a provider token is valid, you still need a runtime policy layer that understands intent, risk, and business boundaries. We learned that Auth0 Token Vault becomes much more powerful when combined with risk-tiered policy and step-up authentication: instead of giving an agent a standing credential, you issue a short-lived delegated credential only for the exact action that is allowed right now.

What's next

We want to add finer-grained per-repository allowlists, policy-as-code with OPA, anomaly detection on the audit trail, and more connectors for enterprise systems. The long-term goal is to make “authorized to act” a real property of AI systems, not just a claim in their documentation.

Bonus Blog Post

OpenClaw Airlock started with a question that kept bothering us: if an AI agent can merge code, send email, or post to Slack, where should that power actually live? The default answer in most prototypes is a token in a .env file. That works, but it creates the exact failure mode we wanted to avoid: the agent quietly holds a broad, long-lived credential, and a log entry after the fact does not make that safe.

That realization changed the whole project. Instead of letting the agent call provider APIs directly, we redesigned the system around intents. The agent declares what it wants to do, the Airlock classifies the action by risk, and only then do we ask Auth0 Token Vault for a delegated credential. In other words, we stopped treating credentials as static configuration and started treating them as something that should be issued dynamically, at the last possible moment, for one specific action.

The hardest part was making that flow real rather than theoretical. We had to get the federated connection token exchange working end to end, including the correct subject_token, subject_token_type, client settings, and connection mapping. We also discovered that high-risk approvals needed more than a button click. For RED-tier actions, we built a true step-up flow with prompt=login, so only a freshly issued Auth0 session can authorize a protected merge. A stale cached session is not enough.

The turning point came when we watched a real GitHub pull request stay blocked, trigger re-authentication through Auth0, receive a short-lived delegated token from Token Vault, merge successfully, and then disappear from memory while the full lifecycle appeared in the audit trail. That moment captured the whole lesson of the project: the safest agent credential is the one the agent never gets to keep.


Updates


Really proud of what we’ve built together as a team. This project is our take on making AI agents safer using OAuth 2.0 and delegated access instead of long-lived credentials. Huge thanks to Auth0 for the platform that made this possible. We’ve put a lot of effort into building this end-to-end, and we’re excited for the judges to check it out. Looking forward to your feedback!
