Inspiration

AI agents running in production make enforcement decisions autonomously. DefenseClaw was built to govern what those agents can run. But governance without identity is incomplete. When a security tool logs "operator blocked skill X," you want to know which operator, with which credentials, and whether a human actually confirmed it. That gap is what this project closes.

The specific trigger was looking at how MCP servers handle credentials today. They either hardcode API keys or store them locally. Neither is acceptable for enterprise deployments. Auth0 Token Vault changes that equation entirely.

What it does

Auth0-ATA adds three Auth0 capabilities to DefenseClaw:

Authenticated operators via Device Flow. Running defenseclaw auth login triggers Auth0's device authorization flow. The operator authenticates in their browser, and all subsequent CLI actions (block, allow, dismiss) are stamped with their Auth0 sub in the audit log. No more anonymous "operator ran block."
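
The shape of that login flow can be sketched as below. This is a minimal sketch of what `defenseclaw auth login` might do, not the project's actual code: the tenant domain and client ID are placeholders, and error handling is trimmed to the two RFC 8628 polling errors that matter.

```python
import json
import time
import urllib.error
import urllib.parse
import urllib.request

AUTH0_DOMAIN = "example.us.auth0.com"  # placeholder tenant (assumption)
CLIENT_ID = "YOUR_NATIVE_CLIENT_ID"    # placeholder client (assumption)

def token_poll_payload(client_id: str, device_code: str) -> dict:
    """Form body for polling POST /oauth/token (RFC 8628 device grant)."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "device_code": device_code,
        "client_id": client_id,
    }

def login() -> dict:
    """Start the device flow, print the code, poll until the operator approves."""
    body = urllib.parse.urlencode({"client_id": CLIENT_ID, "scope": "openid profile"}).encode()
    with urllib.request.urlopen(f"https://{AUTH0_DOMAIN}/oauth/device/code", body) as resp:
        dc = json.load(resp)
    print(f"Visit {dc['verification_uri_complete']} and confirm code {dc['user_code']}")
    deadline = time.time() + dc["expires_in"]
    interval = dc["interval"]
    while time.time() < deadline:
        time.sleep(interval)
        poll = urllib.parse.urlencode(token_poll_payload(CLIENT_ID, dc["device_code"])).encode()
        try:
            with urllib.request.urlopen(f"https://{AUTH0_DOMAIN}/oauth/token", poll) as resp:
                return json.load(resp)  # access_token + id_token; persist to session.json
        except urllib.error.HTTPError as err:
            code = json.load(err).get("error")
            if code == "slow_down":
                interval += 5          # server asked us to poll less often
            elif code != "authorization_pending":
                raise                  # real failure: expired, denied, ...
    raise TimeoutError("device code expired before the operator approved")
```

Nothing sensitive ever touches shell history or the environment: the only secret-bearing artifact is the token response, which the CLI persists to the session file.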

Token Vault for MCP credential provisioning. When defenseclaw mcp allow <server> approves an MCP server that needs external API access (GitHub, Slack, Google), the integration triggers Token Vault OAuth consent. Auth0 holds the credential. DefenseClaw stores only the connection name. When OpenClaw invokes the MCP server at runtime, getAccessTokenFromTokenVault() fires and injects the token. The agent never sees raw credentials.
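
The key property, connection names in DefenseClaw and credentials in Auth0, can be sketched with a hypothetical allow-time record. The table and column names below are illustrative, not the project's actual schema:

```python
import sqlite3

def record_mcp_allow(db: sqlite3.Connection, server: str,
                     connection_name: str, operator_sub: str) -> None:
    """Persist an allow decision. Only the Token Vault *connection name*
    is stored; the credential itself never touches DefenseClaw's database."""
    db.execute("""CREATE TABLE IF NOT EXISTS mcp_allowlist (
                      server TEXT PRIMARY KEY,
                      token_vault_connection TEXT NOT NULL,
                      approved_by TEXT NOT NULL)""")
    db.execute("INSERT OR REPLACE INTO mcp_allowlist VALUES (?, ?, ?)",
               (server, connection_name, operator_sub))
    db.commit()

# Allow-time (Python CLI): store the pointer, not the token.
conn = sqlite3.connect(":memory:")
record_mcp_allow(conn, "github-mcp", "github", "auth0|operator123")
row = conn.execute("SELECT token_vault_connection, approved_by "
                   "FROM mcp_allowlist WHERE server = ?", ("github-mcp",)).fetchone()
print(row)  # ('github', 'auth0|operator123')
```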

CIBA step-up for HIGH/CRITICAL enforcement actions. When a scan returns a CRITICAL finding, DefenseClaw fires a CIBA push notification to the security operator via Auth0 Guardian. The block only commits on phone approval. Tap approve, block lands in SQLite with the operator's identity attached. Tap deny, the skill installs with a logged override. No silent auto-blocks on critical findings.
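
The kickoff for that push looks roughly like the sketch below. Field names follow Auth0's `/bc-authorize` endpoint documentation, but the `login_hint` format and the `binding_message` length cap are assumptions worth verifying against your tenant:

```python
import json

def ciba_payload(client_id: str, issuer: str, operator_sub: str, finding: str) -> dict:
    """Form body for POST /bc-authorize: push a Guardian approval request to
    the operator identified by operator_sub, surfacing the finding in the push."""
    return {
        "client_id": client_id,
        "scope": "openid",
        "login_hint": json.dumps(
            {"format": "iss_sub", "iss": issuer, "sub": operator_sub}),
        # binding_message is what the operator sees on their phone; Auth0
        # restricts its length and character set, so keep it short and plain.
        "binding_message": f"BLOCK {finding}"[:64],
    }
```

The response carries an `auth_req_id`, which the CLI then polls with (see the polling discussion under Challenges).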

How we built it

DefenseClaw has a split architecture: a Python CLI for user-facing commands and a Go sidecar (defenseclaw-gateway) that monitors OpenClaw via WebSocket and enforces policies at runtime. The Auth0 integration touches both layers.

On the Python side, cmd_mcp.py and cmd_skill.py got CIBA hooks before enforcement calls. A new cmd_auth.py handles login/logout/status via device flow. The session token (stored in ~/.defenseclaw/session.json) flows into OrchestratorClient via its existing Authorization: Bearer header support, so the running gateway always knows who initiated the action.

On the Go side, a new internal/auth0/ package handles token validation and Token Vault exchange. The gateway got a new GET /v1/auth/token?connection={name} endpoint that returns short-lived Token Vault access tokens for registered MCP connections at invocation time.
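
From the calling side, consuming that endpoint is a few lines. The sketch below assumes a local gateway address and a JSON response with an `access_token` field; both are illustrative assumptions, not confirmed details of the gateway:

```python
import json
import urllib.parse
import urllib.request

def token_endpoint_url(base: str, connection: str) -> str:
    """Build the gateway's Token Vault endpoint URL for a named connection."""
    return f"{base}/v1/auth/token?{urllib.parse.urlencode({'connection': connection})}"

def fetch_vault_token(base: str, session_token: str, connection: str) -> str:
    """Ask the gateway for a short-lived provider token. The gateway performs
    the exchange against Auth0; the caller never handles a stored credential."""
    req = urllib.request.Request(
        token_endpoint_url(base, connection),
        headers={"Authorization": f"Bearer {session_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```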

Auth0 config follows the existing subsystem pattern in internal/config/config.go. Credentials live in ~/.defenseclaw/.env and get loaded by the existing loadDotEnvIntoOS() call in root.go. The Auth0 client is injected into APIServer via a SetAuth0Client() method, following the same pattern as SetOTelProvider(), so the constructor signature stays stable.

Challenges we ran into

The hardest part was the split architecture. Python CLI writes SQLite directly for enforcement decisions. The Go sidecar reads those decisions at runtime. Token Vault provisioning had to happen at allow-time (Python layer) but token exchange had to happen at invocation-time (Go layer). Getting that handoff right without coupling the two layers was the core design problem.

CIBA polling is async by nature. The Python CLI is synchronous. We had to implement a polling loop with timeout and backoff rather than blocking indefinitely. The UX distinction between "timed out" and "operator tapped deny" also needed explicit handling so the audit log records the right outcome.
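
The loop itself can be sketched independently of the Auth0 endpoints. Here `poll_once` is a hypothetical callable wrapping the token-endpoint poll (mapping Auth0's `authorization_pending` error to `"pending"` and `access_denied` to `"denied"`); the point is the three distinct terminal outcomes:

```python
import time

def poll_ciba(poll_once, interval: float = 5.0, timeout: float = 120.0,
              backoff: float = 1.5) -> str:
    """Poll until the operator responds, with timeout and backoff.
    poll_once() returns one of "pending", "approved", "denied".
    An explicit deny and a timeout are distinct outcomes so the
    audit log records which one actually happened."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = poll_once()
        if status == "approved":
            return "approved"        # commit the block, identity attached
        if status == "denied":
            return "denied"          # operator tapped deny: logged override
        time.sleep(min(interval, max(0.0, deadline - time.monotonic())))
        interval *= backoff          # back off between polls
    return "timeout"                 # no response at all: never conflated with deny
```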

Injecting the Auth0 client into the API server without breaking the constructor signature required the SetAuth0Client() method approach. A small detail, but getting it wrong would have broken every existing caller of NewAPIServer().

Accomplishments that we're proud of

The audit log now has real identity. Every block, allow, and dismiss shows which operator approved it and whether a human confirmed it via CIBA. That makes DefenseClaw's audit trail non-repudiable, which is an actual compliance requirement for enterprise security tooling, not just a demo feature.

The Token Vault integration is architecturally clean. DefenseClaw stores connection names, not tokens. The credential lives in Auth0 and is fetched just-in-time. If a Token Vault connection is revoked, the MCP server loses access on the next invocation with zero DefenseClaw config changes needed.

What we learned

Device flow is underrated for CLI tools. The UX is smooth: print a short code, the user visits a URL, done. For security tooling where you do not want credentials embedded in shell history or environment variables, it is the right pattern.

Token Vault's design separates "who approved this connection" (DefenseClaw's job) from "what credential is used" (Auth0's job). That separation is the right abstraction. Scope changes and credential rotation do not require any DefenseClaw redeployment.

CIBA is genuinely novel for enforcement tooling. Most security tools auto-block and notify after the fact. Requiring explicit human approval before committing a block on a CRITICAL finding is a different model, closer to how a bank confirms an unusual transaction with the cardholder before acting on it.

What's next for Auth0-ATA

The natural next step is scope escalation detection at runtime: if a skill tries to use a Token Vault connection it did not declare in its manifest at install time, fire a CIBA request and block the exchange. That closes the gap between declared intent and actual runtime behavior.

Longer term: just-in-time credentials (suspend Token Vault connections when a skill is idle, reprovision at invocation) and an Auth-IBOM, a compliance-grade inventory of every OAuth connection the agent holds, with scopes, last-used timestamps, and one-click revocation for end users.

Blog Post

I went into this thinking Token Vault was essentially a secrets manager with an Auth0 logo on it. Store a token, retrieve a token, nothing architecturally interesting. I was wrong, and figuring out why took most of the first day.

The actual model is: your application never retrieves the credential. It exchanges. You send Auth0 a valid session token and Auth0 responds with a fresh, short-lived provider token scoped to the exact connection you declared. The underlying mechanism builds on RFC 8693 (OAuth 2.0 Token Exchange), using Auth0's extension grant type urn:auth0:params:oauth:grant-type:token-exchange:federated-connection-access-token. That distinction, exchange not retrieval, changes the trust model entirely. The MCP server cannot cache the credential because it never holds it between invocations. If the connection is revoked in Auth0, the next exchange just fails. No config change, no restart, no coordination needed.
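
In request terms, the exchange is just one more POST to /oauth/token. The field names below follow Auth0's Token Vault documentation as I understand it; treat them as a sketch to verify against current docs rather than a definitive contract:

```python
def vault_exchange_payload(client_id: str, client_secret: str,
                           refresh_token: str, connection: str) -> dict:
    """Form body for POST /oauth/token performing the federated-connection
    token exchange. The response carries a short-lived provider access
    token scoped to `connection`; nothing long-lived reaches the caller."""
    return {
        "grant_type": ("urn:auth0:params:oauth:grant-type:"
                       "token-exchange:federated-connection-access-token"),
        "client_id": client_id,
        "client_secret": client_secret,
        "subject_token": refresh_token,   # the operator's session, not a provider secret
        "subject_token_type": "urn:ietf:params:oauth:token-type:refresh_token",
        "requested_token_type": ("http://auth0.com/oauth/token-type/"
                                 "federated-connection-access-token"),
        "connection": connection,         # e.g. "github": the only thing DefenseClaw stores
    }
```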

The hardest part was not the Token Vault API itself. It was wiring it across DefenseClaw's split architecture. The allow decision happens in the Python CLI at command time. The token exchange has to happen in the Go sidecar at invocation time, potentially hours later. Those two layers cannot share a process. Getting the Custom API Client configured correctly so the Go layer could perform the access token exchange, without the Python layer staying alive, required understanding how Auth0 ties the client credentials to the backend API's audience. The docs cover the happy path cleanly; the split-process case took more digging.

The moment it actually worked, running defenseclaw mcp allow github-mcp, then watching the Go sidecar exchange a token and inject it into an OpenClaw tool call with zero raw credentials ever written to disk, was the clearest demo of what "secure by default" should mean for agentic infrastructure.

Built With

  • auth0
  • auth0-guardian
  • ciba
  • device-authorization-flow
  • openclaw
  • python
  • sqlite
  • token-vault