About the Project

Inspiration

As AI agents transition from passive assistants to autonomous actors, the traditional security perimeter is failing. Standard OAuth flows were designed for human interaction, not for agents capable of rapid, high-stakes API execution. We built this gateway to transform agent actions into privileged, human-governed sessions, ensuring no sensitive task begins without explicit, informed authorization.

What it does

ConsentChain: Auth0 Agent Governance is a modular security interceptor. It sits between an AI Agent and third-party services like Gmail or GitHub. The gateway evaluates agent intent against a safety schema and only retrieves the necessary credentials from the Auth0 Token Vault after the user approves the specific action. This creates a hard barrier between the agent’s reasoning and the user’s sensitive data.

Technical Execution

The project demonstrates a production-aware implementation of the Token Vault pattern, moving away from monolithic structures toward a decoupled governance model:

  • Governance Engine: A schema-driven inventory (component-inventory.json) defines risk levels and required permissions for every agent tool.
  • Auth0 Integration: We utilized the @auth0/ai-langchain SDK to handle the secure exchange and brokering of third-party tokens.
  • 7-Stage Action Gateway: Every request follows a deterministic pipeline: Identification, Risk Classification, Schema Validation, Human Prompting, Auth0 Token Retrieval, Execution, and Audit Logging.
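The pipeline above can be sketched as a deterministic chain of stage handlers. This is a minimal illustration, not the project's actual code: the stage names follow the list above, but the `AgentRequest` shape, handler signatures, and risk lookup are hypothetical stand-ins.

```typescript
// Minimal sketch of the 7-stage action gateway.
// All type and function names here are illustrative, not the project's real API.

type AgentRequest = {
  tool: string;
  args: Record<string, unknown>;
  riskLevel?: "low" | "medium" | "high";
  approved?: boolean;
};

type Stage = (req: AgentRequest, audit: string[]) => AgentRequest;

const stages: [string, Stage][] = [
  ["identification", (req, audit) => {
    audit.push(`tool=${req.tool}`);
    return req;
  }],
  ["risk-classification", (req, audit) => {
    // A real gateway would look this up in component-inventory.json.
    const risk = req.tool === "gmail.send" ? "high" : "low";
    audit.push(`risk=${risk}`);
    return { ...req, riskLevel: risk as AgentRequest["riskLevel"] };
  }],
  ["schema-validation", (req, audit) => {
    if (typeof req.args !== "object") throw new Error("invalid args");
    audit.push("schema=ok");
    return req;
  }],
  ["human-prompt", (req, audit) => {
    // High-risk actions require explicit consent before proceeding.
    if (req.riskLevel === "high" && req.approved !== true) {
      throw new Error("consent denied");
    }
    audit.push("consent=granted");
    return req;
  }],
  ["token-retrieval", (req, audit) => {
    audit.push("token=fetched-via-vault");
    return req;
  }],
  ["execution", (req, audit) => {
    audit.push("executed");
    return req;
  }],
  ["audit-log", (req, audit) => {
    audit.push("logged");
    return req;
  }],
];

function runGateway(req: AgentRequest): string[] {
  const audit: string[] = [];
  stages.reduce((r, [, stage]) => stage(r, audit), req);
  return audit;
}

const trail = runGateway({ tool: "gmail.send", args: { to: "a@b.c" }, approved: true });
console.log(trail);
```

The point of the reduce is that every request traverses all seven stages in order, and a failure at any stage (for example, denied consent at Stage 4) aborts the chain before any token is fetched.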

Token Vault Implementation

In Stage 5 of our gateway, we call the Auth0 Token Vault to fetch third-party access tokens. This is a critical security win: by using Auth0 as the broker, we ensure that plaintext credentials never enter the agent’s prompt context. This effectively neutralizes prompt injection attacks aimed at credential theft, as the agent never "possesses" the keys it uses; it only receives the result of the authorized execution.
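The core of that guarantee is scoping: the token lives only inside the broker function, and only the execution result is appended to the agent's context. The sketch below illustrates the idea with a stubbed vault call; `fetchVaultToken` and the `AgentContext` shape are hypothetical, not the real @auth0/ai-langchain surface.

```typescript
// Sketch of the Stage 5 idea: the broker fetches a third-party token and uses
// it server-side; the agent only ever sees the execution result.

type AgentContext = { promptHistory: string[] };

function fetchVaultToken(connection: string): string {
  // Stub: the real gateway would call the Auth0 Token Vault here.
  return `tok_${connection}_${Math.random().toString(36).slice(2)}`;
}

function executeBrokered(
  ctx: AgentContext,
  connection: string,
  action: (token: string) => string
): string {
  const token = fetchVaultToken(connection); // exists only in this scope
  const result = action(token);
  // Only the result, never the token, reaches the agent's prompt context.
  ctx.promptHistory.push(`result: ${result}`);
  return result;
}

const ctx: AgentContext = { promptHistory: [] };
const outcome = executeBrokered(ctx, "google-oauth2", (token) => {
  // Pretend this is the third-party API call authorized by `token`.
  return "email sent";
});
console.log(ctx.promptHistory); // the token never appears here
```

Because the token is a local variable of `executeBrokered`, even a fully compromised prompt cannot exfiltrate it: there is simply nothing credential-shaped in the context to leak.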

Security Model and Empirical Safety

We refer to our approach as Empirical Safety. By combining the Auth0 Token Vault with our 7-stage gateway, we enforce strict permission boundaries. The agent operates under a Just-In-Time (JIT) privilege model: access is scoped to the task at hand and granted only for the duration of a single, consented action. This ensures the agent acts only within the boundaries defined in our inventory and cannot escalate privileges autonomously.
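A JIT grant reduces to two checks: the requested scope must be in the grant, and the grant must not have expired. The sketch below is illustrative only; `issueGrant` and `canPerform` are hypothetical names, not the project's API.

```typescript
// Sketch of a Just-In-Time grant: a credential is scoped to one task and
// expires when the task's window closes.

type Grant = { scopes: string[]; expiresAt: number };

function issueGrant(scopes: string[], ttlMs: number): Grant {
  return { scopes, expiresAt: Date.now() + ttlMs };
}

function canPerform(grant: Grant, requiredScope: string): boolean {
  return Date.now() < grant.expiresAt && grant.scopes.includes(requiredScope);
}

const grant = issueGrant(["gmail.send"], 5_000);
console.log(canPerform(grant, "gmail.send"));   // scoped action is allowed
console.log(canPerform(grant, "gmail.delete")); // escalation attempt is denied
```

An expired grant fails the same check, so a long-running agent cannot reuse yesterday's consent for today's action.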

User Control

The Gateway UI provides granular control over the agentic experience:

  • Scope Transparency: Users see exactly what permissions are being requested before approval.
  • Revocable Governance: Consent is not a blanket "allow." It is a per-action verification that can be audited or revoked instantly, providing a clear trail of how and why consent was granted.
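Per-action, revocable consent with an audit trail can be modeled as an append-only ledger where revocation flips a flag rather than deleting the record. This is a hypothetical sketch, not the project's actual consent store.

```typescript
// Sketch of per-action, revocable consent with an audit trail.

type ConsentRecord = {
  action: string;
  scopes: string[];
  grantedAt: number;
  revoked: boolean;
};

class ConsentLedger {
  private records: ConsentRecord[] = [];

  grant(action: string, scopes: string[]): ConsentRecord {
    const rec = { action, scopes, grantedAt: Date.now(), revoked: false };
    this.records.push(rec);
    return rec;
  }

  revoke(action: string): void {
    for (const rec of this.records) {
      if (rec.action === action) rec.revoked = true;
    }
  }

  // Consent is per-action: each new action needs its own unrevoked record.
  isAllowed(action: string): boolean {
    return this.records.some((r) => r.action === action && !r.revoked);
  }

  auditTrail(): ConsentRecord[] {
    return [...this.records]; // revoked entries stay visible for auditing
  }
}

const ledger = new ConsentLedger();
ledger.grant("github.createIssue", ["repo:write"]);
console.log(ledger.isAllowed("github.createIssue")); // true
ledger.revoke("github.createIssue");
console.log(ledger.isAllowed("github.createIssue")); // false
console.log(ledger.auditTrail().length);             // 1: revocation preserves history
```

Keeping revoked records in the trail is what makes consent auditable: the ledger answers not only "is this allowed now?" but "how and when was this ever allowed?"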

Challenges and Learning

The primary challenge was extracting this governance layer from the broader ConsentChain project into a standalone, modular tool. We learned that the most significant hurdle in Agentic AI is not model intelligence, but the trust architecture. Integrating with Auth0 proved that secure identity brokering is the essential bridge for moving agents from experimental environments into real-world production.

Testing Instructions

To verify the integration, please follow these steps to connect the application to the Auth0 for AI Agents environment:

  1. Prerequisites: Ensure you have an Auth0 account with AI Agents enabled and a Token Vault configured.
  2. Environment: Clone the repository and configure .env.local with your AUTH0_DOMAIN, AUTH0_CLIENT_ID, and AUTH0_TOKEN_VAULT_ID.
  3. Launch: Run pnpm install, then pnpm dev, and navigate to http://localhost:3000.
  4. The Flow: Click "Simulate Agent Request" and watch the gateway pause at Stage 4 (Human Prompting). Click "Approve" to trigger Stage 5 (Auth0 Token Retrieval).
  5. Validation: Verify that the token was fetched securely via the Vault and was never exposed to the agent's internal state or prompt history.

Bonus Blog Post: Solving the "Trust Gap" in Agentic Workflows

When we first began developing ConsentChain, we faced a recurring technical hurdle: how do you give an agent enough power to be useful without giving it the "keys to the kingdom"? Most agent frameworks solve this by stuffing API keys into the environment variables or, worse, the prompt context itself. This creates a massive surface area for prompt injection.

Our achievement during this hackathon was the successful extraction of a modular Action Gateway that leverages the Auth0 Token Vault to solve this "Trust Gap." By moving to a brokered identity model, we realized that the agent doesn't actually need to own the identity; it just needs the authority to act.

Integrating the @auth0/ai-langchain SDK allowed us to refactor our 7-stage governance flow. Previously, we struggled with manual cryptographic handshakes to keep tokens secure. With Token Vault, we replaced complex custom logic with a clean, production-ready exchange. Now, when our gateway reaches "Stage 5," it makes a secure call to Auth0. The token is fetched, used for a specific execution, and never persists within the agent's memory.

This architecture fundamentally changes the security conversation for AI developers. It moves us from "hope-based safety" (hoping the agent doesn't leak keys) to Empirical Safety, where the infrastructure itself makes credential theft impossible. This journey from a monolithic experiment to a modular, Auth0-powered security product has been our most significant technical milestone to date.

