Project Fulcrum: Zero-Trust AI Security Agent
Inspiration
The rise of AI agents has created a security paradox. These agents promise to automate complex workflows across multiple services—GitHub, Jira, Slack, and more—but they typically require storing permanent credentials (API keys, tokens) in environment variables. A single prompt injection attack could give an attacker access to everything: delete all repositories, modify critical infrastructure, exfiltrate sensitive data.
We asked ourselves: What if AI agents never held any permanent credentials at all? What if they borrowed identity from a secure token vault, requested only the scoped permissions they needed, and required explicit human approval for dangerous actions?
Auth0 Token Vault made this vision possible. Combined with Fine-Grained Authorization (FGA) and Client Initiated Backchannel Authentication (CIBA), we could build an agent that operates within explicit permission boundaries—a true zero-trust AI system.
What It Does
Fulcrum is a security auditing agent that orchestrates workflows across GitHub, Jira, and Slack without holding any permanent credentials. It demonstrates four core capabilities:
Multi-Service Orchestration: Users can ask the agent to perform complex tasks that span multiple services. For example: "Scan my GitHub repos for secrets, create a security issue in Jira, and notify my team on Slack." The agent coordinates all three services seamlessly.
Zero-Trust Credential Management: Instead of storing tokens in .env files, Fulcrum uses Auth0 Token Vault to exchange refresh tokens for short-lived, scoped access tokens at runtime. If the agent process is compromised, an attacker gains nothing—only useless, expired tokens.
Risk-Based Authorization: Every tool is assigned a risk level (1-5). Levels 1-4 are automatically authorized by FGA. Level 5 actions (destructive operations like deleting branches or merging PRs) require explicit human approval via Auth0 Guardian push notification.
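The risk gate can be sketched as a small routing function. This is an illustrative sketch, not Fulcrum's actual API; the function and type names are assumptions.

```typescript
// Risk levels 1-4 are settled by the FGA check alone; level 5 additionally
// requires a human CIBA approval. Names here are illustrative.
type AuthzPath = "fga-only" | "fga-plus-ciba";

function authorizationPath(riskLevel: number): AuthzPath {
  if (!Number.isInteger(riskLevel) || riskLevel < 1 || riskLevel > 5) {
    throw new Error(`invalid risk level: ${riskLevel}`);
  }
  // Only the top tier (destructive operations) pays the latency cost of
  // waiting on a push-notification approval.
  return riskLevel === 5 ? "fga-plus-ciba" : "fga-only";
}
```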
Complete Audit Trail: Every action—whether allowed, denied, or approved—is logged with FGA results, CIBA status, execution time, and outcome. This creates an immutable record of agent behavior.
How We Built It
The architecture combines three Auth0 services in a novel way:
Token Vault Layer: We implemented a secure token exchange pipeline. When the agent needs to act on a service, it calls getTokenForConnection(), which uses the user's refresh token to obtain a fresh, scoped access token from the target service's OAuth provider. This happens at runtime, never storing tokens.
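The exchange pattern can be sketched generically. `fetchToken` below is a stand-in for the Token Vault exchange, not the real SDK call; the key property is that the provider token exists only for the duration of one action.

```typescript
// Conceptual sketch: the agent never holds a provider token at rest.
// It asks the vault for a short-lived, scoped token immediately before
// each call. fetchToken is a placeholder for the Token Vault exchange.
interface ScopedToken {
  value: string;
  scopes: string[];
  expiresAt: number; // epoch millis
}

async function withConnectionToken<T>(
  fetchToken: (connection: string, scopes: string[]) => Promise<ScopedToken>,
  connection: string,
  scopes: string[],
  action: (token: ScopedToken) => Promise<T>
): Promise<T> {
  const token = await fetchToken(connection, scopes); // exchanged at runtime
  return action(token); // used once, never written to disk or env
}
```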
FGA Permission Layer: Before every tool execution, we check Auth0 FGA with a tuple like (user:alice, can_execute, action:github_delete_branch). If the relationship doesn't exist, the action is blocked before it even reaches the tool.
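The pre-execution gate looks roughly like this, with `check` standing in for the FGA client's check call (the wrapper and its names are assumptions, not Fulcrum's actual code):

```typescript
// Sketch of the FGA gate: build the (user, relation, object) tuple and
// refuse to run the tool unless the relationship exists.
interface FgaTuple {
  user: string;
  relation: string;
  object: string;
}

async function assertCanExecute(
  check: (t: FgaTuple) => Promise<boolean>, // stand-in for the FGA client
  userId: string,
  tool: string
): Promise<void> {
  const allowed = await check({
    user: `user:${userId}`,
    relation: "can_execute",
    object: `action:${tool}`,
  });
  // Blocked before the tool is ever invoked.
  if (!allowed) throw new Error(`FGA denied ${tool} for ${userId}`);
}
```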
CIBA Human Approval Layer: For level 5 actions, we initiate an Auth0 CIBA request. This sends a push notification to the user's phone, asking "Fulcrum wants to delete branch: main. Approve?" The user responds with biometric authentication. The agent waits for approval before proceeding.
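The wait-for-approval step is essentially a backchannel-initiate-then-poll loop. In this hedged sketch, `initiate` and `poll` stand in for Auth0's backchannel authorize and token-polling endpoints; the binding message is the human-readable prompt shown on the user's phone.

```typescript
// Sketch of the CIBA approval wait: start a backchannel request, then
// poll until the user approves, denies, or the request times out.
type CibaStatus = "pending" | "approved" | "denied";

async function awaitHumanApproval(
  initiate: (bindingMessage: string) => Promise<string>, // returns auth_req_id
  poll: (authReqId: string) => Promise<CibaStatus>,
  bindingMessage: string,
  maxAttempts = 30,
  intervalMs = 1000
): Promise<boolean> {
  const authReqId = await initiate(bindingMessage);
  for (let i = 0; i < maxAttempts; i++) {
    const status = await poll(authReqId);
    if (status === "approved") return true;
    if (status === "denied") return false;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false; // timed out: in a zero-trust model, no answer means "no"
}
```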
Agent Orchestration: We use Google's Gemini 2.0 Pro with LangGraph to build a state machine that orchestrates the entire flow: Planning (which tool to call) → Permission Check (FGA) → Approval Wait (CIBA if needed) → Execution → Logging.
Development Stack: Next.js frontend, Express.js backend, PostgreSQL for state persistence, LangGraph for agent workflow, and Auth0 services for security.
Challenges We Ran Into
CIBA Implementation Complexity: Auth0 CIBA was initially difficult to debug. The login_hint format must be exact JSON with specific fields. Guardian enrollment required careful Auth0 configuration. We implemented dev mode auto-approve to enable rapid testing.
Token Vault Scope Matching: Ensuring that Token Vault scope requests matched what Auth0 expected required careful coordination between the auth0 service, the tool definitions, and the FGA schema.
Jira Multi-Site Support: Jira cloud accounts can have multiple sites. We had to implement site discovery and selection to route requests correctly.
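Site discovery relies on Atlassian's `accessible-resources` endpoint, which lists the cloud sites a token can reach; requests are then routed through `api.atlassian.com` by cloud id. A minimal sketch of the selection step (the helper names are illustrative):

```typescript
// A Jira OAuth token can span multiple cloud sites; pick one and build
// the per-site REST base URL from its cloud id.
interface JiraSite {
  id: string;   // cloud id from /oauth/token/accessible-resources
  url: string;  // e.g. https://acme.atlassian.net
  name: string;
}

function selectSite(sites: JiraSite[], preferredName?: string): JiraSite {
  if (sites.length === 0) throw new Error("token has no accessible Jira sites");
  const match = preferredName && sites.find((s) => s.name === preferredName);
  return match || sites[0]; // default to the first site when none is named
}

function apiBase(site: JiraSite): string {
  // Cloud REST calls are routed through api.atlassian.com by cloud id
  return `https://api.atlassian.com/ex/jira/${site.id}/rest/api/3`;
}
```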
State Persistence Across Approvals: Getting the LangGraph state machine to pause at CIBA, wait for user approval, then resume from exactly the right state required careful checkpoint management.
Error Recovery: When Token Vault was unavailable or FGA was offline, the agent needed to fall back gracefully without exposing sensitive data.
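The fallback we settled on is fail-closed: if the permission check itself errors, the agent denies rather than guesses. A minimal sketch of that behavior (names are illustrative):

```typescript
// Fail-closed degradation: an FGA check that cannot be completed is
// treated as a denial, and the reason is logged for the audit trail.
async function checkOrDeny(
  check: () => Promise<boolean>,
  log: (event: string) => void
): Promise<boolean> {
  try {
    return await check();
  } catch (err) {
    log(`fga_unreachable: denying by default (${(err as Error).message})`);
    return false; // zero-trust: an unanswered check is a "no"
  }
}
```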
Accomplishments We're Proud Of
We successfully demonstrated the core hackathon requirement: a production-aware implementation of Auth0 Token Vault that never exposes raw credentials. The agent works end-to-end—it can perform real multi-service operations.
We implemented all three Auth0 services (Token Vault, FGA, CIBA) in a cohesive security model. Most projects use one or two; we integrated all three to create a genuinely zero-trust system.
The audit logging is comprehensive and immutable. Every decision—FGA allow/deny, CIBA approve/deny, tool success/failure—is recorded with context. This creates transparency and enables security analysis.
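One way to picture an audit entry is as an append-only record carrying the FGA result, CIBA status, timing, and outcome together. The field names below are assumptions for illustration, not Fulcrum's actual schema:

```typescript
// Illustrative shape of one audit entry; every decision point is captured
// in a single record that is appended and never mutated.
interface AuditEntry {
  tool: string;
  user: string;
  fgaAllowed: boolean;
  cibaStatus: "not_required" | "approved" | "denied";
  durationMs: number;
  outcome: "success" | "blocked" | "error";
  at: string; // ISO timestamp
}

function recordAudit(sink: AuditEntry[], entry: Omit<AuditEntry, "at">): AuditEntry {
  const full = { ...entry, at: new Date().toISOString() };
  sink.push(full); // append-only: entries are never edited after write
  return full;
}
```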
The architecture is production-aware. We handle credential rotation, request retries, circuit breakers, connection pooling, and graceful degradation when dependencies fail.
What We Learned
The most important insight: Zero-trust architecture for AI agents is not just possible—it's practical. By using Auth0's credential and authorization services, we can eliminate an entire class of agent-compromise vulnerabilities: a breached agent process no longer yields any long-lived credentials to steal.
We learned that agent behavior transparency matters. Users and security teams need to see exactly what the agent tried to do, what permissions it checked, what approvals it requested, and what the outcome was. The audit trail is as important as the agent itself.
We discovered that risk-stratification with human approval is key. Not all actions need the same level of scrutiny. Low-risk reads can be instant; high-risk destructive actions benefit from human confirmation.
We learned the importance of graceful degradation. The system works with Cloud SQL, but gracefully falls back to in-memory storage for local development. The agent works with CIBA Guardian, but has dev mode auto-approve for testing.
What's Next for Project Fulcrum
Expanded Service Coverage: Add more integrations (AWS, Azure, GCP APIs, custom webhooks) to show the pattern scales.
Advanced Agent Reasoning: Use Claude's extended thinking to make the agent reason about complex multi-service scenarios before executing.
Performance Optimization: Cache FGA results more aggressively; implement batch operations for bulk actions.
Threat Modeling: Build honeypot operations to detect and log adversarial prompts that attempt to manipulate the agent.
Community Patterns: Document the zero-trust agent pattern as a reusable library so other developers can build secure agents with Auth0.
Demonstration of Real-World Value: Partner with security teams to show how this reduces incident response time—agents can audit at machine speed while respecting human authority for risky decisions.
The future of AI agents is not unlimited autonomy. It's bounded agency within explicit permission boundaries. Fulcrum shows what that looks like.
Bonus Blog Post
Building Fulcrum started with a simple but unsettling realization. After recent supply-chain incidents like the npm axios-related compromises, I kept coming back to one question: why are we still giving AI agents permanent, high-privilege tokens? Every tutorial I saw followed the same pattern—store secrets in .env, grant full access, and trust the system won’t break. That didn’t feel like engineering; it felt like gambling.
Discovering Auth0 Token Vault changed my entire approach. Instead of letting the agent own credentials, I redesigned Fulcrum so it borrows them. Tokens are requested just-in-time, scoped to the exact permission, and expire within minutes. The first time I saw a read-only GitHub token get issued, used, and then become useless shortly after, it genuinely felt like a shift toward how secure AI systems should be built.
The journey wasn’t straightforward. Implementing the token exchange flow required multiple iterations to correctly handle refresh tokens and scoped access. Integrating Auth0 FGA added another layer—every action had to be explicitly authorized before execution.
But the most rewarding challenge was setting up CIBA. Getting Auth0 Guardian approvals working smoothly, especially debugging login_hint, took time. Yet when I received a real-time push notification asking me to approve a destructive action with biometric authentication, the vision clicked.
Fulcrum isn’t just a project—it’s a statement. AI agents shouldn’t operate on blind trust. With Auth0 Token Vault, FGA, and CIBA, I explored a model where agents act within strict, verifiable boundaries.
Built With
- Auth0 CIBA
- Auth0 FGA
- Auth0 Guardian
- Auth0 Token Vault
- OAuth 2.0 authentication
- Circuit breaker
- Drizzle
- Express.js
- GitHub API
- Google Cloud Run
- Google Cloud SQL
- Google Vertex AI (Gemini 2.0 Pro)
- Jira API
- LangChain
- LangGraph
- Lucide React
- Next.js
- Node.js
- Octokit
- PostgreSQL
- React
- Slack API
- Tailwind CSS
- TypeScript