Inspiration
Our inspiration came from a simple but overlooked gap:
The AI industry is building the action layer, but nobody is building the proof layer.
Today, AI agents can:
- Send legal notices
- Merge production code
- Execute financial workflows
But there is no verifiable proof that:
- the user actually authorized the action
- the exact payload was executed
- the action was performed within defined boundaries
Existing systems rely on:
- Screenshots (forgeable)
- Logs (mutable)
From a compliance perspective, this means:
$$ \text{AI Action} \neq \text{Provable Consent} $$
This creates a massive trust gap for enterprises adopting autonomous AI.
We built WitnessKey to solve this.
What it does
WitnessKey is a verifiable consent and compliance layer for AI-driven operations.
It transforms every high-stakes AI action into a cryptographically verifiable, audit-ready event.
🔁 Core Flow
- AI agent attempts a high-risk action
- WitnessKey intercepts the workflow
- Step-up authentication is triggered via Auth0
- A scoped, one-time token is issued from Auth0 Token Vault
- Action executes using that token
- A Witness Certificate is generated
🔐 What makes it powerful
Each action produces a tamper-evident record:
$$ \text{Witness Record} = H(\text{payload}) + \text{token metadata} + \text{consent event} $$
Where:
- $H(\text{payload})$ = cryptographic hash of action payload
- Token metadata = scope, expiry, token ID
- Consent event = step-up authentication proof
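The witness record above can be sketched in a few lines. This is a minimal illustration, not the production implementation: it assumes SHA-256 over canonically serialized JSON, and the field names (`payload_hash`, `token`, `consent`) are placeholders.

```python
import hashlib
import json

def build_witness_record(payload: dict, token_meta: dict, consent_event: dict) -> dict:
    """Bind the action payload, token metadata, and consent proof
    into a single tamper-evident record."""
    # Canonical JSON (sorted keys) so the hash is reproducible at verify time.
    payload_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {
        "payload_hash": payload_hash,
        "token": token_meta,       # scope, expiry, token ID
        "consent": consent_event,  # step-up authentication proof
    }
```

Because the hash covers the exact payload, any later change to the action's content breaks verification.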
🌐 Real Integrations
WitnessKey works across real systems:
- Gmail → Send legal notices
- GitHub → Merge pull requests
- Slack → Post internal announcements
This proves that WitnessKey is not a demo — it is a cross-platform control layer.
🧾 Witness Certificate
Every action generates a verifiable certificate (PDF) containing:
- Action details
- Timestamp
- Payload hash
- Auth0 Token Vault fingerprint
- User consent proof
- QR code for verification
🔍 Public Verification
Anyone can verify a certificate via `/verify?hash=...`, which checks:
$$ \text{Certificate Validity} \iff \text{Stored Hash} = \text{Recomputed Hash} $$
Bonus Blog Post
Building WitnessKey forced us to confront a problem we hadn’t initially planned for: how do you prove that an AI agent acted with user consent, not just assume it did?
At first, we tried the obvious approach — logging actions and storing execution metadata. But very quickly, we realized logs are fundamentally weak. They can be modified, replayed, or simply lack the cryptographic linkage needed for real-world trust. That’s when we shifted from “tracking actions” to proving authorization.
This is where Auth0’s Token Vault became central to our design.
The biggest challenge wasn’t integrating authentication — it was turning tokens into verifiable execution boundaries. We needed tokens that were:
- scoped to a single action
- short-lived
- never exposed to the AI agent directly
Using Token Vault, we designed a flow where OAuth tokens (for Gmail, GitHub) are securely stored and only retrieved at the moment of execution. The agent never sees raw credentials — instead, it operates through a controlled backend that fetches tokens on-demand with strict scopes.
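The broker pattern described here can be sketched as follows. This is a simplified stand-in, not the Auth0 Token Vault API: `fetch_from_vault` is a hypothetical callback representing the real vault retrieval, and the TTL/consumption logic is illustrative.

```python
import time
from dataclasses import dataclass

@dataclass
class ScopedToken:
    value: str        # raw OAuth token; stays inside the broker
    scope: str        # e.g. a single Gmail or GitHub permission
    expires_at: float # strict TTL
    consumed: bool = False

class TokenBroker:
    """Backend-side broker: the agent requests an action, never a token."""
    def __init__(self, fetch_from_vault):
        self._fetch = fetch_from_vault  # stand-in for Token Vault retrieval

    def execute(self, action, scope: str, ttl: float = 60.0):
        # Fetch a scoped token only at the moment of execution.
        token = ScopedToken(self._fetch(scope), scope, time.time() + ttl)
        if time.time() >= token.expires_at:
            raise PermissionError("token expired before use")
        try:
            # The raw token never leaves this method; the agent only
            # receives the action's result.
            return action(token.value)
        finally:
            token.consumed = True  # one-time use, observable in the UI
```

The key design choice: the agent calls `execute(...)` with an action, so credentials are confined to the backend boundary.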
One breakthrough moment came when we visualized token usage in real time — showing a token moving from “unused” to “consumed” exactly when an action executes. That made the system not just secure, but observable and trustworthy.
Another challenge was tying identity to execution. Step-up authentication via Auth0 allowed us to create a precise “moment of consent,” which we then bind to the token and embed into the Witness Certificate.
In the end, Token Vault wasn’t just a storage layer — it became the enforcement layer that makes AI actions accountable.
We also published a detailed blog post on Medium: https://medium.com/@priyanshuagrawal801/the-invisible-crisis-in-agentic-ai-why-we-need-a-proof-layer-and-how-we-built-it-with-auth0-2d4b4e8ceacf
How we built it
🧱 Architecture
- Frontend: Next.js (dashboard + agent console)
- Backend: FastAPI (execution + witness engine)
- Auth Layer: Auth0 (with Token Vault)
- Integrations: Gmail, GitHub, Slack APIs
- PDF Engine: Dynamic certificate generation
🔐 Auth0 Token Vault (Core Innovation)
We used Auth0 Token Vault as an active execution control layer, not just storage:
- Store OAuth tokens (Gmail, GitHub) securely
- Retrieve tokens per action with scoped permissions
- Ensure:
  - no raw token exposure
  - strict TTL (time-limited access)
$$ \text{Token}_{action} = \text{Scoped} + \text{Ephemeral} + \text{User-approved} $$
⚡ Real-Time Execution Timeline
We built a live execution system:
- Step-up authentication
- Token issuance
- Action execution
- Witness record creation
All streamed in real-time via event updates.
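The four lifecycle stages above can be modeled as an ordered event stream. A minimal sketch, assuming an in-process queue; the stage names are illustrative, and in the real system these events would be pushed to the dashboard (e.g. over SSE or WebSockets):

```python
import json
import queue

class ExecutionTimeline:
    """Emit ordered lifecycle events for one high-risk action."""
    STAGES = ["step_up_auth", "token_issued", "action_executed", "witness_recorded"]

    def __init__(self):
        self.events = queue.Queue()
        self._stage = 0

    def emit(self, stage: str, **details):
        # Enforce the expected ordering: consent before token, token before action.
        assert stage == self.STAGES[self._stage], f"out-of-order event: {stage}"
        self._stage += 1
        self.events.put(json.dumps({"stage": stage, **details}))
```

Enforcing the ordering in the emitter itself means the timeline shown to the user can never display an action that ran before consent was captured.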
Challenges we ran into
- Integrating Auth0 Token Vault in a visible, verifiable way
- Ensuring tokens are:
  - scoped correctly
  - never exposed to the agent layer
- Designing a certificate that feels legally credible, not just UI-generated
- Creating a real-time execution timeline without overengineering
- Making the demo feel real, fast, and trustworthy
Accomplishments that we're proud of
- ✅ Built a real, working system with live integrations (Gmail, GitHub)
- ✅ Made Auth0 Token Vault usage visible and undeniable in the UI
- ✅ Created a verifiable certificate system with public validation
- ✅ Designed a product that feels like a compliance SaaS, not a demo
Most importantly:
We didn’t just automate actions — we made them provable.
What we learned
- AI systems need accountability layers, not just intelligence
- Enterprises care more about:
  - auditability
  - compliance
  - liability
Trust in AI requires:
$$ \text{Trust} = \text{Control} + \text{Verification} $$
- Simply logging actions is not enough; you need a cryptographic link between intent and execution
What's next for WitnessKey
We are turning WitnessKey into a full compliance layer for autonomous operations.
🚀 Next Steps
- Expand integrations:
  - DocuSign (contracts)
  - Payment systems (financial actions)
- Build:
  - Organization-wide audit dashboards
  - Compliance reports (SOC2-style for AI actions)
- Introduce:
  - Policy engine (who can authorize what)
  - Risk-based adaptive authentication
🌍 Long-Term Vision
WitnessKey becomes the standard trust layer for AI agents
Where every action satisfies:
$$ \text{AI Execution} \rightarrow \text{Verified} \rightarrow \text{Auditable} \rightarrow \text{Compliant} $$
WitnessKey ensures that as AI agents gain autonomy,
organizations don’t lose control.
Built With
- auth0
- crypto
- fastapi
- google-auth
- nextjs
- postgresql
- python