Inspiration

AI agents are becoming increasingly powerful, but they often start with excessive permissions, which can lead to unintended actions and security risks. We were inspired by the idea that AI systems should behave more like humans in secure environments — starting with limited access and earning trust over time.

We also wanted to explore how identity and access management, through tools like Auth0, could be applied to AI agents. This led us to design a system where agents must request, justify, and earn permissions before taking meaningful actions.

What it does

TrustLayer is a system where AI agents operate under a trust-based permission model.

The agent:

  • starts with zero permissions
  • requests minimal access (like repo:read)
  • analyzes real GitHub code using AI
  • earns trust based on successful actions
  • unlocks higher permissions (like repo:write)
  • can take real actions, such as creating a pull request

All actions are logged, transparent, and can be revoked at any time by the user.
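
As a rough sketch, the model behaves like a small trust ledger. The scope names follow the write-up; the class, thresholds, and scoring below are illustrative assumptions, not TrustLayer's actual code:

```typescript
// Hypothetical trust ledger; scope names follow the write-up, thresholds
// and scoring values are illustrative.
type Scope = "repo:read" | "repo:write";

interface AuditEntry {
  timestamp: Date;
  action: string;
  scope: Scope;
  success: boolean;
}

export class TrustLedger {
  private trust = 0;                  // agents start with zero trust...
  private granted = new Set<Scope>(); // ...and zero permissions
  readonly log: AuditEntry[] = [];    // every action is recorded

  // Trust an agent must earn before each scope unlocks.
  private static thresholds: Record<Scope, number> = {
    "repo:read": 0,  // minimal access is granted on request
    "repo:write": 5, // higher permissions must be earned
  };

  request(scope: Scope): boolean {
    if (this.trust < TrustLedger.thresholds[scope]) return false;
    this.granted.add(scope);
    return true;
  }

  has(scope: Scope): boolean {
    return this.granted.has(scope);
  }

  record(action: string, scope: Scope, success: boolean): void {
    this.log.push({ timestamp: new Date(), action, scope, success });
    this.trust += success ? 1 : -2; // successes earn trust, failures cost more
  }

  revoke(scope: Scope): void {
    this.granted.delete(scope); // the user can pull access at any time
  }
}
```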

This ensures that AI agents are:

  • secure
  • auditable
  • and always under user control

How we built it

We built TrustLayer using a full-stack architecture:

  • Frontend: React dashboard to visualize trust, permissions, and agent actions
  • Backend: Node.js + Express to manage logic, scoring, and integrations
  • AI: OpenAI API for code analysis and PR generation
  • GitHub API: for real repository access and pull request creation
  • Auth0 Token Vault: for secure permission handling and identity-based access

The system connects all these components into a unified flow:

  • permission request
  • trust evaluation
  • action execution
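
A hedged sketch of that flow as a single Express endpoint (the route, module paths, and executeAction helper are illustrative assumptions, not the actual TrustLayer code):

```typescript
import express from "express";
// Illustrative imports: TrustLedger matches the sketch above; executeAction
// stands in for whatever GitHub/OpenAI call the agent wants to make.
import { TrustLedger } from "./trustLedger";
import { executeAction } from "./actions";

const app = express();
app.use(express.json());
const ledger = new TrustLedger();

// One endpoint tying the flow together:
// permission request -> trust evaluation -> action execution.
app.post("/agent/act", async (req, res) => {
  const { scope, action } = req.body;

  // 1-2. Permission request and trust evaluation: grant the scope only if
  // the agent's earned trust clears the threshold for it.
  if (!ledger.has(scope) && !ledger.request(scope)) {
    return res.status(403).json({ error: `insufficient trust for ${scope}` });
  }

  // 3. Action execution, with the outcome fed back into the trust score.
  try {
    const result = await executeAction(action, scope);
    ledger.record(action, scope, true);
    res.json({ result });
  } catch (err) {
    ledger.record(action, scope, false); // failures lower trust
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000);
```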

Challenges we ran into

One of the biggest challenges was balancing a realistic system against a reliable demo.

  • Debugging GitHub API permissions (especially token scopes like Contents: Write)
  • Handling base64 decoding issues when fetching repository files (see the snippet after this list)
  • Ensuring the agent was analyzing real code instead of fallback samples
  • Preventing misleading UI states (like simulated PRs appearing real)
  • Designing a trust system that felt fair and didn’t block the demo flow
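
On the base64 point: GitHub's contents API returns file bodies base64-encoded, so they must be decoded before handing them to the AI. Roughly the fix we needed, sketched with @octokit/rest (the fetchFile helper is hypothetical):

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// GitHub's contents API returns file bodies base64-encoded (with embedded
// newlines, which Buffer.from tolerates); skipping the decode step is what
// produces garbled "code" for the AI to analyze.
async function fetchFile(owner: string, repo: string, path: string): Promise<string> {
  const { data } = await octokit.rest.repos.getContent({ owner, repo, path });
  if (Array.isArray(data) || !("content" in data)) {
    throw new Error(`${path} is not a regular file`);
  }
  return Buffer.from(data.content, "base64").toString("utf-8");
}
```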

We also had to ensure that all integrations — OpenAI, GitHub, and Auth0 — worked together smoothly in real time.

Accomplishments that we're proud of

  • Successfully creating real GitHub pull requests generated by an AI agent
  • Building a trust-based permission system instead of giving agents full access upfront
  • Designing a clean, interactive UI that clearly shows agent behavior and decision-making
  • Integrating multiple systems (AI, GitHub, Auth0) into a cohesive workflow
  • Turning a complex concept (AI security + trust) into a clear and demonstrable product
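
For context, opening a real pull request takes three GitHub API calls once repo:write is unlocked. A sketch with @octokit/rest (branch, path, and commit message are placeholders, not our exact values):

```typescript
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

// Illustrative values throughout; branch, path, and messages are placeholders.
async function openPullRequest(owner: string, repo: string, newContent: string) {
  // 1. Branch off the default branch's current HEAD.
  const { data: main } = await octokit.rest.git.getRef({ owner, repo, ref: "heads/main" });
  await octokit.rest.git.createRef({
    owner,
    repo,
    ref: "refs/heads/trustlayer-suggestion",
    sha: main.object.sha,
  });

  // 2. Commit the AI-generated change. The contents API expects base64;
  // updating an existing file would also require passing its current sha.
  await octokit.rest.repos.createOrUpdateFileContents({
    owner,
    repo,
    path: "TRUSTLAYER_SUGGESTION.md",
    branch: "trustlayer-suggestion",
    message: "docs: AI-suggested improvement",
    content: Buffer.from(newContent).toString("base64"),
  });

  // 3. Open the pull request against the default branch.
  const { data: pr } = await octokit.rest.pulls.create({
    owner,
    repo,
    title: "AI-suggested improvement",
    head: "trustlayer-suggestion",
    base: "main",
  });
  return pr.html_url;
}
```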

What we learned

  • AI systems need structured control mechanisms, not just intelligence
  • Permissions and identity systems (like Auth0) are critical for safe AI deployment
  • Small implementation details (like token scopes and API responses) can break entire workflows
  • Designing for clarity and trust in the UI is just as important as backend functionality
  • Real-world integrations (like GitHub) significantly increase the credibility of a project

What's next for TrustLayer: Secure AI Agents with Permission-Based Control

We see TrustLayer evolving into a full platform for managing AI agents in real-world environments.

Next steps include:

  • expanding support beyond GitHub to other APIs and services
  • introducing more advanced trust models (behavioral scoring, anomaly detection)
  • integrating deeper with identity providers for enterprise use cases
  • enabling multi-agent systems with shared trust policies
  • adding monitoring and alerting for suspicious agent behavior

Ultimately, we aim to make TrustLayer a foundational system for deploying AI agents safely at scale.

Built With

auth0, express, github-api, node.js, openai, react