SimieBot

Inspiration

We built SimieBot because most AI assistants still have a trust problem: they are either too disconnected to be useful, or too powerful without clear permission boundaries. We wanted to explore a better model for agentic software, one where the assistant can actually take action across a user’s digital life, but only with explicit scopes, visible consent, and step-up approval for higher-risk operations.

The project was also inspired by a simple question: what would an AI assistant feel like if it behaved less like a black box and more like a responsible operator? That led us to focus not just on features, but on security architecture, user control, and the experience of connected-account authorization.

What it does

SimieBot is a secure, chat-based assistant that helps users work across connected services such as Google Drive, Gmail, Calendar, GitHub, Slack, and YouTube.

Core capabilities include:

  • Reading and acting on connected-account data through Auth0 Token Vault
  • Finding unread email, checking calendar events, and summarizing user profile context
  • Browsing, locating, creating, and renaming Google Drive files
  • Running creator workflows from Google Drive to YouTube
  • Uploading a Drive video directly to YouTube, with user-controlled metadata like title, description, and visibility
  • Using explicit approval for high-stakes actions such as publishing content

The core idea is that SimieBot does not just answer questions. It performs real actions, but only within explicit permission boundaries.

How we built it

We built SimieBot as a full-stack web app with a strong separation between user experience, agent orchestration, and authorization.

Frontend:

  • Next.js App Router
  • React-based chat UI with interrupt handling for authorization and approval flows
  • Clear permission and security pages to explain what the assistant can and cannot do

Backend and agent orchestration:

  • LangGraph for agent routing and tool orchestration
  • A multi-node architecture with a general assistant path and a creator workflow path
  • Tool-based execution for Gmail, Calendar, Drive, GitHub, Slack, and YouTube actions
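
The split between the general assistant path and the creator workflow path can be sketched as a small routing function. This is an illustrative stand-in (the node names and keyword heuristics are hypothetical), not our actual LangGraph graph, which uses conditional edges between nodes:

```python
# Minimal sketch of request routing between the general assistant path
# and the creator workflow path. Node names and the keyword heuristic
# are illustrative placeholders for the real LangGraph routing logic.

CREATOR_SIGNALS = {"youtube", "publish", "upload", "render", "thumbnail"}

def route_request(message: str) -> str:
    """Return the agent path a user message should be sent to."""
    words = set(message.lower().split())
    # A Drive request only counts as a creator workflow when paired with
    # a publishing signal; otherwise it is a generic file task handled
    # by the general assistant.
    if words & CREATOR_SIGNALS:
        return "creator_workflow"
    return "general_assistant"

print(route_request("Upload my demo video from Drive to YouTube"))  # creator_workflow
print(route_request("Rename the budget spreadsheet in Drive"))      # general_assistant
```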

Authorization and security:

  • Auth0 for authentication
  • Auth0 Token Vault for connected-account access
  • Scoped provider access for each tool
  • Async step-up authorization for sensitive actions like publishing to YouTube
  • Explicit approval checkpoints instead of silent execution
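
The approval-checkpoint idea can be sketched as follows; the tool names and the interrupt shape are hypothetical simplifications of the real flow, where LangGraph interrupts pause execution until the UI resolves them:

```python
# Sketch of an explicit approval checkpoint: high-risk tools never
# execute silently; they return a pending-approval interrupt that the
# chat UI must resolve before the action proceeds.

HIGH_RISK_TOOLS = {"youtube_publish", "drive_delete"}  # illustrative set

def execute_tool(name: str, args: dict, approved: bool = False) -> dict:
    if name in HIGH_RISK_TOOLS and not approved:
        # Surface a step-up approval request instead of acting silently.
        return {"status": "approval_required", "tool": name, "args": args}
    return {"status": "executed", "tool": name}

result = execute_tool("youtube_publish", {"video_id": "abc", "visibility": "public"})
print(result["status"])  # approval_required

# After the user grants step-up approval in the UI:
result = execute_tool("youtube_publish", {"video_id": "abc"}, approved=True)
print(result["status"])  # executed
```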

Data and media pipeline:

  • Supabase for lightweight thread/history storage
  • AWS S3 for staging creator assets
  • Amazon Nova for edit-planning experiments
  • FFmpeg for render workflows
  • YouTube Data API for publishing
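
The FFmpeg render step can be illustrated by assembling the command for a single trim-and-reencode pass. Paths and trim values are placeholders; in the real pipeline the input is staged from S3 first:

```python
# Sketch of the render step: build an FFmpeg command that trims a
# staged clip before upload. Only the command is constructed here;
# the pipeline would run it via a subprocess.

def build_render_cmd(src: str, dst: str, start: float, duration: float) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-ss", str(start),      # seek to clip start (seconds)
        "-i", src,              # staged input file
        "-t", str(duration),    # clip length in seconds
        "-c:v", "libx264",      # re-encode video for a frame-accurate cut
        "-c:a", "aac",          # re-encode audio
        dst,
    ]

cmd = build_render_cmd("staged/clip.mp4", "out/final.mp4", 3.0, 12.5)
print(" ".join(cmd))
```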

A major focus of the build was making the assistant production-aware: least-privilege scopes, clear escalation points, and failure handling that tells the user what happened instead of hiding it.

Challenges we ran into

The hardest part of the project was not building tools. It was building trustworthy behavior around tools.

Some of the biggest challenges were:

  • Routing the right user request to the right agent path: requests involving Google Drive and YouTube could easily be misinterpreted as a generic file task instead of a creator workflow.

  • Managing connected-account permissions cleanly: different actions required different Google scopes, and we had to be careful not to over-scope the assistant.

  • Handling high-stakes actions safely: publishing to YouTube should never happen silently, so we introduced explicit approval boundaries and step-up authorization.

  • Making authorization UX reliable: we ran into interrupt and polling edge cases where authorization flows could loop if not handled carefully. Fixing that taught us a lot about real-world agent state management.

  • Working with structured output from media-planning models: experimental edit-plan generation could fail when model output was not perfectly structured, so we added fallback behavior to keep the workflow moving.

  • Keeping the experience understandable: security is only useful if the user can see what the system is doing, why it needs access, and what will happen next.
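
The structured-output fallback can be sketched like this; the plan schema and field names are illustrative, not our actual edit-plan format:

```python
# Sketch of fallback behavior for edit-plan generation: if the model's
# output is not valid JSON with the expected fields, fall back to a
# safe default plan instead of failing the whole workflow.
import json

DEFAULT_PLAN = {"cuts": [], "strategy": "passthrough"}

def parse_edit_plan(raw: str) -> dict:
    try:
        plan = json.loads(raw)
        if isinstance(plan, dict) and "cuts" in plan:
            return plan
    except json.JSONDecodeError:
        pass
    return DEFAULT_PLAN  # keep the workflow moving on malformed output

print(parse_edit_plan('{"cuts": [[0, 5]]}'))  # {'cuts': [[0, 5]]}
print(parse_edit_plan("not json at all"))     # falls back to DEFAULT_PLAN
```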

Accomplishments that we're proud of

We are especially proud that SimieBot is not just a demo chatbot with mocked actions. It performs real connected-account workflows with real permission boundaries.

Highlights:

  • Building the project around Auth0 Token Vault instead of treating security as an afterthought
  • Creating a chat assistant that can safely act across multiple providers
  • Adding explicit approval for high-risk actions like YouTube publishing
  • Supporting both general productivity workflows and creator workflows in one assistant
  • Making Google Drive to YouTube publishing possible through a secure staged workflow
  • Designing an interface that surfaces authorization and approval states instead of hiding them
  • Turning failure cases into product insight, especially around scope design, interrupt handling, and retry behavior

What we learned

This project taught us that agent authorization is not just an infrastructure problem. It is also a product-design problem.

We learned that:

  • Least-privilege access matters much more once an AI agent can take actions on a user’s behalf
  • Users need to understand what permissions are being used, not just click through a consent screen once
  • High-stakes actions need stronger controls than low-stakes read operations
  • Step-up authorization is one of the most important patterns for trustworthy agents
  • Interrupt handling and approval polling need careful design, or even good authorization systems can create confusing UX
  • Production-aware agent systems need good fallbacks, good error messages, and clear state transitions
  • Token Vault is powerful not only because it stores credentials securely, but because it encourages better system design around scoped access and delegated capability
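
The polling lesson in particular can be sketched as a loop-guarded wait: cap attempts and back off so an unresolved authorization can never spin forever. The `check_approval` callable here is a hypothetical stand-in for a real status endpoint:

```python
# Sketch of loop-guarded approval polling with exponential backoff.
# Terminal states end the loop; everything else eventually times out
# and is surfaced to the user instead of looping silently.
import time

def wait_for_approval(check_approval, max_attempts: int = 5,
                      base_delay: float = 0.01) -> str:
    for attempt in range(max_attempts):
        status = check_approval()
        if status in ("approved", "denied"):
            return status                        # terminal state
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return "timed_out"  # surface the stall rather than spin forever

# Simulated flow: pending twice, then approved.
responses = iter(["pending", "pending", "approved"])
print(wait_for_approval(lambda: next(responses)))  # approved
```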

What's next for SimieBot

Our next step is to evolve SimieBot from a strong prototype into a more complete secure agent platform.

Planned next steps include:

  • Better connected-account diagnostics so users can immediately see which scopes are missing
  • Richer Drive actions such as moving, deleting, and organizing files with approval
  • Stronger audit trails for agent actions and approvals
  • Improved creator workflows for thumbnails, captions, and publishing presets
  • Better policy controls for sensitive actions across providers
  • More transparent permission summaries inside the chat experience
  • Better retry/state handling for long-running authorization flows
  • A reusable pattern library for building permission-aware AI agents beyond this project

Bonus Blog Post: Building SimieBot with Auth0 Token Vault

One of the most valuable parts of building SimieBot was seeing how different agent design becomes when authorization is treated as a first-class system concern.

A lot of AI projects start from capability: what tools should the agent have, what APIs should it call, and how much can it automate? We started from the opposite direction. We asked what a trustworthy agent should be allowed to do, how that access should be granted, and when the user should be pulled back into the loop.

That is where Auth0 Token Vault became central to the architecture. Instead of designing around long-lived third-party credentials inside our app, we designed around connected accounts, scoped provider access, and delegated authorization. That changed both the security model and the user experience. The assistant could still perform meaningful actions, but each one had a visible boundary.

For example, reading Drive metadata is not the same as renaming a Drive file, and publishing to YouTube is definitely not the same as summarizing a profile. SimieBot reflects those differences through scoped connections and explicit approval for high-stakes behavior. In practice, that meant building multiple layers of control: provider-level scopes, tool-level restrictions, and step-up approval for sensitive actions.
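
Those layers of control can be sketched as a single authorization decision evaluated in order. The tool names, scope strings, and policy table are hypothetical illustrations of the pattern, not our production policy:

```python
# Sketch of layered authorization: a tool-level restriction, a
# provider-scope check from the connected account, and a step-up
# approval gate for high-stakes actions, evaluated in that order.

TOOL_POLICY = {
    "youtube_publish": {"high_stakes": True,  "scope": "youtube.upload"},
    "drive_list":      {"high_stakes": False, "scope": "drive.read"},
}

def authorize(tool: str, granted_scopes: set, approved: bool) -> str:
    # Layer 1: tool-level restriction — unknown tools are rejected outright.
    policy = TOOL_POLICY.get(tool)
    if policy is None:
        return "tool_not_allowed"
    # Layer 2: provider-level scope granted on the connected account.
    if policy["scope"] not in granted_scopes:
        return "missing_scope"
    # Layer 3: step-up approval for high-stakes actions.
    if policy["high_stakes"] and not approved:
        return "approval_required"
    return "allowed"

print(authorize("youtube_publish", {"youtube.upload"}, approved=False))  # approval_required
print(authorize("youtube_publish", {"youtube.upload"}, approved=True))   # allowed
```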

We also learned that agent authorization is deeply tied to UX. A technically secure system can still feel broken if approval polling loops, if missing scopes are unclear, or if the assistant cannot explain why an action stopped. Some of our most important work ended up being around interrupt handling, retry logic, and making the assistant communicate permission state clearly.

The biggest takeaway from this project is that Token Vault is not only a secure credential pattern. It is also a design pattern for building better AI agents. It encourages developers to think in terms of explicit capability, user control, and safe delegation. We believe that pattern is going to matter far beyond this hackathon, especially as more agents move from answering questions to taking real actions in real user accounts.

Built With

  • auth0
  • langchain
  • langgraph
  • nextjs