About the Project

Inspiration

Organizing large-scale tech events — hackathons, GDG galas, conferences — is a brutal coordination problem. You're simultaneously hunting for sponsors, designing marketing materials, parsing venue contracts, building timelines, drafting outreach emails, managing budgets, and pushing updates to Discord. Every task lives in a different tool, a different tab, a different person's head. We kept asking ourselves: What if you could just tell a system what needs to happen, and it figured out the rest — in parallel?

That question led to EventOS. We were inspired by the idea of a "mission control" interface — something that feels like you're commanding an entire operations team from a single terminal. The multi-agent AI paradigm gave us the architecture to make it real: instead of one monolithic chatbot, we built a fleet of domain-expert agents that the user orchestrates through natural language.

What it does

EventOS is a multi-agent event management platform where you type a single natural-language command and multiple specialized AI agents execute in parallel. The system provides:

  • Command Center — A live terminal UI where you issue directives like "Find 10 tech sponsors, build a timeline for our hackathon, and generate a hype video." The Master Brain parses your intent and simultaneously dispatches the relevant agents.
  • Marketing Factory — Two sub-agents: a Creative Designer (Gemini Flash Image) for images/flyers, and a Cinematic Creator (Veo 3.1) for promotional videos.
  • Sponsor Scout — A Web Scraper (Google Custom Search API) discovers companies, then a Tier Matcher (Gemini) evaluates each lead, assigns sponsorship tiers (Platinum/Gold/Silver/Bronze), estimates dollar values, and exports an Excel spreadsheet.
  • Project Manager — Converts natural-language goals into a structured timeline with milestones and tasks, cross-referencing extracted venue constraints to prevent scheduling conflicts.
  • Compliance Shield — Ingests uploaded venue PDFs, extracts their text with PyPDF2, then uses Gemini to identify hard constraints (noise limits, load-in windows, fire codes, insurance deadlines).
  • Communications — A Discord Sub-agent that creates servers and pushes updates via webhooks, and an Email Sub-agent that drafts personalized outreach with Gemini and sends from the user's own Gmail via OAuth.
  • Finance — A Budget Planner that generates category-level budgets with Excel export, and an Expense Tracker that logs spending and flags overruns in real time.
  • Context Agent — A web researcher that searches, fetches, and summarizes relevant context (e.g., sponsorship benchmarks, competitor events) to enrich the other agents' outputs.

All agent outputs flow into MongoDB, and the frontend dynamically renders them across five dashboard views: Command Center, The Vault (assets), Sponsor Hub, Logistics (timeline + compliance), and Finance.

How we built it

The frontend is a React + TypeScript single-page application scaffolded with Vite, styled with TailwindCSS using a custom "Gilded Noir" dark theme (obsidian blacks, metallic golds, antique brass). We used shadcn/ui + Radix UI for component primitives, Framer Motion for animations, Recharts for data visualization, and React Query for server-state management. The design language centers around a spinning golden icosahedron wireframe that serves as the system's visual heartbeat.

The backend is a Python FastAPI server — the entire orchestration engine. At its core:

  1. Master Brain — A Gemini 3.1 Pro prompt with response_mime_type: "application/json" that routes user intent into a structured dispatch map. It supports 12 distinct intents and extracts parameters for each.
  2. Orchestrator — Uses asyncio.gather() to fire matched agents in true parallel. Each agent receives a shared asyncio.Queue for log streaming.
  3. SSE (Server-Sent Events) — Real-time log delivery from backend → frontend via sse-starlette. The terminal updates line-by-line as agents work, with heartbeat keep-alives.
  4. Persistent Logs — Every AgentLog is written to MongoDB's terminal_logs collection in the background, so terminal history survives page refreshes and project switches.
  5. Google OAuth — Full login flow with JWT token management. The user's Gmail access token is stored so the Email Sub-agent can send from their authenticated account.
  6. GPU Gateway — An inference gateway client that talks to a Vultr A40 instance running Stable Diffusion and CogVideoX behind a FastAPI endpoint, with exponential-backoff retry logic.

State management uses MongoDB Atlas (async via motor) with collections for projects, assets, leads, roadmap, rules, budgets, context, terminal_logs, and agent_logs. The frontend uses a custom EventBusContext (React Context) that maps agent names to UI panels, tracks active agents/sub-agents, manages tab notifications, and handles SSE stream lifecycle.

Authentication uses Google OAuth 2.0 with scopes for openid, userinfo.email, userinfo.profile, and gmail.send. The backend exchanges the auth code for tokens, upserts the user in MongoDB, generates a JWT, and redirects back to the frontend.
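The JWT step at the end of that flow can be sketched with the standard library alone — a minimal HS256 signer; the real backend presumably uses a JWT library, and the claim layout and secret shown here are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(user: dict, secret: str, ttl_seconds: int = 3600) -> str:
    """Sign a compact HS256 JWT for a freshly upserted user."""
    header = {"alg": "HS256", "typ": "JWT"}
    claims = {
        "sub": user["email"],           # illustrative claim layout
        "name": user.get("name", ""),
        "exp": int(time.time()) + ttl_seconds,
    }
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, claims)
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

token = mint_jwt({"email": "organizer@example.com"}, secret="dev-only-secret")
```

The resulting token is the standard three dot-separated segments (header, claims, signature) that the frontend can carry in an `Authorization` header.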

Challenges we ran into

  • Parallel agent coordination: Getting asyncio.gather() to play nicely with SSE streaming was non-trivial. We needed a PersistentLogQueue that both pushes to the SSE stream and saves to MongoDB without blocking the event loop.
  • Migrating from n8n to Python: We originally prototyped the orchestration layer in n8n. When we hit its limits on parallelism and custom logic, we rebuilt the entire backend from scratch in Python during the hackathon — a significant pivot.
  • Gemini JSON reliability: Getting Gemini to return consistently parseable JSON (especially for complex multi-field outputs like tier matching and budget planning) required careful prompt engineering, response_mime_type enforcement, and robust fallback parsing with markdown fence stripping.
  • OAuth + Gmail integration: Implementing a full Google OAuth flow that also requests gmail.send scope — and then using those stored tokens to send emails from the user's own inbox — involved wrestling with token refresh, PKCE complexities, and credential serialization.
  • Real-time UI state tracking: Mapping backend agent names (IMAGE_SUBAGENT, TIER_MATCHER, etc.) to frontend panel highlighting, sub-agent spinners, and cross-tab notifications required a carefully designed mapping layer in the EventBusContext.
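The fallback parsing mentioned above looks roughly like this — a hedged sketch of the markdown-fence stripping, not the exact heuristics in the backend:

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Best-effort parse of an LLM reply that should be JSON.

    Handles two common failure modes: markdown code fences wrapped
    around the payload, and prose before/after the JSON object.
    """
    text = raw.strip()
    # Strip ```json ... ``` fences if the model wrapped its output.
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the outermost {...} span in the reply.
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            return json.loads(text[start : end + 1])
        raise
```

With `response_mime_type` enforced, the first `json.loads` almost always succeeds; the fence and span fallbacks cover the occasional drift in complex multi-field outputs.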

Accomplishments we're proud of

  • True parallelism: A single prompt like "Find sponsors and generate a hype video" fires the Sponsor Scout and Marketing Factory simultaneously — both streaming logs to the terminal in real time.
  • End-to-end email: You can say "Email Sarah at Vercel about sponsoring our hackathon", and EventOS drafts a personalized email with Gemini, then sends it from your actual Gmail account — no copy-paste required.
  • Cross-agent intelligence: The Project Manager agent cross-references the Compliance Shield's extracted venue constraints before generating timelines, preventing conflicts like scheduling load-in during noise curfew hours.
  • The design aesthetic: The "Gilded Noir" theme with the spinning icosahedron, pulsing gold agent indicators, and live terminal creates a premium mission-control experience that feels like commanding an operations team.

What we learned

  • Agentic architecture patterns: How to design a router-dispatcher-worker pattern where the "brain" only classifies intents and the orchestrator handles parallel execution and result aggregation.
  • SSE vs. WebSockets: SSE turned out to be the better fit for our use case — unidirectional server→client log streaming with automatic reconnection, simpler than full WebSocket management.
  • Prompt engineering for structured output: Using response_mime_type: "application/json" with a rigorous schema in the system prompt is dramatically more reliable than asking Gemini to "return JSON" in prose.
  • The power of asyncio.gather(): Python's native async primitives are surprisingly effective for multi-agent orchestration when you don't need the overhead of heavy frameworks like LangGraph or CrewAI.
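Part of what made SSE feel simple is that the wire format itself is trivial — here is a dependency-free sketch of the `text/event-stream` framing that `sse-starlette` produces for us (the event name and log line are illustrative):

```python
from typing import Iterator, Optional

def sse_frame(data: str, event: Optional[str] = None) -> str:
    """Format one Server-Sent Events frame (text/event-stream)."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    # Multi-line payloads become repeated data: fields, per the SSE spec.
    lines.extend(f"data: {line}" for line in data.splitlines() or [""])
    return "\n".join(lines) + "\n\n"

def log_stream(messages: list) -> Iterator[str]:
    """Yield agent log lines as SSE frames, ending with a heartbeat."""
    for msg in messages:
        yield sse_frame(msg, event="agent_log")
    yield ": heartbeat\n\n"  # comment frame used as a keep-alive

frames = list(log_stream(["[TIER_MATCHER] scoring lead 1/10"]))
```

Because each frame is just newline-delimited text over a long-lived HTTP response, browsers reconnect automatically via `EventSource` — no handshake or message-type protocol to manage as with WebSockets.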

What's next for EventOS

  • Live GPU inference: Fully deploying the Stable Diffusion and CogVideoX models on the Vultr A40 instance so image and video generation is real, not placeholder URLs.
  • Agent memory and learning: Giving agents access to prior conversation context and past project data so they improve over time — e.g., the Tier Matcher remembering which sponsors converted.
  • Multi-user collaboration: A real-time shared terminal for organizing teams, with role-based permissions (admin vs. volunteer).
  • Mobile companion app: Push notifications from the Discord Sub-agent and budget alerts directly to organizers' phones.
  • Plugin marketplace: Let the community build custom agents (e.g., a Ticketing Agent, a Volunteer Scheduler, a Social Media Poster) that plug into the registry.

Built With

  • aiohttp
  • asyncio
  • discord-bot
  • fastapi
  • framer-motion
  • gemini
  • gmail-api
  • google-custom-search
  • google-oauth
  • jwt
  • mongodb-atlas
  • motor
  • pydantic
  • python
  • python-dateutil
  • radix-ui
  • react
  • react-query
  • react-router
  • recharts
  • shadcn/ui
  • sonner
  • sse-starlette
  • tailwindcss
  • typescript
  • uvicorn
  • vite
  • vultr
  • webhook