Inspiration

Product Managers today are drowning in noise. Their "truth" is scattered across six different tabs: tickets in Linear, code in GitHub, specs in Notion, and endless threads in Slack. To make a single decision, a PM has to context switch dozens of times a day, manually copy-pasting data to piece together a picture of what's actually happening.

We realized that PMs don't need another chatbot that just "talks" about work. They need an Integrated Development Environment (IDE) for product strategy—a tool that can read the code, check the database, and execute the plan across their entire stack.

What it does

Scope AI is an AI-native "Operating System" for Product Managers. It acts as a unified intelligence layer on top of your existing tools (Linear, GitHub, Slack, Notion, Gmail, and Supabase).

It operates in three distinct modes:

  1. Synthesis (The Insight Engine): Instead of reading 100 Slack threads, Scope ingests them, clusters them by semantic meaning, and presents "Insight Cards." It turns noise (e.g., "Login is slow") into data ("50 users reported latency in auth.ts").
  2. Drafting (The Strategy Editor): A context-aware document editor. When you write a spec, Scope can pull in live citations. If you mention a bug, it links the Linear ticket. If you mention a feature, it reads the actual GitHub code to verify technical feasibility.
  3. Orchestration (The Action Layer): Scope turns text plans into API calls. With one click, it can generate 10 Linear tickets, draft a Slack announcement, and open a GitHub PR—all synced and ready for review.
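
To make the Synthesis mode concrete, here is a minimal sketch of how raw "Signals" could roll up into "Insight Cards". All type names are illustrative, and the semantic clustering (which in Scope comes from embeddings) is assumed to have already assigned each signal a cluster label:

```typescript
// Hypothetical shapes for Synthesis mode: raw "Signals" (Slack messages,
// tickets) are grouped into "Insight Cards". Names are illustrative.
type Signal = { id: string; source: "slack" | "linear"; text: string; cluster: string };
type InsightCard = { title: string; count: number; signalIds: string[] };

// Group signals that share a semantic cluster label. In the real app the
// clustering itself would come from embedding similarity, not a field.
function buildInsightCards(signals: Signal[]): InsightCard[] {
  const byCluster = new Map<string, Signal[]>();
  for (const s of signals) {
    const bucket = byCluster.get(s.cluster) ?? [];
    bucket.push(s);
    byCluster.set(s.cluster, bucket);
  }
  return [...byCluster.entries()].map(([title, group]) => ({
    title,
    count: group.length,
    signalIds: group.map((s) => s.id),
  }));
}

const cards = buildInsightCards([
  { id: "s1", source: "slack", text: "Login is slow", cluster: "auth latency" },
  { id: "s2", source: "slack", text: "Sign-in takes 10s", cluster: "auth latency" },
  { id: "s3", source: "linear", text: "Dark mode request", cluster: "theming" },
]);
// cards[0] → { title: "auth latency", count: 2, signalIds: ["s1", "s2"] }
```

This is how "Login is slow" and "Sign-in takes 10s" collapse into a single "auth latency" card with a count, instead of two separate lines of noise.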

How we built it

We architected Scope around a "Router-Pipeline" pattern, prioritizing reliability over open-ended autonomy.

  • Frontend: Built with Next.js 15 and a custom polymorphic UI that changes shape based on the user's mode (Grid for synthesis, Split-pane for drafting).
  • The Brain (AI): We used Dedalus Labs to orchestrate our AI agents. Dedalus manages the Model Context Protocol (MCP) connections, allowing Claude 3.5 Sonnet to securely "speak" the APIs of Linear, GitHub, and Slack without us writing boilerplate integration code.
  • Memory: We used Supabase with pgvector for Hybrid Search. We store "Signals" (tickets, chats, file chunks) and use metadata filtering combined with vector similarity to find the needle in the haystack.
  • Reliability: To solve the "60-second timeout" problem of Vercel functions, we used Trigger.dev (v3). Complex workflows (like "Synthesize the last month of Linear tickets") are offloaded to background jobs that run durably, streaming updates back to the UI.
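
The Hybrid Search step can be sketched in-memory (the real version runs inside Supabase/pgvector as SQL): apply the metadata predicates first to shrink the candidate set, then rank only the survivors by vector similarity. The types and filter fields here are assumptions for illustration:

```typescript
// In-memory sketch of Hybrid Search: SQL-style metadata filters first,
// cosine similarity ranking second. Field names are illustrative.
type StoredSignal = { id: string; source: string; project: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function hybridSearch(
  signals: StoredSignal[],
  query: number[],
  filter: { source?: string; project?: string },
  k: number,
): StoredSignal[] {
  return signals
    // Cheap structured filters shrink the haystack first...
    .filter((s) => (!filter.source || s.source === filter.source) &&
                   (!filter.project || s.project === filter.project))
    // ...then vector similarity finds the needle.
    .sort((a, b) => cosine(b.embedding, query) - cosine(a.embedding, query))
    .slice(0, k);
}

const top = hybridSearch(
  [
    { id: "a", source: "linear", project: "auth", embedding: [1, 0] },
    { id: "b", source: "linear", project: "auth", embedding: [0, 1] },
    { id: "c", source: "slack", project: "auth", embedding: [1, 0] },
  ],
  [1, 0],
  { source: "linear" },
  1,
);
// top[0].id → "a" (closest linear signal; the slack match is filtered out)
```

Filtering before ranking is what keeps retrieval cheap: the expensive similarity comparison only ever touches rows that already match the structured predicates.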

Challenges we ran into

  • The "Context Window" Trap: Feeding 5,000 tickets into an LLM is slow and expensive. We had to implement a Hybrid Search strategy (Vector Search + SQL Metadata Filters) to retrieve only the relevant context before sending it to the model.
  • Rendering UI from Text: We didn't want the AI to just output text blocks. We solved this by defining "UI Tools" (e.g., render_insight_card, render_linear_draft). The AI calls these "functions," and our frontend intercepts them to render interactive React components instead of text.
  • Hallucinations vs. Reality: Early versions would invent ticket statuses. We fixed this by implementing a "Live Fetch" pattern: the Vector DB finds the Ticket ID, but the Agent must call the live Linear API to verify the current status before answering.
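
The "Live Fetch" pattern above can be sketched as follows. The fetcher is injected so a stub can stand in for the live Linear client here; the ticket ID, statuses, and function names are illustrative:

```typescript
// Sketch of the "Live Fetch" pattern: the vector index only locates the
// ticket ID; the current status must come from the live API, never the index.
type Ticket = { id: string; status: string };

async function answerWithVerifiedStatus(
  indexedTicketId: string,
  staleStatus: string,                          // status captured at index time
  fetchTicket: (id: string) => Promise<Ticket>, // live Linear lookup (injected)
): Promise<string> {
  const live = await fetchTicket(indexedTicketId); // verify before answering
  const note = live.status === staleStatus ? "" : " (status changed since indexing)";
  return `Ticket ${live.id} is ${live.status}${note}`;
}

// Stub standing in for the live API: the ticket moved to Done
// after it was indexed as "In Progress".
const stubFetch = async (id: string): Promise<Ticket> => ({ id, status: "Done" });

answerWithVerifiedStatus("ENG-42", "In Progress", stubFetch)
  .then((msg) => console.log(msg));
// → "Ticket ENG-42 is Done (status changed since indexing)"
```

The agent still benefits from the index (fast retrieval of *which* ticket matters), but every status it reports is grounded in a fresh API response.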

Accomplishments that we're proud of

  • Seamless "Action" Layer: We moved beyond "Chat." Scope doesn't just suggest tickets; it creates them. The Orchestration mode feels like magic because it handles the messy API payloads perfectly using structured JSON generation.
  • The Polymorphic Interface: The app feels like a pro tool, not a chat wrapper. The way the Center Panel morphs from a Masonry Grid (Synthesis) to a Doc Editor (Drafting) makes the AI feel integrated into the workflow, not bolted on.
  • Smart File Ingestion: Users can drag a PDF (like user interview notes) into a chat, and Scope instantly indexes it, treats it as a temporary "Signal," and cross-references it against existing Linear tickets.
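
The "structured JSON generation" behind the Action Layer can be sketched as a validate-before-call step: the model emits JSON drafts, and nothing reaches an API until the shape checks out. The field names here are illustrative, not Linear's exact API schema:

```typescript
// Illustrative sketch: validate model-emitted ticket drafts before any
// API call. Field names are assumptions, not Linear's real schema.
type TicketDraft = { title: string; description: string; priority: 1 | 2 | 3 | 4 };

function parseTicketDrafts(modelOutput: string): TicketDraft[] {
  const raw = JSON.parse(modelOutput);
  if (!Array.isArray(raw)) throw new Error("expected a JSON array of drafts");
  return raw.map((d, i) => {
    if (typeof d.title !== "string" || typeof d.description !== "string")
      throw new Error(`draft ${i}: title/description must be strings`);
    if (![1, 2, 3, 4].includes(d.priority))
      throw new Error(`draft ${i}: priority must be 1-4`);
    return d as TicketDraft;
  });
}

const drafts = parseTicketDrafts(
  '[{"title":"Fix auth latency","description":"p95 login > 10s","priority":1}]',
);
// drafts[0].title → "Fix auth latency"
```

Rejecting malformed drafts up front is what makes "one click → 10 tickets" feel safe: a bad payload fails loudly in validation instead of half-creating tickets mid-batch.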

What we learned

  • Agents need structure. Purely autonomous agents often get lost. We learned that a Router (classifying intent) + Workflow (hard-coded TypeScript steps) + Worker (AI doing specific tasks) is vastly superior to a "Loop until done" agent.
  • Data freshness is everything. A vector database is stale the moment you index it. Real-time verification against the source APIs (Linear/GitHub) is non-negotiable for a professional tool.
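
The Router + Workflow + Worker structure can be sketched like this. The keyword rules stand in for the LLM intent classifier, and the step names are illustrative:

```typescript
// Sketch of Router → Workflow → Worker. The regexes stand in for an
// LLM intent classifier; step names are illustrative placeholders.
type Intent = "synthesize" | "draft" | "orchestrate";

function routeIntent(message: string): Intent {
  const m = message.toLowerCase();
  if (/(create|file|open).*(ticket|pr)/.test(m)) return "orchestrate";
  if (/(spec|draft|write)/.test(m)) return "draft";
  return "synthesize"; // default: summarize what's already out there
}

// Each workflow is a fixed sequence of TypeScript steps, not an open
// "loop until done" agent; AI workers handle only the narrow steps.
const workflows: Record<Intent, string[]> = {
  synthesize: ["fetchSignals", "clusterSignals", "renderInsightCards"],
  draft: ["loadContext", "generateSpec", "attachCitations"],
  orchestrate: ["parsePlan", "validatePayloads", "callAPIs"],
};

const intent = routeIntent("create tickets for the auth latency fix");
// intent → "orchestrate"; workflows[intent] is the hard-coded plan to run
```

Because the steps are ordinary code, failures are observable and retryable per step, which is exactly the reliability property the pure agent loop never gave us.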

What's next for Scope AI

  • GraphRAG: Moving beyond vector similarity to understand relationships (e.g., "Person A usually works on Component B").
  • More Integrations: Adding Intercom for customer support signals and Figma for design context.
  • Multi-Player Mode: Allowing teams to collaborate on a "Scope" in real time.

Built With

  • anthropic
  • claudecode
  • dedalus
  • githubapi
  • gmailapi
  • linearapi
  • mcp
  • nextjs
  • notionapi
  • pgvector
  • rag
  • react
  • slackapi
  • supabase
  • tailwind
  • trigger.dev