Inspiration
Traditional LLM chat interfaces are linear and transient. When you want to explore a side topic, you either clutter the main thread or spin up yet another disconnected chat. We wanted to build a "branching" conversation interface where you can select any part of an LLM response you have questions about and sprout a new "leaf" discussion, preserving the main flow while still allowing deep dives.
What it does
Leafy is a web application for navigable, non-linear AI conversations.
- Main Chat: A standard chat interface for your primary discussion.
- Leaf Threads: Select any text in a message and "Ask in new leaf" to spawn a side-conversation about that specific topic. The original text becomes a clickable link back to the leaf (a sketch of the underlying data shape follows this list).
- Highlights: Color-code important parts of the conversation (Golden ideas, Red issues, etc.) for easy retrieval later.
- Context Engineering: When a leaf is created, we pass Gemini the relevant surrounding history rather than duplicating the entire conversation, so it understands where the question came from.
- Visual Navigation: A 3-panel layout allows you to view the main thread, the active leaf, and your saved highlights simultaneously.
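To make the leaf mechanics concrete, here's a minimal TypeScript sketch of how a leaf can anchor itself to a selection in a parent message (the names and fields are illustrative assumptions, not our exact schema):

```typescript
// Illustrative shapes only; field names are assumptions, not Leafy's real schema.
interface Message {
  id: string;
  role: "user" | "assistant";
  content: string;
}

interface Leaf {
  id: string;
  parentMessageId: string; // message the selection came from
  anchorText: string;      // exact text the user selected
  anchorStart: number;     // character offsets so the link can be rendered inline
  anchorEnd: number;
  messages: Message[];     // the side-conversation itself
}

// "Ask in new leaf": capture the selection and open an empty side thread.
function createLeaf(parent: Message, start: number, end: number): Leaf {
  return {
    id: crypto.randomUUID(),
    parentMessageId: parent.id,
    anchorText: parent.content.slice(start, end),
    anchorStart: start,
    anchorEnd: end,
    messages: [],
  };
}
```

Keeping the character offsets alongside the selected text is what lets the original selection render as a clickable link back into the leaf.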
How we built it
- Frontend: React 18 with TypeScript and Vite for fast builds. We used TailwindCSS for a clean, modern UI and Zustand for state management (a minimal sketch of the panel store follows this list).
- Backend: Python FastAPI serving as the bridge to the AI.
- AI: Google's Gemini API for high-quality responses, intent classification, and summarization.
- Database: Supabase (Postgres) to store conversations, leaves, and highlights with relational integrity.
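For a rough idea of the state-management side, a Zustand store for the panel layout can be as small as this (the state shape and action names here are assumptions, not our actual store):

```typescript
import { create } from "zustand";

// Minimal sketch of a Zustand store for the 3-panel layout.
// State shape and action names are assumptions, not Leafy's actual store.
interface PanelState {
  activeLeafId: string | null; // which leaf is open in the side panel
  highlightsOpen: boolean;     // whether the highlights panel is visible
  openLeaf: (leafId: string) => void;
  closeLeaf: () => void;
  toggleHighlights: () => void;
}

export const usePanelStore = create<PanelState>((set) => ({
  activeLeafId: null,
  highlightsOpen: false,
  openLeaf: (leafId) => set({ activeLeafId: leafId }),
  closeLeaf: () => set({ activeLeafId: null }),
  toggleHighlights: () => set((s) => ({ highlightsOpen: !s.highlightsOpen })),
}));
```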
Challenges we ran into
- Context Management: Passing the right amount of context to the AI for a leaf thread was tricky. Too little, and it loses the plot; too much, and it gets confused and expensive. We implemented a "Context Pack" system to curate what history gets sent (sketched below).
- UI Complexity: Managing a multi-column layout where panels open, close, and resize on different screen sizes required careful state management and CSS grid/flexbox logic.
- Real-time Updates: Keeping the main thread and its leaves in sync so that a new leaf's link appears in the parent message immediately after creation.
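The gist of the Context Pack, sketched in TypeScript for readability (the real version is assembled in our Python backend, and the names and limits below are assumptions):

```typescript
type Message = { id: string; role: "user" | "assistant"; content: string };

// Illustrative sketch of a "Context Pack": the selection, its parent message,
// a handful of preceding turns, and an optional rolling summary instead of
// the whole conversation.
interface ContextPack {
  anchorText: string;      // the selection the leaf was created from
  parentMessage: string;   // the full message the selection came from
  recentTurns: Message[];  // only the last few turns of the main thread
  threadSummary?: string;  // optional rolling summary of older history
}

function buildContextPack(
  mainThread: Message[],
  parent: Message,
  anchorText: string,
  threadSummary?: string,
  maxRecentTurns = 6,
): ContextPack {
  const parentIndex = mainThread.findIndex((m) => m.id === parent.id);
  const end = parentIndex >= 0 ? parentIndex : mainThread.length;
  // Send only the turns leading up to the selection, not the whole conversation.
  const recentTurns = mainThread.slice(Math.max(0, end - maxRecentTurns), end);
  return { anchorText, parentMessage: parent.content, recentTurns, threadSummary };
}
```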
Accomplishments that we're proud of
- Full-Stack Integration: We brought the frontend, backend, database, and AI together into one working system, which taught us how the pieces of a complex app communicate and support each other in practice.
- Polished UI: The app doesn't look like a hackathon prototype; it feels like a tool you could use daily.
- Seamless Integration: Supabase and Gemini work together perfectly to provide a fast, smart experience.
What we learned
- LLM Context is Key: The quality of a "sidebar" conversation depends entirely on how well you frame the parent context for the model.
- Supabase is Fast: Setting up auth and database tables with Supabase saved us hours of boilerplate backend work.
- React Query: Essential for keeping the frontend in sync with our backend without complicated manual fetching logic.
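For example, a create-leaf mutation that invalidates the main-thread query is enough to make the new link appear right away; the endpoint and payload below are hypothetical, not our actual API:

```typescript
import { useMutation, useQueryClient } from "@tanstack/react-query";

// Sketch of a "create leaf" mutation that keeps the main thread in sync.
// The route and payload are hypothetical, not Leafy's actual API.
function useCreateLeaf(conversationId: string) {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: async (payload: { parentMessageId: string; anchorText: string }) => {
      const res = await fetch(`/api/conversations/${conversationId}/leaves`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (!res.ok) throw new Error("Failed to create leaf");
      return res.json();
    },
    // Refetch the main thread so the new leaf link appears immediately.
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: ["conversation", conversationId] });
    },
  });
}
```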
What's next for Leafy
- Semantic Long-Term Memory: Use pgvector embeddings to retrieve relevant context from any past conversation, not just the current session (a rough sketch of the idea follows this list).
- Multi-Agent Summarization: Deploy specialized agents to continuously summarize and refine context in the background, ensuring the 'Context Pack' is always fresh and relevant.
- Knowledge Graph Visualization: Automatically map the relationships between leaves to uncover hidden connections and contradictions in your knowledge base.
- Export to Docs: Turn a conversation tree into a structured document, blog post, or PDF with one click.
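A rough sketch of how that recall could look from the client, assuming a hypothetical `match_messages` Postgres function doing a similarity search over stored embeddings (none of this exists in Leafy yet):

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical sketch of pgvector-backed recall: `match_messages` would be a
// Postgres function ranking stored message embeddings by similarity.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

async function recallRelevantContext(queryEmbedding: number[], limit = 5) {
  const { data, error } = await supabase.rpc("match_messages", {
    query_embedding: queryEmbedding,
    match_count: limit,
  });
  if (error) throw error;
  return data; // messages from any past conversation, ranked by similarity
}
```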
Built With
- fastapi
- google-gemini-api
- postgresql-(supabase)
- python
- railway
- react
- react-query
- sql
- supabase
- tailwindcss
- typescript
- vercel
- vite
- zustand