🧠 Inspiration
Brainstorming is often limited by perspective, cognitive biases, or the lack of instant feedback. We wanted to reimagine collaboration by introducing multiple AI agents, each simulating a unique role—such as a creative thinker, a skeptic, a researcher, or a strategist. The idea: empower solo or group ideation with a virtual panel of intelligent minds working together on a shared canvas, just like human collaborators would in a productive whiteboard session.
💡 What it does
Discussion Companion is a multi-agent system that helps you brainstorm better. On a live canvas, each AI agent plays a distinct role and contributes ideas, questions, critiques, or suggestions in real time based on the topic or input. You can:
- Drop a prompt or idea onto the canvas
- Watch as multiple agents independently respond, argue, or build upon each other’s thoughts
- Guide the flow by tagging inputs, upvoting suggestions, or reshaping the discussion
It's like having a brainstorming squad that never runs out of energy or creativity.
🛠️ How we built it
- Frontend: Built a shared collaborative canvas using [React / Next.js / or your stack] with drag-and-drop and real-time updates.
- Backend: Managed the agents and session logic via a [FastAPI / Node.js / etc.] service.
- LLMs: Used OpenAI's GPT models to implement distinct "agent personas" with system prompts tailored for roles (e.g., Optimist, Critic, Analyst, Innovator).
- Agent Coordination: Designed a light agent-orchestration layer to simulate dialogue or debate between agents.
- WebSocket / Polling: Enabled live feedback and discussion updates between the agents and the user.
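The persona layer above boils down to pairing each role with its own system prompt and feeding every agent the running transcript so it can react to, rather than repeat, earlier points. Here is a minimal sketch in Python; the role wording, the `PERSONAS` dict, and the `build_messages` helper are illustrative stand-ins, not our exact production prompts, and the resulting messages list is what you would pass to an OpenAI-style chat completion call.

```python
# Persona definitions as system prompts. Role names and wording are
# illustrative examples, not the exact prompts we shipped.
PERSONAS = {
    "Optimist": "You champion ideas: highlight strengths and possibilities.",
    "Critic": "You stress-test ideas: surface risks, gaps, and weak assumptions.",
    "Analyst": "You ground ideas: ask for evidence, data, and concrete metrics.",
    "Innovator": "You extend ideas: propose unexpected variations and combinations.",
}

def build_messages(role, topic, transcript):
    """Assemble the chat messages for one agent's turn.

    The transcript (previous agents' contributions as (role, text) pairs)
    is included in the user message so each persona can build on or
    challenge earlier points instead of repeating them.
    """
    history = "\n".join(f"{who}: {text}" for who, text in transcript)
    return [
        {"role": "system", "content": PERSONAS[role]},
        {
            "role": "user",
            "content": (
                f"Topic: {topic}\n\n"
                f"Discussion so far:\n{history or '(none yet)'}\n\n"
                f"Add your contribution as the {role}."
            ),
        },
    ]
```

In the real service, the orchestration layer calls `build_messages` once per agent per round and sends the result to the LLM with that agent's persona, which is what makes the agents sound distinct while still sharing one conversation.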
🚧 Challenges we ran into
- Balancing agent autonomy vs relevance—we had to tune prompts and timing so agents wouldn't drift too far off-topic or overwhelm the canvas.
- Managing multi-agent turn-taking and collisions in real-time on a shared space.
- Preventing redundancy—ensuring agents build on or challenge each other’s points, not repeat them.
- Latency in multi-agent responses when coordinating across asynchronous LLM calls.
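The latency problem in the last bullet came from awaiting each agent's LLM call in sequence. The fix was to fan the calls out concurrently, so a round of N agents costs roughly one round-trip instead of N. A minimal sketch using `asyncio.gather`; `call_agent` is a hypothetical stand-in that simulates network latency with `asyncio.sleep` where a real awaitable LLM request would go.

```python
import asyncio

async def call_agent(role: str, prompt: str) -> tuple[str, str]:
    # Placeholder for an async LLM request (e.g. an awaitable chat
    # completion call); here we only simulate the network round-trip.
    await asyncio.sleep(0.05)
    return role, f"{role}'s take on {prompt!r}"

async def fan_out(roles: list[str], prompt: str) -> list[tuple[str, str]]:
    """Fire all agent calls concurrently; gather preserves input order,
    so results line up with the roles list for rendering on the canvas."""
    return await asyncio.gather(*(call_agent(r, prompt) for r in roles))

results = asyncio.run(fan_out(["Optimist", "Critic", "Analyst"], "new feature idea"))
```

Turn-taking then becomes a scheduling decision on top of this: gather a full round in parallel, render the responses, and feed the updated transcript into the next round.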
🏆 Accomplishments that we're proud of
- Created an intuitive, interactive canvas-based brainstorming experience with live multi-agent interaction.
- Gave each agent a clear, unique voice that added real value to the discussion.
- Made the tool genuinely helpful for unblocking creative thinking, not just a demo.
- Learned how to simulate multi-agent collaboration in a meaningful and structured way.
📚 What we learned
- Prompt engineering is only half the battle—orchestration logic is critical to making agents useful and non-redundant.
- Role diversity among agents helps users see blind spots and develop more well-rounded ideas.
- Visual interaction (canvas) boosts engagement and retention over plain text threads.
- Users want both structured exploration (e.g., SWOT, Six Thinking Hats) and freeform modes depending on their goals.
🚀 What's next for Discussion Companion
- Add voice input and summarization to make sessions more fluid and accessible.
- Enable agent customization so users can define their own team (e.g., "UX expert", "Product Manager", etc.).
- Add memory + retrieval so agents can refer back to past discussions or research.
- Integrate with tools like Miro, Notion, or Google Docs for seamless workflow adoption.
- Explore academic and corporate use cases—ideation, research planning, product strategy, and more.