Inspiration

I wanted to build an AI that could understand group conversations, not just one-on-one prompts. Real discussions are chaotic—people interrupt, go silent, or spiral into conflict. I was fascinated by the question: Can an AI manage social flow and guide multiple humans toward consensus? That idea became the foundation for Consensus, a multiplayer AI-driven discussion platform.

What it does

Consensus is a multiplayer chat environment where multiple users talk to one shared AI facilitator. The AI listens to everyone in real time, detects silence, disagreement, or decision moments, and then interjects intelligently. It maintains structured memory for each room, summarizes viewpoints, visualizes the group’s evolving consensus as a graph, and can trigger polls or questions to help people reach agreement faster.

How I built it

I built the backend in Python (FastAPI) with Socket.IO for real-time WebSocket communication and Redis for pub/sub event propagation and rate limiting. The data layer uses Supabase (PostgreSQL) for persistent storage, handling all rooms, messages, votes, and consensus graph nodes/edges.
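Per-user rate limiting in Redis is commonly done with the INCR + EXPIRE fixed-window pattern; here is a stdlib-only sketch of that same logic, with an in-memory dict standing in for Redis (names and limits are illustrative):

```python
class FixedWindowRateLimiter:
    """Fixed-window rate limiter mirroring the Redis INCR + EXPIRE pattern.

    An in-memory dict stands in for Redis here; in production the counters
    would live in Redis so every backend worker shares the same limits.
    """

    def __init__(self, max_events: int, window_s: float):
        self.max_events = max_events
        self.window_s = window_s
        # key -> (window_start_time, count)
        self._counters: dict[str, tuple[float, int]] = {}

    def allow(self, key: str, now: float) -> bool:
        start, count = self._counters.get(key, (now, 0))
        if now - start >= self.window_s:       # window expired -> reset (EXPIRE)
            start, count = now, 0
        if count >= self.max_events:
            return False                       # over the limit for this window
        self._counters[key] = (start, count + 1)  # INCR
        return True
```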

The AI orchestration layer runs through Letta, which manages memory, reasoning, and tool calls. I registered custom Letta tools (via Pydantic schemas) for functions like update_graph, emit_interjection, and open_poll. Letta calls the JanitorAI JLLM endpoint for completions, which provides up to 25k context length and supports tool-augmented reasoning.
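The tool-registration pattern can be sketched roughly as follows. This is stdlib-only for illustration; the actual project uses Pydantic models and Letta's own registration API, so the registry, schema shape, and `open_poll` signature here are assumptions:

```python
import json

# tool name -> {"schema": JSON-schema-like spec, "handler": callable}
TOOLS: dict[str, dict] = {}


def register_tool(name: str, schema: dict, handler) -> None:
    """Register a tool the LLM is allowed to call, with its argument spec."""
    TOOLS[name] = {"schema": schema, "handler": handler}


def call_tool(name: str, raw_args: str):
    """Validate the LLM's JSON arguments against the schema, then dispatch."""
    tool = TOOLS[name]
    args = json.loads(raw_args)
    missing = [k for k in tool["schema"]["required"] if k not in args]
    if missing:
        # This is the failure mode described under Challenges: a bad argument
        # model produces an invalid tool call instead of a facilitation action.
        raise ValueError(f"invalid tool call: missing {missing}")
    return tool["handler"](**args)


# Illustrative handler mirroring the open_poll tool named above.
def open_poll(question: str, options: list):
    return {"poll": question, "options": options}


register_tool("open_poll", {"required": ["question", "options"]}, open_poll)
```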

The frontend is built in Next.js (React + TypeScript) with TailwindCSS for styling and Recharts/D3.js for the live consensus graph visualization. I used Stream Chat for user presence and typing indicators, and Supabase Auth for authentication.

For development and demo tunneling, I used ngrok to expose the local backend to public URLs. Everything runs locally with containerized services through Docker Compose, and the app is designed to be deployable to Railway or Vercel.
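An illustrative docker-compose.yml for this kind of stack (service names, ports, and environment variables are assumptions, not the project's actual file):

```yaml
services:
  backend:
    build: .
    ports:
      - "8000:8000"        # FastAPI + Socket.IO
    environment:
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```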

Challenges I ran into

I struggled to find the right interjection logic—deciding when the AI should speak without being intrusive. Building context windows that stay coherent across multiple users while staying under JLLM’s token limits required dynamic truncation and rolling summaries. Designing Letta tools that correctly exposed schemas to the LLM was also tricky, since improper argument models would cause invalid tool calls. Finally, syncing message streams, poll states, and graph updates through Socket.IO and Redis required careful event choreography.
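The truncation-plus-rolling-summary idea can be sketched like this (counting tokens as whitespace-split words is a stand-in for a real tokenizer, and the budget is illustrative):

```python
def build_context(summary: str, messages: list[str], budget: int) -> list[str]:
    """Keep the rolling summary plus as many recent messages as fit the budget.

    Tokens are approximated as whitespace-split words; a real system would use
    the model's tokenizer and fold evicted messages back into the summary.
    """
    cost = len(summary.split())
    kept: list[str] = []
    for msg in reversed(messages):          # walk newest-first
        msg_cost = len(msg.split())
        if cost + msg_cost > budget:
            break                           # older messages get summarized away
        kept.append(msg)
        cost += msg_cost
    kept.reverse()                          # restore chronological order
    return [summary] + kept
```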

Accomplishments that I'm proud of

I built an end-to-end multiplayer AI system with real-time synchronization, persistent memory, and contextual reasoning. The AI successfully detects conversation lulls and conflicts and reacts with appropriate facilitation. The live consensus graph dynamically maps how each user’s stance evolves—something I hadn’t seen done before in hackathon projects.
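One simple way to model the stance graph's nodes and edges (the actual project persists these in Postgres; this in-memory version and its stance scale are illustrative):

```python
class ConsensusGraph:
    """In-memory stance graph: users and claims are nodes, stances are edges."""

    def __init__(self):
        self.nodes: set[str] = set()
        # (user, claim) -> stance in [-1.0, 1.0]; overwriting an edge is how
        # a user's evolving position shows up in the visualization.
        self.edges: dict[tuple[str, str], float] = {}

    def set_stance(self, user: str, claim: str, stance: float) -> None:
        self.nodes.update((user, claim))
        self.edges[(user, claim)] = stance

    def agreement(self, claim: str) -> float:
        """Mean stance on a claim; 1.0 means unanimous agreement."""
        stances = [s for (_, c), s in self.edges.items() if c == claim]
        return sum(stances) / len(stances) if stances else 0.0
```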

What I learned

I learned how to integrate LLM reasoning frameworks (Letta) with custom tool schemas and real-time infrastructure. I deepened my understanding of event-driven design using FastAPI, Redis, and Socket.IO, and how to combine structured Postgres data with generative AI reasoning. I also learned practical prompting strategies for multi-user context management, and how to orchestrate multiple AI calls efficiently.

What's next

Next, I plan to:

Add voice-based input and output using WebRTC and Whisper for speech processing.

Use Letta’s long-term memory graphs to persist conversation learnings across sessions.

Introduce multiple AI personas (mediator, skeptic, summarizer) that collaborate live.

Deploy on Vercel + Railway, integrate analytics via OpenTelemetry, and open-source the Letta + JLLM orchestration layer as a reusable multiplayer AI framework.


Built With

python, fastapi, socket.io, redis, supabase, postgresql, letta, next.js, react, typescript, tailwindcss, d3.js, recharts, stream-chat, ngrok, docker

