Inspiration
We were inspired by a gap we noticed in existing embeddable AI agents offered by companies like Slack and Zoom: these agents fall short in the natural-language conversational ability that would otherwise improve productivity and overall user enjoyment. They also often struggle when interacting with multiple users at once, so the experience feels like a locked-in, one-on-one questionnaire session rather than a natural extension of the wider conversation.
What it does
At its core, Mixer AI is a technology demo showing that a single LLM can maintain contextual understanding of a wider conversation between multiple users. Mixer AI provides a platform where multiple users interact with each other and with a shared, contextually aware AI agent, fostering more efficient workflows and social communication. From support agents, tutoring, and brainstorming to collaborative storytelling and roleplay adventures, our customizable AI system adapts to your scenario, creating a versatile space where every session is unique.
How we built it
- React/React-router for the front-end
- Sockets for efficient communication between all users and AI agents
- Express.js for our back-end
- RAG pipeline for maintaining user detail consistency
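To give a feel for how a single agent can follow a multi-user socket conversation, here is a minimal sketch of merging attributed messages into one prompt. This is illustrative only: the function and message shape (`buildTranscript`, `{ user, text }`) are assumptions, not the actual MixerAI code.

```javascript
// Each socket message carries its sender's id so the single shared LLM
// sees an attributed transcript instead of an anonymous stream of text.
// (Hypothetical sketch; names are not from the real MixerAI codebase.)
function buildTranscript(messages) {
  return messages
    .map((m) => `[${m.user}]: ${m.text}`)
    .join("\n");
}

const messages = [
  { user: "alice", text: "Can we brainstorm app names?" },
  { user: "bob", text: "Something short and punchy." },
];

console.log(buildTranscript(messages));
// [alice]: Can we brainstorm app names?
// [bob]: Something short and punchy.
```

Tagging every line with the speaker is the simplest way to let one model keep per-user context straight before any retrieval is layered on top.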
Challenges we ran into
Our biggest challenge was orchestrating a conversation between multiple users and the same LLM agent while maintaining a coherent, real-time understanding of each user's part of the conversation. We found that basic LLMs struggled with this task, often hallucinating and mixing details between distinct users. We overcame this by augmenting the LLM's responses with a RAG-like user-detail fetching system, which lets the agent consistently keep track of each user's role and participation in the wider conversation.
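The idea above can be sketched as a per-user detail store whose relevant facts are fetched and prepended to the prompt. This is a hedged illustration of the general pattern, not the actual system: real RAG would retrieve by embedding similarity, while plain keyword overlap stands in here to keep the sketch self-contained, and all names (`UserDetailStore`, `augmentPrompt`) are invented.

```javascript
// Hypothetical sketch of RAG-like per-user detail fetching:
// store facts keyed by user, retrieve only those relevant to the
// current message, and prepend them so the LLM does not mix users up.
class UserDetailStore {
  constructor() {
    this.details = new Map(); // userId -> array of detail strings
  }

  add(userId, detail) {
    if (!this.details.has(userId)) this.details.set(userId, []);
    this.details.get(userId).push(detail);
  }

  // Return the user's details that share at least one word with the query.
  // (A stand-in for embedding-based similarity search.)
  fetch(userId, query) {
    const words = new Set(query.toLowerCase().split(/\W+/));
    return (this.details.get(userId) || []).filter((d) =>
      d.toLowerCase().split(/\W+/).some((w) => w && words.has(w))
    );
  }
}

// Build the augmented prompt for one user's message.
function augmentPrompt(store, userId, message) {
  const facts = store.fetch(userId, message);
  const context = facts.length
    ? `Known about ${userId}: ${facts.join("; ")}\n`
    : "";
  return `${context}[${userId}]: ${message}`;
}

const store = new UserDetailStore();
store.add("alice", "alice is writing the fantasy storyline");
store.add("bob", "bob prefers sci-fi settings");

console.log(augmentPrompt(store, "alice", "continue my storyline"));
// Known about alice: alice is writing the fantasy storyline
// [alice]: continue my storyline
```

Scoping retrieval by user id is what keeps one user's details from leaking into another's context, which is exactly the hallucination mode described above.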
Accomplishments that we're proud of
Building an efficient system that lets the AI agent maintain memory and contextual awareness of all users simultaneously, so multi-user conversations flow naturally.
What we learned
We very quickly learned the limits of LLMs when it comes to juggling multiple discrete conversation threads at the same time. Issues like these are commonly solved by splitting tasks across multiple agents, but those methods can be costly in token use, so finding a way to help a single model efficiently manage several tasks at once is significant.
What's next for Mixer AI
In the future, we plan to incrementally improve our user-detail fetching system to collect more granular details about users over time, such as specific interests, events, etc.
Built With
- cursor
- fishai
- janitorai
- react