Inspiration
We built BrainWave AI because independent creators have the same creative ambition as large teams, but not the same access to support, tooling, or feedback loops. A lot of creative software is powerful, but it can also be intimidating, fragmented, and full of technical friction. We wanted to build something that helps creators stay in control of their vision while removing the barriers that slow them down.
The biggest idea behind BrainWave is that AI should enhance creativity, not replace it. Instead of making the art for the user, BrainWave helps them plan their content, get unstuck while editing, and understand how audiences may respond to the final result.
What it does
BrainWave AI is a voice-first creative copilot for digital content creators.
It supports creators across three major stages of the workflow:
Planning: Users can brainstorm ideas, scripts, hooks, tone, structure, and content direction with an AI agent. They can also provide reference context to help guide the planning process.
Editing Help: If a creator gets stuck in software like Premiere Pro, CapCut, DaVinci Resolve, iMovie, or other creative tools, BrainWave provides real-time coaching to help them accomplish the effect or result they are imagining.
Audience Analysis: After finishing a piece of content, users can run it through our analysis flow powered by TRIBEv2. BrainWave then adds an interpretation layer on top of the raw model output to explain what the data may mean in a useful, human-readable way, giving creators insight into how their content could land with viewers.
How we built it
We built BrainWave as a web platform with a dashboard-based workflow focused on creators.
On the frontend, we created a landing page and dashboard experience with dedicated tabs for planning, editing help, and analysis. We used voice-first AI interactions to make the experience more accessible and natural for creators who want to think out loud rather than type everything manually.
For the planning and editing-help experiences, we integrated ElevenLabs agents to support live voice-to-voice interaction. This allowed us to build an AI experience that feels more like a creative coach than a standard chatbot.
For the analysis side, we self-hosted TRIBEv2 and built an interpretation layer on top of its output. Rather than only surfacing raw signals or technical model responses, we focused on translating those results into insights creators can actually use to improve their content.
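The interpretation layer described above can be sketched as a small mapping step in TypeScript. Everything below is illustrative, not our actual implementation: the write-up does not show TRIBEv2's real output format, so the `RawSignal` shape, the signal names, and the score thresholds are all assumptions.

```typescript
// Hypothetical sketch of an interpretation layer over raw model output.
// Assumption: each signal is a named score normalized to the range [0, 1].

interface RawSignal {
  name: string;  // e.g. "attention" or "pacing" (illustrative labels)
  score: number; // assumed normalized to [0, 1]
}

interface Insight {
  signal: string;
  level: "low" | "medium" | "high";
  suggestion: string;
}

// Collapse a raw numeric score into a coarse, creator-friendly level.
// The cut points are arbitrary placeholders.
function toLevel(score: number): "low" | "medium" | "high" {
  if (score < 0.33) return "low";
  if (score < 0.66) return "medium";
  return "high";
}

// Translate raw model signals into readable, actionable insights.
export function interpret(signals: RawSignal[]): Insight[] {
  return signals.map(({ name, score }) => {
    const level = toLevel(score);
    const suggestion =
      level === "low"
        ? `Predicted ${name} is weak; consider reworking this section.`
        : level === "medium"
          ? `Predicted ${name} is moderate; small edits could lift it further.`
          : `Predicted ${name} is strong; keep this approach.`;
    return { signal: name, level, suggestion };
  });
}
```

The point of this shape is that the frontend never has to reason about raw scores: for example, `interpret([{ name: "attention", score: 0.8 }])` yields a "high" insight with a ready-to-display suggestion string.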
Challenges we ran into
One of our biggest challenges was hosting TRIBEv2 ourselves. This was a completely new experience for us, and getting that part of the system working reliably pushed us outside of our comfort zone. It required us to learn quickly, debug unfamiliar infrastructure issues, and think carefully about how to connect research-style model outputs to a user-facing product.
Another major challenge was designing the interpretation layer. Raw TRIBE output is not automatically useful to the average creator, so we had to figure out how to translate technical data into something meaningful, actionable, and easy to understand. That meant not only handling the model output itself, but also deciding how to present it in a way that helps users improve their work.
We also had to carefully balance AI assistance with creative freedom. It was important to us that BrainWave support the user’s vision without making the process feel rigid or overly automated.
Accomplishments that we're proud of
We are proud that we built BrainWave around a clear principle: help creators create, don’t create for them.
We are especially proud of:
- Building a voice-first workflow that makes creative planning and software help feel natural and accessible
- Creating a product that supports creators across multiple stages, not just one isolated task
- Successfully taking on the challenge of hosting TRIBEv2 ourselves
- Building an interpretation layer that turns technical model output into practical audience insight
- Designing a product that can help smaller creators access workflows that are usually only available to larger teams or companies
What we learned
This project taught us a lot about how difficult it is to turn powerful AI models into a polished user experience. A model alone is not enough. The real value comes from the product layer around it: the workflow, the interface, the interpretation, and the way the user interacts with the system.
We also learned a lot about infrastructure, deployment, debugging live AI systems, and multimodal workflows. Hosting TRIBEv2 ourselves gave us hands-on experience with challenges we had never tackled before.
Most importantly, we learned that creators do not just need “more AI.” They need AI that understands when to guide, when to step back, and how to keep them in control of the creative process.
What's next for BrainWave AI
Our next step is to make BrainWave more complete and robust as a creator platform. We want to:
- Expand the analysis experience and make the TRIBEv2 interpretation layer even more insightful
- Improve multimodal planning so users can bring in more reference media and context
- Add stronger editing-help capabilities with deeper software-specific guidance
- Build out history and project memory so creators can return to past sessions and iterate more easily
- Continue improving accessibility through voice-first interaction and more flexible input methods
Long term, we want BrainWave to become a true creative copilot that helps independent creators move from idea to execution to audience insight, all while staying in control of their own vision.
Built With
- agents
- api
- css
- elevenlabs
- huggingface
- next.js
- react
- tailwind
- tribev2
- typescript
- vercel
- voice-to-voice