Inspiration
Council AI was inspired by a simple frustration: most AI products give you a single answer too quickly. For real decisions, that is often the wrong interaction model. High-stakes choices usually need tension, disagreement, pressure-testing, and multiple frames of thought before action. We wanted to build something that feels less like asking a chatbot for advice and more like putting your thinking on trial.
The original idea came from combining three observations. First, the best decisions are often made in environments where different perspectives actively challenge each other. Second, voice creates a very different emotional experience than text alone; hearing disagreement, caution, confidence, and reflection makes advice feel much more real. Third, most multi-agent AI products still look and feel like dashboards or wrappers around LLMs. We wanted a stronger product metaphor: a live council of AI personas that examine a dilemma from different angles and deliver a structured ruling.
That led to Council AI: an adversarial decision-intelligence product where a user submits a dilemma, a panel of AI counselors debates it, and the system returns a verdict, rationale, and next step. The goal is not generic productivity. The goal is clarity under pressure.
What it does
Council AI is an AI-powered decision chamber. A user enters a real question, dilemma, or strategic choice along with optional supporting context. The system then generates a structured hearing where a set of distinct AI counselors respond in role.
Each counselor represents a different decision-making lens. For example:
- a Judge frames the case and maintains structure
- a Prosecutor attacks weak assumptions and surfaces downside risk
- a Defense voice argues for upside and strategic opportunity
- an Operator focuses on execution reality and constraints
- a Psyche voice exposes emotional motives, fear, and self-deception
- a Future voice projects long-term consequences and regret
- a Jury or ruling voice synthesizes the hearing into a final recommendation
Instead of producing one blob of text, the system generates a staged sequence of role-specific turns. Those turns are shown in the chamber UI and can be spoken aloud using distinct ElevenLabs voices. After the hearing, the user is taken to a dedicated results page where the app presents a final verdict, rationale, and a concrete next action.
From a product standpoint, Council AI is designed to do four things well:
- turn ambiguous user prompts into structured decision cases
- generate intelligent multi-perspective responses tailored to the prompt
- convert those responses into a cinematic, voice-first experience
- produce an actionable conclusion rather than a vague summary

How we built it
We built Council AI as a modern web app with a clear multi-page product flow:
- Home page: a startup-style landing page that introduces the product and establishes the tone.
- Input page: a focused page where the user enters their dilemma and supporting context.
- Chamber page: the live hearing experience, where the AI counselors respond in sequence.
- Results page: a separate page that displays the final ruling, rationale, and next action.

Frontend
The frontend was built in Next.js with React and TypeScript. We used a component-based architecture to keep the app modular while iterating quickly on the product flow and visual direction. Styling and layout were handled with Tailwind CSS, which allowed us to prototype and refine the visual system rapidly.
For animations and transitions, we used Framer Motion to orchestrate page transitions, chamber state changes, orbital/radial motion, and sequential speaker emphasis. This was especially important because Council AI is not meant to feel like a traditional form-and-result app. The UI needed to reinforce the idea of a staged hearing.
We also used a small set of UI primitives inspired by shadcn-style components for basic structure where necessary, but much of the chamber experience was customized to avoid a generic “dashboard in dark mode” appearance.
Backend / AI generation
The intelligence layer was implemented with the Gemini API using the official Google GenAI SDK. We used Gemini as the reasoning engine for:
- generating counselor turns
- making those turns role-specific and prompt-specific
- producing the final verdict
- generating the rationale and next action
A key engineering decision was to avoid letting Gemini return freeform prose that the frontend would then try to interpret. Instead, we defined a structured schema for the hearing payload, typically including:
- caseTitle
- context
- turns[], with speakerId, speakerName, role, and line
- verdict
- rationale
- nextAction
We validated this output server-side using Zod, which gave us a predictable contract between the model output and the UI. That was critical for replacing earlier hardcoded hearing flows with real generated data.
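To make the contract concrete, here is a dependency-free sketch of what that validation enforces. The field names come from the schema above; the actual app uses Zod, so this hand-rolled type guard is only an illustration of the same shape, not the production code.

```typescript
// Illustrative sketch of the hearing payload contract (the real app
// validates with Zod; field names match the schema described above).
interface HearingTurn {
  speakerId: string;
  speakerName: string;
  role: string;
  line: string;
}

interface HearingPayload {
  caseTitle: string;
  context: string;
  turns: HearingTurn[];
  verdict: string;
  rationale: string;
  nextAction: string;
}

// Type guard: rejects any model output that does not match the contract,
// so the UI never has to interpret freeform prose.
function isHearingPayload(value: unknown): value is HearingPayload {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  const stringsOk = ["caseTitle", "context", "verdict", "rationale", "nextAction"]
    .every((key) => typeof v[key] === "string");
  if (!stringsOk || !Array.isArray(v.turns)) return false;
  return (v.turns as unknown[]).every((t) => {
    if (typeof t !== "object" || t === null) return false;
    const turn = t as Record<string, unknown>;
    return ["speakerId", "speakerName", "role", "line"]
      .every((key) => typeof turn[key] === "string");
  });
}
```

A payload that fails this check is treated as a generation error rather than rendered, which is what makes the model/UI boundary predictable.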
Voice layer
For speech, we integrated ElevenLabs as a server-side text-to-speech layer. Each counselor is mapped to a distinct voice ID through environment variables. When a counselor becomes active in the chamber, the corresponding generated line is sent to an internal voice route, which proxies the request to ElevenLabs and returns audio for playback.
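The speaker-to-voice mapping can be sketched as a small lookup that resolves a counselor's speakerId to a voice ID held in environment variables. The variable names and the default-voice fallback here are assumptions for illustration, not the app's actual configuration.

```typescript
// Hypothetical mapping from counselor speakerId to the environment
// variable holding that counselor's ElevenLabs voice ID.
// (Variable names are illustrative, not the app's real config.)
const VOICE_ENV_KEYS: Record<string, string> = {
  judge: "ELEVENLABS_VOICE_JUDGE",
  prosecutor: "ELEVENLABS_VOICE_PROSECUTOR",
  defense: "ELEVENLABS_VOICE_DEFENSE",
  operator: "ELEVENLABS_VOICE_OPERATOR",
};

function resolveVoiceId(
  speakerId: string,
  env: Record<string, string | undefined>,
): string {
  const key = VOICE_ENV_KEYS[speakerId];
  const voiceId = key ? env[key] : undefined;
  // An unmapped or unset counselor degrades to a default voice
  // instead of breaking the hearing.
  return voiceId ?? env.ELEVENLABS_VOICE_DEFAULT ?? "default-voice";
}
```

The server-side voice route would use the resolved ID when proxying the counselor's line to ElevenLabs, so keys and voice IDs never reach the client.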
This architecture allowed us to:
- keep API keys secure on the server
- give each counselor a unique vocal identity
- separate text generation from speech synthesis
- preserve the chamber’s sequencing and speaker highlighting in the UI

Data flow
A simplified version of the app’s runtime flow is:
1. The user submits a dilemma on the input page.
2. The frontend sends the prompt and context to a server route.
3. The server calls Gemini and receives a structured hearing payload.
4. The payload is validated and normalized.
5. The chamber page renders the generated turns.
6. As each turn becomes active, the app requests audio from the ElevenLabs route.
7. After the chamber sequence ends, the results page renders the generated verdict, rationale, and next action.
We also implemented fallback behavior so the app remains functional even if Gemini or ElevenLabs are unavailable.
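The general shape of that fallback behavior can be sketched as a small wrapper (the function name and tagging scheme here are illustrative, not taken from the codebase): try the live service, fall back to canned content on failure, and record which path was taken so placeholder content can never silently masquerade as generated data.

```typescript
// Illustrative fallback wrapper: run the live call, use canned content
// on failure, and tag the result so the UI knows whether it is showing
// real generated data or a fallback.
async function withFallback<T>(
  live: () => Promise<T>,
  fallback: T,
): Promise<{ value: T; isLive: boolean }> {
  try {
    return { value: await live(), isLive: true };
  } catch {
    return { value: fallback, isLive: false };
  }
}
```

Tagging results with `isLive` is one way to avoid the audit problem of fallback logic quietly overriding live responses, since any screen rendering a non-live payload can be detected at a glance.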
Challenges we ran into
The biggest challenge was not generating content — it was turning the project into a coherent product experience.
- Avoiding generic AI-app design
One of the hardest parts was getting the chamber to look intentional. It is very easy for multi-agent AI products to become visually generic:
- boxes for every section
- cards for every agent
- visible UI scaffolding everywhere
- dashboard or “AI OS” metaphors
We went through multiple design directions, including agent cards, radial systems, theatrical chamber scenes, shadow-figure layouts, and orbital rings, before finding a direction that better matched the product. This was less of a coding problem and more of a product-design problem.
- Replacing hardcoded flows with real AI generation
Early versions of the chamber were powered by sample arrays and hardcoded demo content so we could iterate on the interface. Once we moved to Gemini, the challenge became making sure the real product flow was actually using generated data instead of silently falling back to old placeholders.
That required auditing:
- where hardcoded turns still lived
- whether the chamber was reading generated payloads
- whether results pages were using generated verdicts or stale values
- whether fallback logic was accidentally overriding live responses
- Getting structured model output to behave reliably
LLMs are flexible, but UI flows require consistency. We needed Gemini to produce output that was:
- role-consistent
- concise enough for spoken delivery
- structurally valid
- aligned with the frontend’s expectations
This meant building strong prompt instructions and validating outputs with a schema instead of trusting raw model text.
- Coordinating Gemini and ElevenLabs
It was not enough to generate good counselor lines. Those lines also needed to work as spoken dialogue. Some model outputs were too long, too repetitive, or too generic when read aloud. We had to think about line length, cadence, counselor personality, and how the text would sound once converted to voice.
- Maintaining flow across pages
Because the product is multi-step, keeping the experience cohesive across home, input, chamber, and results pages was a major challenge. Each page has a different emotional role, and if even one page drifts too far in style or structure, the product starts feeling like a set of unrelated screens instead of one experience.
Accomplishments that we're proud of
We are most proud of the fact that Council AI is not just “chat with multiple agents.” It is a real product concept with a strong interaction model.
- A clear product metaphor
We built a system where the user’s decision is treated as a case, not just a prompt. That simple framing changes the entire experience. It makes the product feel more consequential, more structured, and more memorable.
- Distinct counselor roles
Instead of having multiple agents that all sound similar, we designed the counselors to represent genuinely different perspectives. That makes the hearing more useful and gives the output stronger internal tension.
- End-to-end AI-powered flow
The product does not stop at generated text. It takes a user’s dilemma, produces structured counselor debate, speaks those responses aloud with distinct voices, and ends with a structured ruling. That full-stack AI interaction is something we are proud of.
- Voice as a core interface, not an add-on
We used ElevenLabs not just as a novelty but as a meaningful layer of the experience. The differences between the voices help the counselors feel like distinct presences rather than text labels.
- Stronger-than-average hackathon product thinking
A lot of hackathon projects have good tech but weak product framing. We pushed hard on:
- product identity
- user flow
- information hierarchy
- emotional pacing
- startup-grade positioning
That made Council AI feel closer to a real product than a raw prototype.
What we learned
We learned that in AI products, the interface model matters as much as the model itself.
- One answer is often the wrong format
Users do not always need a single best answer. For difficult questions, they need structured disagreement. That insight shaped the whole product.
- Voice changes perceived intelligence
A line that feels generic in text can feel much more nuanced when spoken in a distinct voice. Conversely, weak writing becomes very obvious when turned into speech. This forced us to think more carefully about brevity, tone, and personality.
- Product cohesion is difficult but essential
It is easy to make a cool chamber page or a stylish landing page. It is much harder to make every page feel like part of one intentional product world. We learned how important it is to define:
- the role of each page
- the emotional arc of the user journey
- what visual language should stay consistent
- what should change by stage
- Fallback logic can quietly undermine AI products
One of the most subtle lessons was that fallback content and hardcoded sample logic can survive much longer than expected during iteration. If you do not audit the real data path carefully, your app can look AI-powered while still behaving like a prototype.
- Constraint improves design
Some of the strongest improvements came when we removed things:
- fewer visible panels
- less interface chrome
- clearer focal points
- less clutter competing with the core chamber interaction
That helped the product feel more serious and more intentional.
What's next for Council AI
The next phase of Council AI is about making it deeper, faster, and more productized.
- Better counselor reasoning
We want to improve the intelligence of the hearing by:
- tuning counselor prompts further
- making the roles more distinct
- making the counselors respond more directly to each other
- increasing realism in the final verdict
- Stronger voice orchestration
Right now each counselor can have a distinct voice, but we want to improve:
- latency
- pacing and turn transitions
- voice continuity
- overall polish of the spoken delivery
- Sharper product design
We want to continue refining the UI so every page feels fully intentional and cohesive, especially the chamber and results pages.
- More hearing modes
Future versions could introduce specialized council configurations such as:
- Founder Council
- Career Council
- Ethics Council
- Personal Decision Council
Each would use different counselor roles and logic.
- Better output artifacts
We want the results page to become more than a verdict screen. It should evolve into a reusable decision artifact that users can revisit, share, and act on.
- Real-world use cases
Longer term, Council AI could become useful for:
- founders testing strategic decisions
- students evaluating career choices
- professionals comparing offers
- teams pressure-testing product bets
- anyone who wants more than generic advice
The broader vision is to make Council AI a new kind of decision product: not an assistant that answers for you, but a system that helps you think better by forcing your ideas to survive opposition.
Built With
- css
- elevenlabs
- gemini
- javascript
- typescript