Lyra AI
Inspiration

The idea for Lyra AI came from something personal. We saw people around us struggling with anxiety, loneliness, and the weight of thoughts they didn’t know how to share. Even those who wanted help often couldn’t afford therapy or didn’t feel comfortable opening up to a stranger in a clinical setting.
We realized that what people needed wasn’t necessarily a diagnosis—they just needed someone to talk to. Someone who would listen without judgment, at any hour, in any place. That’s when we began imagining a voice-to-voice therapy companion that could feel human, comforting, and present.
That became Lyra AI.
What it does

Lyra AI is a voice-to-voice mental health companion that allows people to speak openly and receive compassionate, thoughtful responses—without typing or navigating menus. It listens actively, responds in real time, and creates a space where users can process emotions out loud, just as they would with a trusted friend or therapist.
The goal isn’t to replace therapy, but to fill a gap—those quiet moments when someone just needs to be heard.
How we built it

We started by experimenting with a basic prototype using Bolt.new, which allowed us to quickly test our voice interface ideas and validate that people actually wanted to talk, not type. That early feedback shaped everything.
From there, we moved into a dedicated development environment where we integrated real-time speech-to-text, emotional tone analysis, and conversational AI models designed to feel supportive, not robotic. We paid special attention to the pacing, tone, and natural flow of conversation. Every decision—from word choice to silence between responses—was made to make Lyra feel more human and less like a script.
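To make the shape of that pipeline concrete, here is a simplified sketch of a single voice turn using the stack listed below: Claude for the conversational reply and ElevenLabs for speech. The transcribe() helper, model ID, and voice ID are placeholders, not our exact production code.

```python
# One voice-to-voice turn: transcribe -> generate a reply -> synthesize speech.
# Placeholders: transcribe() stands in for any speech-to-text service, and the
# model/voice IDs below are examples, not our production configuration.
import os
import anthropic
import requests

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a warm, attentive listener. Reply briefly, reflect feelings "
    "back, and never diagnose or prescribe."
)

def transcribe(audio: bytes) -> str:
    """Placeholder: swap in the speech-to-text service of your choice."""
    raise NotImplementedError

def respond(user_text: str, history: list[dict]) -> str:
    """Generate a short, supportive reply with Claude, keeping conversation state."""
    history.append({"role": "user", "content": user_text})
    reply = claude.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=history,
    )
    text = reply.content[0].text
    history.append({"role": "assistant", "content": text})
    return text

def speak(text: str, voice_id: str) -> bytes:
    """Synthesize the reply with ElevenLabs' text-to-speech endpoint."""
    r = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={"text": text, "model_id": "eleven_turbo_v2"},
    )
    r.raise_for_status()
    return r.content  # audio bytes, ready to play back to the user
```

The loop itself stays this simple; the real work went into the system prompt, the pacing, and the pauses around it.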
Challenges we ran into

Building a believable and supportive voice conversation isn’t easy. One major challenge was reducing latency so conversations felt natural. Another was training the AI to pick up on subtle emotional cues and respond with the right balance of empathy and clarity.
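On the latency side, one standard technique is to start speaking before the full reply is generated: stream the model's tokens and hand each completed sentence to text-to-speech right away. A rough sketch, reusing the placeholders from the pipeline above:

```python
# Latency sketch: stream the model's tokens and flush each completed sentence
# to text-to-speech immediately, instead of waiting for the whole reply.
import anthropic

claude = anthropic.Anthropic()
SYSTEM_PROMPT = "You are a warm, attentive listener. Reply briefly."

def respond_streaming(user_text: str, on_sentence) -> str:
    """Invoke on_sentence(s) for each finished sentence as tokens stream in."""
    buffer, full_reply = "", ""
    with claude.messages.stream(
        model="claude-3-5-sonnet-latest",
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": user_text}],
    ) as stream:
        for chunk in stream.text_stream:
            buffer += chunk
            full_reply += chunk
            # Flush up to the last sentence boundary seen so far, so audio
            # playback can begin while the rest of the reply is still coming.
            cut = max(buffer.rfind(p) for p in ".!?") + 1
            if cut > 0:
                on_sentence(buffer[:cut].strip())
                buffer = buffer[cut:]
    if buffer.strip():
        on_sentence(buffer.strip())  # flush whatever trails the last boundary
    return full_reply
```

Each sentence can then be piped straight into speak() from the earlier sketch, so the first audible words arrive while the model is still finishing its thought.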
We also had to carefully design for privacy and trust. If users didn’t feel safe speaking to Lyra, the whole idea would fall apart. It took a lot of iteration to get that balance right.
Accomplishments that we're proud of

We're proud that we turned an idea into a real experience that helps people feel heard. Seeing users naturally open up during voice sessions—even during our early tests—was incredibly rewarding.
We’ve also received messages from people who said Lyra gave them a space they didn’t know they needed. That’s been our biggest accomplishment: creating something that makes a real emotional difference.
What we learned

Voice is powerful. It creates trust in a way text can’t. We also learned that empathy isn’t just about what you say—it’s about how you say it, when you pause, and whether the other person feels you’re truly listening.
We learned to design not just for function, but for feeling. And we learned that building something for mental health means moving slowly, listening closely, and testing everything with care.
What’s next for Lyra AI

We’re working on several new features and expansions. A dedicated mobile app is in the works, making Lyra even more accessible on the go. We’re also building emotional memory—so Lyra can remember past sessions (with consent) to create more meaningful continuity.
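Since Supabase already backs the app, consent-gated memory can start as a single table. The sketch below is illustrative only: the session_memories table, its columns, and the helper names are placeholders for a design still in progress.

```python
# Sketch of consent-gated session memory on Supabase. The session_memories
# table and its columns are illustrative placeholders, not a final schema:
# (user_id uuid, summary text, mood text, created_at timestamptz default now()).
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def save_session_summary(user_id: str, consented: bool, summary: str, mood: str) -> None:
    """Persist a short session summary, but only if the user opted in."""
    if not consented:
        return  # no consent, nothing leaves the session
    supabase.table("session_memories").insert(
        {"user_id": user_id, "summary": summary, "mood": mood}
    ).execute()

def recall_recent(user_id: str, limit: int = 3) -> list[dict]:
    """Fetch recent summaries so the next conversation can pick up the thread."""
    result = (
        supabase.table("session_memories")
        .select("summary, mood, created_at")
        .eq("user_id", user_id)
        .order("created_at", desc=True)
        .limit(limit)
        .execute()
    )
    return result.data
```

The idea is to fold recalled summaries into the next session's prompt, with users able to review or delete them at any time.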
Other plans include multilingual voice support, offline access for low-connectivity areas, and wearable integrations that could allow Lyra to respond to real-time emotional or physical cues.
We’re also exploring partnerships with therapists and mental health professionals to make sure Lyra continues to be safe, responsible, and useful as a bridge—not a replacement—for care.
Lyra AI started as a response to silence. And we’re just getting started.
Built With
- bolt
- claude
- elevenlabs
- supabase