Inspiration
I'm the youngest in my family, the only son, born years after my two older sisters. By the time I was growing up, my sisters were already starting families of their own. I watched them give everything to their children, my two nephews, whom I absolutely adore, but I also watched them go through something nobody in our family fully understood.
After their pregnancies, both my sisters quietly struggled. One would cry for no reason she could explain. The other stopped sleeping, stopped eating, stopped being herself, and nobody knew what to say. Our family loved them, but love without understanding isn't always enough.
That experience stuck with me. I started reading, not as a student, but as a brother who wanted to get it. I learned that postpartum depression isn't just "feeling sad." It is a clinically recognized medical condition that affects up to 1 in 5 mothers, yet the majority suffer in silence because society has conditioned women to suppress their pain, smile through the exhaustion, and treat motherhood as something that should come naturally.
When this hackathon came up, I knew exactly what I wanted to build. Research overwhelmingly shows that the single most impactful intervention for postpartum mental health is having someone to talk to, someone who listens, validates, and doesn't judge. That one insight became the foundation of Bloom Today.
What it does
Bloom Today is a postpartum emotional support platform that gives every new mother a personalized AI companion she can talk to, anytime, about anything, through real-time voice and video calls. It is explicitly not a therapy or therapist app: the companion offers no clinical advice of its own; the only clinical guidance it passes along is what the mother's therapist or doctor has explicitly configured.
But Bloom Today isn't just another chatbot. Here's what makes it different:
Full Personalization: A mother can name her companion, choose its voice, customize its personality, and give it specific instructions. She shapes who she talks to, not the other way around. It's her companion, on her terms.
Real-Time Voice & Video Calls: Using the Gemini Live API with native audio streaming, mothers can have natural, conversational phone calls with their companion. It greets them by name. It remembers past conversations. It listens with warmth and responds with care. The companion appears as a 3D animated avatar (powered by Three.js and the TalkingHead library) with lip-sync and contextual gestures, nodding when she shares something heavy, smiling when she laughs.
Therapist Integration: Because postpartum depression is a medical condition, Bloom Today lets the mother connect her therapist through a secure key. The therapist can send instructions directly to the AI companion (shaping how it responds to her), view a dedicated dashboard with conversation analysis powered by Gemini's structured output, and send a note to the mother through the app.
Trusted Person System: Many women don't tell their partners, mothers, or friends what they're really going through, because they've been conditioned to suppress their emotions and go through things in silence. With Bloom Today, a mother can connect a trusted person via a shareable key. That trusted person gets their own dashboard, written in plain, everyday language, with a status label, a summary of how she's doing, and gentle, actionable suggestions generated by Gemini AI, like "Bring her favorite tea tonight" or "Ask her about the baby's feeding, she mentioned it was stressful."
Intelligent Insights Dashboard: Every conversation is analyzed by Gemini to extract 8 signal scores (mood, energy, sleep, stress, support, self-kindness, coping, bonding), identify themes, track mood direction, and generate personalized quick tips and YouTube resource recommendations, all without a single form or questionnaire.
Bloom Score & Streak: A gamified engagement score that measures how consistently a mother is seeking help, not her mental state. The score is weighted: 55% from showing up (weekly check-ins and streak), 30% from support connection, and 15% from self-kindness. If she misses a day, her streak breaks and the score gently decays, nudging her toward consistent engagement, because the act of reaching out is itself an achievement worth celebrating.
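The weighting above can be sketched roughly like this. The function, field names, and decay curve are illustrative assumptions, not the real implementation:

```javascript
// Illustrative sketch of the Bloom Score weighting (55/30/15).
// All names and the decay rate are assumptions, not the production code.
function bloomScore({ checkInsThisWeek, streakDays, hasSupportConnected,
                      selfKindness, daysSinceLastCall }) {
  // 55% "showing up": weekly check-ins (up to 7) plus streak length (capped)
  const showingUp = Math.min(checkInsThisWeek / 7, 1) * 0.5 +
                    Math.min(streakDays / 14, 1) * 0.5;
  // 30% support connection: a therapist or trusted person is linked
  const support = hasSupportConnected ? 1 : 0;
  // 15% self-kindness: taken from the conversation signal scores (0..1)
  const kindness = Math.max(0, Math.min(selfKindness, 1));
  let score = 100 * (0.55 * showingUp + 0.30 * support + 0.15 * kindness);
  // Gentle decay for missed days, so the score encourages consistency
  // without punishing harshly.
  score *= Math.pow(0.95, Math.max(0, daysSinceLastCall - 1));
  return Math.round(score);
}
```

The key design point survives even in this sketch: the score rewards the act of showing up, and support connection outweighs any single mood signal.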
How I built it
Bloom Today is a full-stack JavaScript application I built as a solo developer. Here's the architecture:
| Layer | Technology |
|---|---|
| Frontend | React 18 + Vite 7 (mobile-first SPA) |
| Styling | Tailwind CSS v4 + Radix UI primitives |
| 3D Avatar | Three.js + TalkingHead library (viseme-based lip-sync, gesture mapping) |
| Real-Time Voice/Video | Gemini Live API (@google/genai) via custom WebSocket client |
| Backend | Express.js v5 (Node.js 20) |
| AI Agent | Google Agent Development Kit (@google/adk) with LlmAgent + InMemoryRunner |
| Analysis Engine | Gemini 2.5 Pro with structured JSON output (responseSchema) |
| Database | PostgreSQL on Neon (serverless) |
| Auth | Google OAuth 2.0 + JWT |
| Deployment | Docker to Google Cloud Run via Cloud Build + Artifact Registry |
| Secrets | Google Secret Manager |
The Companion Pipeline
Onboarding: The mother names her companion, chooses a voice (e.g., Aoede), optionally writes personality instructions, and selects a 3D avatar.
Call Initiation: The frontend opens a WebSocket to the Gemini Live API with a rich system instruction that includes the companion name, personality, therapist guidance, and personal memories from past conversations.
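Assembling that system instruction can be sketched as simple string composition before the WebSocket opens. All names here (companion, mother, field names) are illustrative; the real prompt is more elaborate:

```javascript
// Illustrative sketch of building the Live API system instruction from
// the companion profile, therapist guidance, and stored memories.
function buildSystemInstruction({ companionName, motherName, personality,
                                  therapistGuidance, memories }) {
  const parts = [
    `You are ${companionName}, a warm, supportive companion for ${motherName}.`,
    `You are not a therapist and you never give clinical advice.`,
  ];
  if (personality) parts.push(`Personality instructions from ${motherName}: ${personality}`);
  if (therapistGuidance) parts.push(`Guidance from her therapist: ${therapistGuidance}`);
  if (memories && memories.length) {
    parts.push(`Things she has shared in past calls:\n- ${memories.join('\n- ')}`);
  }
  return parts.join('\n\n');
}
```

The resulting string is passed as the `systemInstruction` in the Live API connection config, so every call starts already knowing who the companion is and what it should remember.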
Audio Streaming: A custom AudioRecorder captures 16kHz PCM from the mic, streams it as base64 chunks via sendRealtimeInput(). Incoming audio is decoded by AudioStreamer and played through the Web Audio API.
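The core of that capture path is converting Web Audio Float32 samples into base64-encoded 16-bit PCM. A minimal sketch (using Node's `Buffer` for clarity; in the browser the same logic runs over an `ArrayBuffer`/`DataView`):

```javascript
// Sketch of the Float32 -> 16-bit PCM -> base64 conversion that the
// custom AudioRecorder performs before sendRealtimeInput().
function floatTo16BitPcmBase64(float32Samples) {
  const pcm = Buffer.alloc(float32Samples.length * 2);
  for (let i = 0; i < float32Samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range
    const s = Math.max(-1, Math.min(1, float32Samples[i]));
    pcm.writeInt16LE(Math.round(s < 0 ? s * 0x8000 : s * 0x7fff), i * 2);
  }
  return pcm.toString('base64');
}
```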
3D Avatar: The TalkingHead instance renders a Three.js ReadyPlayerMe avatar. A custom procedural viseme engine pushes phoneme animations into the avatar's animation queue in sync with AI audio output. A gesture mapper parses AI transcript text in real-time and triggers contextual gestures (nods, smiles, empathy expressions).
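The gesture mapper is essentially a rule table over the streaming transcript. A toy sketch, with trigger words and gesture names that are assumptions rather than the real rule set:

```javascript
// Illustrative keyword-based gesture mapper: scans transcript chunks
// and returns a gesture name for the TalkingHead animation queue.
const GESTURE_RULES = [
  { pattern: /\b(sorry|hard|tired|exhaust|overwhelm)\w*/i, gesture: 'nod_empathy' },
  { pattern: /\b(laugh|funny|wonderful|proud)\w*/i, gesture: 'smile' },
  { pattern: /\?\s*$/, gesture: 'head_tilt' },
];

function mapGesture(transcriptChunk) {
  for (const { pattern, gesture } of GESTURE_RULES) {
    if (pattern.test(transcriptChunk)) return gesture;
  }
  return null; // no match: stay on the idle animation
}
```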
Barge-In: Server-side VAD (Voice Activity Detection) handles interruptions. When the user speaks over the AI, all queued audio is flushed, the viseme engine stops, and the model responds only to the latest user input.
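The client side of barge-in reduces to one operation: drop everything queued the moment the interruption signal arrives. A minimal sketch with illustrative class and method names:

```javascript
// Sketch of the interruption handling: on a VAD "interrupted" signal,
// flush unplayed audio and pending viseme frames together so the avatar
// stops mid-word instead of finishing a stale sentence.
class PlaybackQueue {
  constructor() {
    this.chunks = [];      // decoded audio chunks awaiting playback
    this.visemeQueue = []; // lip-sync frames synced to those chunks
  }
  enqueue(audioChunk, visemes) {
    this.chunks.push(audioChunk);
    this.visemeQueue.push(...visemes);
  }
  // Called when the Live API reports the user spoke over the model
  flush() {
    this.chunks.length = 0;
    this.visemeQueue.length = 0;
  }
}
```

Flushing audio and visemes in the same call is the important detail: if only the audio were dropped, the avatar's mouth would keep moving in silence.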
Post-Call Analysis: Transcripts are saved per-turn to PostgreSQL. On call end, Gemini 2.5 Pro analyzes the full transcript with a structured schema producing signal scores, risk assessment, therapist notes, and trusted person guidance, all in one pass.
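The structured-output half of that step can be sketched as a `responseSchema` plus a normalization pass. The eight signal names match the dashboard; everything else (field names, the clamping rules) is illustrative:

```javascript
// Sketch of the analysis schema and post-processing. The schema object is
// passed to Gemini via { responseMimeType: 'application/json',
// responseSchema: analysisSchema } so the model returns typed JSON.
const SIGNALS = ['mood', 'energy', 'sleep', 'stress',
                 'support', 'selfKindness', 'coping', 'bonding'];

const analysisSchema = {
  type: 'object',
  properties: {
    signals: {
      type: 'object',
      properties: Object.fromEntries(
        SIGNALS.map((s) => [s, { type: 'number' }]) // score per signal
      ),
    },
    themes: { type: 'array', items: { type: 'string' } },
    therapistNotes: { type: 'string' },
    trustedPersonSummary: { type: 'string' },
  },
};

// Normalize whatever the model returns into clamped numbers (or null),
// so the dashboard never renders out-of-range or missing values.
function normalizeSignals(raw = {}) {
  const out = {};
  for (const s of SIGNALS) {
    const v = Number(raw[s]);
    out[s] = Number.isFinite(v) ? Math.max(0, Math.min(10, v)) : null;
  }
  return out;
}
```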
The Multi-Role System
Every user connects via a unique 8-character key (generated with nanoid). The system supports three roles:
Mom: Full access to companion, dashboard, journal, and insights.
Therapist: Connected via key. Sees a dedicated dashboard with conversation analysis and risk signals. Can inject instructions into the AI companion and send notes to the mother.
Trusted Person: Connected via key. Sees a simplified, warm dashboard with a status label and actionable suggestions, with zero jargon.
The dashboard insights engine runs parallel Gemini calls for per-call analysis, daily/weekly/monthly rollups, personalized quick tips, and YouTube resource recommendations, with automatic retries, model fallback, and oEmbed validation for video links.
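The retry-then-fallback pattern mentioned above can be sketched as a small wrapper. `callModel` stands in for the real Gemini invocation, and the model names and retry counts are illustrative:

```javascript
// Sketch of retry-with-model-fallback: try the primary model a few times,
// then fall back to a cheaper model before giving up.
async function analyzeWithFallback(callModel, prompt, {
  models = ['gemini-2.5-pro', 'gemini-2.5-flash'],
  retriesPerModel = 2,
} = {}) {
  let lastError;
  for (const model of models) {
    for (let attempt = 0; attempt < retriesPerModel; attempt++) {
      try {
        return await callModel(model, prompt);
      } catch (err) {
        lastError = err;
        // Simple linear backoff between retries; real code would also
        // detect non-retryable errors (e.g. bad request) and bail early.
        await new Promise((r) => setTimeout(r, 200 * (attempt + 1)));
      }
    }
  }
  throw lastError;
}
```

Running the per-call, rollup, and recommendation analyses through a wrapper like this is what lets them execute in parallel (e.g. via `Promise.all`) without one transient API error sinking the whole dashboard refresh.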
Challenges I ran into
Designing the Dashboard Through Research
I didn't just throw multiple metrics onto a screen. I spent significant time researching what actually matters for postpartum mental health: which signals are meaningful, how to present emotional data without making it feel like a medical report, and how to keep the interface light enough that a tired, overwhelmed mother would actually want to open it.
I studied scales like the PHQ-9 and the Edinburgh Postnatal Depression Scale to understand what signals to track, then deliberately translated all of that into warm, non-intimidating language.
Building three different dashboard views (one for the mom, one for the therapist, one for the trusted person) from the same conversation data, each with completely different tone and vocabulary, required a lot of iteration on both the Gemini prompts and the frontend presentation.
The mom sees encouragement, the therapist sees structured analysis, and the trusted person sees simple, actionable guidance.
Getting that balance right was one of the most time-consuming parts of the project.
Accomplishments that I'm proud of
A 3D companion that feels alive: The procedural lip-sync engine, combined with the gesture mapper that triggers contextual expressions (nodding during heavy moments, smiling during light ones), makes the companion feel genuinely present, not like a chatbot with a face strapped on.
Three dashboards, one conversation: From a single voice call transcript, Bloom Today generates three completely different views: a warm, encouraging reflection for the mom; a structured analysis for the therapist; and a plain-language action guide for the trusted person. All from one Gemini call with structured output.
The trusted person system: This is the feature I'm most proud of. Many postpartum women suffer in silence because they don't know how to ask for help, or they feel guilty for needing it. Bloom Today lets someone who cares see what's really going on, in language they can actually understand, and tells them exactly what they can do. No guilt. No awkwardness. Just "bring her tea tonight."
Full personalization of the AI companion: Letting the mother name, voice, and instruct her own companion, and then letting her therapist layer guidance on top, creates a support experience that feels both deeply personal and professionally informed.
What I learned
Postpartum depression is not "baby blues." It's a condition that deserves the same engineering rigor we give to any health problem. Building Bloom Today forced me to deeply understand the landscape (the PHQ-9, Edinburgh Postnatal Depression Scale, risk stratification) and then translate all of that into language that a tired, overwhelmed, first-time mom would actually find comforting instead of terrifying.
The Gemini Live API is remarkably capable for building real-time conversational agents, but it requires careful engineering around its streaming model, especially around turn management and the gap between "model done generating" and "audio done playing."
Google ADK simplifies agent creation significantly. My companion agent is defined in ~40 lines of code, with session management, conversation memory, and instruction injection handled by the framework.
Structured output from Gemini is a game-changer for building data-driven dashboards from unstructured conversations. The combination of responseSchema + post-processing normalization gives reliable, typed JSON even from complex emotional analysis prompts.
Design matters as much as engineering when your user is a vulnerable human. Every color choice, every word, every animation in Bloom Today was chosen to feel warm, safe, and non-intimidating, because the last thing a struggling mother needs is an app that feels like a hospital form.
What's next for Bloom Today
Persistent memory across calls: Using vector embeddings to give the companion long-term memory of past conversations, so it can naturally reference things the mother shared weeks ago.
Mood journaling: A voice-first journal that captures daily reflections and feeds into the insight engine.
Native mobile app: Wrapping the PWA in a React Native shell for push notifications and background audio.
Built With
- css
- docker
- embla-carousel
- express.js
- google-adk
- google-artifact-registry
- google-cloud
- google-cloud-build
- google-cloud-run
- google-gemini-api
- google-gemini-live-api
- google-oauth-2.0
- google-secret-manager
- html
- javascript
- jwt
- neon
- node.js
- postgresql
- radix-ui
- react
- recharts
- sql
- tailwind-css
- three.js
- vite
- zod

