G'dayMood: The Evolution of a Digital Soulmate

Inspiration

G'dayMood was born in the trenches of final exam week. Amidst the mounting pressure of textbooks and the isolation of high-stakes testing, I realized that simple mood trackers weren't enough. I yearned for an emotional companion—a "Neural Guardian" that could offer genuine encouragement and bridge the gap between digital data and human warmth.

I've always believed that food is the ultimate comfort, and sensory experiences are deeply tied to psychological recovery. This insight led to the fusion of an emotional diary with a food-centric discovery engine, transforming a stressful exam period into the starting point for a spatial healing interface.

What it does

G'dayMood is a specialized iPhone PWA designed to be a 24/7 emotional sanctuary.

  • Neural Guardianship: Users adopt or create a custom 3D AI guardian that grows with them, powered by Gemini 2.5 Flash Image.
  • Emotional Resilience: It analyzes daily moods and provides a "Positive Feed" of global news headlines to break cycles of anxiety.
  • Sensory Healing: Using Maps Grounding, it recommends real-world "sensory destinations"—restaurants, cafes, and comfort spots—based on the user's current emotional frequency.
  • Live Tree Hole: A low-latency voice interface built on the Gemini Live API lets users speak their worries into a "Tree Hole" and receive empathetic, human-like vocal responses.
  • Deep Insights: Gemini 3 Pro acts as a digital biographer, identifying long-term emotional patterns and "Neural Volatility" that traditional apps miss.

How we built it

The application is built as a high-fidelity Progressive Web App (PWA) optimized for the iPhone experience.

  • Frontend: React + Tailwind CSS, with a "Glassmorphism" design language to match the iOS aesthetic.
  • The Brain (Gemini 3): We leveraged Gemini 3 Pro for long-term pattern analysis and Gemini 3 Flash for real-time logic and pattern alerts.
  • Multimodal Core: We used the Live API for native audio interaction and Gemini 2.5 Flash Image for generating high-end, Pixar-style 3D avatars.
  • Spatial Intelligence: Integrated Google Maps Grounding to ensure that every recommendation is a real-world coordinate, preventing the "AI hallucination" of non-existent places.
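
The grounding setup above can be sketched as a request builder. This is a minimal, hypothetical sketch, not our production code: the model id, the `googleMaps` tool field, and the `MoodContext` shape are illustrative assumptions modeled on the Gemini API's grounding-tool conventions.

```typescript
// Hypothetical sketch: shape a mood-aware, Maps-grounded place request.
// Field names (model id, googleMaps tool) are assumptions for illustration.
interface MoodContext {
  mood: string; // e.g. "anxious"
  lat: number;
  lng: number;
}

function buildGroundedRequest(ctx: MoodContext) {
  return {
    model: "gemini-3-flash", // assumed model id
    contents:
      `The user feels ${ctx.mood} near (${ctx.lat}, ${ctx.lng}). ` +
      `Recommend 3 real, currently-open comfort spots and explain the vibe match.`,
    config: {
      // Maps Grounding keeps every recommendation tied to a real coordinate.
      tools: [{ googleMaps: {} }],
    },
  };
}

const req = buildGroundedRequest({ mood: "anxious", lat: -33.87, lng: 151.21 });
console.log(req.config.tools.length); // 1
```

Keeping the grounding tool in every place-related request is what prevents the model from inventing poetic but non-existent venues.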

Technologies Used

  • Frontend: React 19, Tailwind CSS, Recharts.
  • AI Engine: @google/genai (Google Gemini API).
  • Models: Gemini 3 Pro/Flash, Gemini 2.5 Flash, Gemini 2.5 Flash Image, Gemini Live API.
  • APIs: Google Maps Grounding, Google Search Grounding.
  • Web APIs: Web Audio API (PCM), Geolocation API, MediaDevices API.
  • Platform: PWA (Progressive Web App) for iOS.

Challenges we ran into

The journey was a constant battle between logic and the inherent "randomness" of AI.

  • Unpredictability: The AI's creative nature produced delightful surprises in its empathy, but that same randomness demanded a heavy prompt-engineering investment to keep the user experience stable.
  • AI Hallucinations: In early versions, the model would invent poetic but non-existent restaurants. We had to implement strict constraints and multi-step verification to ground the AI in actual geographic data.
  • iOS Limitations: Handling raw PCM audio streams for the Live API on Safari/iOS required careful handling of the AudioContext (resuming it on a user gesture and converting samples manually) to work within mobile Safari's audio restrictions.

Accomplishments that we're proud of

  • Multimodal Synergy: We successfully combined voice, vision, text, and spatial data into a single, cohesive "Soul System."
  • The "Vibe Match" Logic: Creating a system that doesn't just find food, but finds the feeling of a place that resonates with a specific mood (e.g., matching "Anxiety" with "Warm, Quiet, Dimly-lit Cafes").
  • Latency Mastery: Achieving near-human response times in the voice interface, making the AI feel like a living entity rather than a search engine.
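
The "Vibe Match" idea above amounts to translating a mood into sensory search attributes before asking for places. A minimal sketch, with a made-up mapping (the actual tuning lives in our prompts):

```typescript
// Illustrative mood-to-vibe mapping; values are examples, not the app's data.
type Vibe = { lighting: string; noise: string; keywords: string[] };

const VIBE_MAP: Record<string, Vibe> = {
  anxiety: { lighting: "dim, warm", noise: "quiet", keywords: ["cozy cafe", "tea house"] },
  sadness: { lighting: "soft", noise: "gentle music", keywords: ["comfort food", "bakery"] },
  joy: { lighting: "bright", noise: "lively", keywords: ["rooftop bar", "street food"] },
};

function vibeQuery(mood: string): string {
  // Unknown moods fall back to the calming profile.
  const v = VIBE_MAP[mood.toLowerCase()] ?? VIBE_MAP["anxiety"];
  return `${v.keywords.join(" or ")}, ${v.lighting} lighting, ${v.noise} atmosphere`;
}
```

The resulting string is what gets handed to the grounded place search, so the query describes a feeling rather than a cuisine.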

What we learned

  • AI Compliance & Ethics: Dealing with sensitive emotional data taught us the importance of "Local-First" storage and transparent neural analysis.
  • The Power of Grounding: We learned that AI is most powerful when it is anchored to the real world—whether that's through a news feed, a map coordinate, or a real song recommendation.
  • User Centricity: We discovered that during times of high stress (like finals week), users don't want "data points"—they want resonance.
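
The "Local-First" principle above means mood entries never leave the device by default. A minimal sketch, assuming a hypothetical storage key and entry shape (in the browser, `store` would be `window.localStorage`; a tiny interface keeps the sketch testable anywhere):

```typescript
// Hypothetical entry shape and storage key for illustration only.
interface MoodEntry { date: string; mood: string; note?: string }

const KEY = "gdaymood.entries"; // assumed key name

// Minimal key-value interface matching the subset of Storage we need.
interface KV {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
}

function saveEntry(store: KV, entry: MoodEntry): void {
  const entries: MoodEntry[] = JSON.parse(store.getItem(KEY) ?? "[]");
  entries.push(entry);
  store.setItem(KEY, JSON.stringify(entries));
}

function loadEntries(store: KV): MoodEntry[] {
  return JSON.parse(store.getItem(KEY) ?? "[]");
}
```
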

What's next for G'dayMood

  • Veo Animation: We plan to integrate the Veo video model to bring the 3D guardians to life with cinematic, 1080p emotional animations.
  • Biometric Integration: Connecting with Apple HealthKit to correlate heart rate and sleep patterns with emotional volatility.
  • Collaborative Healing: Introducing a "Resonance Room" where users in similar emotional states can share their positive news feeds and comfort-food discoveries anonymously.
