🪞 MoodMirror AI — From Emotion to Action

💡 Inspiration

We've all had those days where we feel something deeply but can't quite put it into words — a heaviness, a restlessness, or an excitement that has nowhere to go. Most apps ask you what you want. MoodMirror AI asks how you feel first — and then figures out what you need.

The idea came from a simple question: what if your phone could read your face, understand your emotion, and instantly respond with music, motivation, and even food suggestions tailored to your mood?

We were also inspired by how disconnected most wellness and lifestyle apps are. You have a music app, a food delivery app, a journaling app — but nothing that ties them together through emotion. MoodMirror AI is our attempt to build that bridge.


🔨 How We Built It

MoodMirror AI is a fully browser-based web app — no installation needed. Here's the tech stack:

| Layer | Technology |
| --- | --- |
| Frontend | HTML5, CSS3, Vanilla JavaScript |
| Face Detection | face-api.js (TinyFaceDetector + FaceExpressionNet) |
| AI Brain | Google Gemini API (gemini-2.0-flash) |
| Voice I/O | Web Speech API (built into Chrome) |
| Fonts | Google Fonts — Syne + DM Sans |

The Two Core Systems

1. Mood Mirror

  • The webcam feed is processed in real-time using face-api.js neural network models
  • Every 1.2 seconds, the app detects the dominant facial expression (happy, sad, angry, fearful, disgusted, surprised, neutral)
  • The detected emotion is mapped to a mood label and passed to Gemini with any extra context the user types or speaks
  • Gemini returns a JSON object with: mood summary, 3 song recommendations, a meme concept, a motivational quote, and a 4-step action plan
  • The entire page background shifts color smoothly based on the detected mood
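The detection tick described above boils down to picking the highest-probability label from the expressions map face-api.js returns. A minimal sketch — the `applyMood` helper and the interval wiring are hypothetical, not taken from the app's code:

```javascript
// Pick the dominant expression from a face-api.js expressions map
// (label → probability, as returned by withFaceExpressions()).
function dominantExpression(expressions) {
  let best = { label: 'neutral', confidence: 0 };
  for (const [label, confidence] of Object.entries(expressions)) {
    if (confidence > best.confidence) best = { label, confidence };
  }
  return best;
}

// Illustrative wiring, run every 1.2 seconds:
// setInterval(async () => {
//   const result = await faceapi
//     .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
//     .withFaceExpressions();
//   if (result) applyMood(dominantExpression(result.expressions));
// }, 1200);
```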
2. Food Genie

  • Users type or tap a craving (e.g. "something spicy", "comfort food")
  • Their current mood is automatically passed as context to Gemini
  • Gemini returns 5–6 personalized food suggestions with descriptions and tags
  • Voice output announces the top picks using the Web Speech API
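The voice-output step can be sketched as follows. `buildAnnouncement` and the suggestion field `name` are assumptions about the app's data shape; `SpeechSynthesisUtterance` and `speechSynthesis.speak` are the Chrome-native Web Speech API calls mentioned above.

```javascript
// Build a short spoken summary from Gemini's suggestion list
// (the { name } field is an assumed shape, not confirmed by the source).
function buildAnnouncement(suggestions, topN = 2) {
  const picks = suggestions.slice(0, topN).map(s => s.name).join(' and ');
  return `Top picks for your mood: ${picks}.`;
}

// Speak it with the built-in Web Speech API (no third-party library).
function announceTopPicks(suggestions) {
  const utterance = new SpeechSynthesisUtterance(buildAnnouncement(suggestions));
  speechSynthesis.speak(utterance);
}
```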

The math behind confidence thresholding for mood auto-selection:

$$\text{Auto-select mood} = \begin{cases} \text{true} & \text{if confidence} > 0.55 \\ \text{false} & \text{otherwise} \end{cases}$$
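In code, that rule is a one-line guard (the 0.55 comes straight from the formula; the function name is ours):

```javascript
// Auto-select the detected mood only when the detector is reasonably sure;
// below the threshold, the app waits for a manual mood button instead.
const CONFIDENCE_THRESHOLD = 0.55;

function shouldAutoSelectMood(confidence) {
  return confidence > CONFIDENCE_THRESHOLD;
}
```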


🧠 What We Learned

  • face-api.js is surprisingly powerful for a lightweight browser library — running real-time emotion detection with less than 100 lines of code
  • Prompt engineering matters a lot — getting Gemini to respond in strict JSON every time required careful instruction design
  • Web Speech API is available natively in Chrome with zero dependencies — no third-party library needed for voice input or output
  • UX matters as much as tech — making the mood background shift colors as a visual signal made the app feel alive in a way no feature list could
  • How to integrate multiple AI systems (face detection + language model + speech) into one seamless user experience

🚧 Challenges We Faced

1. **face-api.js model loading.** The neural network model files (~6MB) must finish loading from a CDN before detection works. On slow connections this caused confusing blank states. We solved it by showing a live status indicator and degrading gracefully if the models fail to load.
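A minimal sketch of that flow, assuming a `setStatus` callback drives the status indicator; `loadFromUri` is the real face-api.js loader, while `MODEL_URL` and the status strings are placeholders:

```javascript
const MODEL_URL = 'https://cdn.example.com/face-api-models'; // hypothetical CDN path

// Load both models in parallel, reporting progress to the UI.
// On failure, the app degrades: manual mood buttons still work.
async function loadModels(faceapi, setStatus) {
  setStatus('loading models');
  try {
    await Promise.all([
      faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
      faceapi.nets.faceExpressionNet.loadFromUri(MODEL_URL),
    ]);
    setStatus('ready');
    return true;
  } catch (err) {
    setStatus('face detection unavailable');
    return false;
  }
}
```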

2. **Getting consistent JSON from Gemini.** Gemini sometimes wrapped its JSON in markdown code fences (`` ```json ``) even when we told it not to. We added a post-processing step to strip those fences before parsing.
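The stripping step might look like this (a sketch; the exact regexes are our reconstruction, not the app's code):

```javascript
// Remove a leading ```json (or bare ```) fence and a trailing ```
// that Gemini sometimes adds, then parse the remaining JSON.
function extractJson(text) {
  const cleaned = text
    .replace(/^\s*```(?:json)?\s*/i, '') // leading fence, with or without "json"
    .replace(/\s*```\s*$/, '')           // trailing fence
    .trim();
  return JSON.parse(cleaned);
}
```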

3. **Camera mirroring + face bounding box alignment.** The video feed is CSS-mirrored so it feels like a selfie, but the canvas overlay for the face box had to be mirrored independently, or the box appeared on the wrong side of the face. Getting the coordinate transform right took several attempts.
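The fix can be illustrated with a pure helper that reflects a detection box across the canvas width (the `{ x, y, width, height }` box shape follows face-api.js; the actual drawing code is omitted):

```javascript
// With the video CSS-mirrored, the detected box's x must be reflected
// across the canvas width before drawing, or the box lands on the
// wrong side of the face.
function mirrorBox(box, canvasWidth) {
  return { ...box, x: canvasWidth - box.x - box.width };
}

// A box at x=100, width=50 on a 640px canvas draws at x = 640 - 100 - 50 = 490.
```

Applying the transform twice returns the original box, which made it easy to sanity-check.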

4. **Mood detection accuracy.** face-api.js is good, but not perfect — a neutral resting face sometimes reads as "sad". We added a 55% confidence threshold so the app only auto-selects a mood when it's reasonably certain, and users can override with the manual buttons at any time.


🚀 What's Next

  • Giphy Integration — show a real GIF in the Meme Vibes card
  • Spotify Playback — actually play mood-matched songs inside the app
  • Mood History Timeline — track your emotional journey across a day with a chart
  • Zomato / Swiggy Deep Links — one tap to order the suggested food
  • PWA Support — install MoodMirror AI to your phone home screen like a native app

🛠️ Built With

html css javascript face-api.js google-gemini web-speech-api google-fonts
