🫂 About the Project – How’s My Day?

💡 Inspiration

Most apps ask us to be productive, but almost none ask how we actually feel.
We wanted to build something gentle — a single-tap voice check-in that helps people reflect on their mood and feel supported, even when they don’t have the words.

How’s My Day? was inspired by:

  • Burnout in everyday life
  • The emotional fatigue of modern work
  • The growing need for non-intrusive, emotionally intelligent AI

Our goal: help people feel heard in a single voice interaction — no forms, no bots, just a warm human-like response.


🛠️ How We Built It

We combined the power of real-time speech AI and intelligent content retrieval to make the experience feel truly personal:

  • AssemblyAI Universal-Streaming API for low-latency voice-to-text transcription, with emotion detection via its speech_understanding feature
  • Algolia MCP Server to store and retrieve human-written emotional support messages using the detected mood as a query
  • Text-to-Speech Output:
    • Primary: AssemblyAI's native TTS
    • Fallback: ElevenLabs API for realistic human voice output
  • Frontend UI built using:
    • HTML + TailwindCSS
    • Vanilla JavaScript for mic control and audio output
  • Optional: Web Speech API to voice out the result locally if needed
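The core of the pipeline above is turning a detected mood into an Algolia lookup for a human-written support message. The sketch below shows that step in plain JavaScript; the mood labels, synonym map, and query shape are illustrative assumptions, not AssemblyAI's actual output or the exact Algolia MCP Server call:

```javascript
// Illustrative synonym map — the real mood labels come from AssemblyAI's
// emotion detection, and the real lookup goes through the Algolia MCP Server.
const MOOD_SYNONYMS = {
  stressed: ["overwhelmed", "anxious", "burned out"],
  sad: ["down", "low", "blue"],
  happy: ["joyful", "content", "upbeat"],
};

// Build search parameters from a detected mood label.
function buildMoodQuery(mood) {
  const label = mood.trim().toLowerCase();
  const synonyms = MOOD_SYNONYMS[label] || [];
  return {
    // Broaden the text query with synonyms so near-matches still hit.
    query: [label, ...synonyms].join(" "),
    // Restrict results to messages tagged with this mood.
    filters: `mood:${label}`,
    hitsPerPage: 1, // one warm message per check-in
  };
}
```

Keeping this mapping as pure data makes it easy to tune which messages a given mood retrieves without touching the streaming code.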

📚 What We Learned

  • How to use AssemblyAI’s Speech Understanding to extract emotion directly from voice tone
  • Integrating Algolia MCP for ultra-fast and flexible emotional content delivery
  • Designing for empathy and calm in both UI and copy
  • Handling cross-browser voice streaming + transcription with minimal latency
  • Using fallback strategies (e.g., ElevenLabs TTS) to ensure consistent UX
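The fallback strategy mentioned above boils down to trying TTS providers in order and taking the first success. A minimal sketch, where the provider functions are placeholders for the real AssemblyAI, ElevenLabs, or Web Speech API calls:

```javascript
// Try each TTS provider in order; return the first successful result.
// `providers` is an array of async functions — placeholders here for the
// actual AssemblyAI / ElevenLabs / Web Speech API integrations.
async function speakWithFallback(text, providers) {
  const errors = [];
  for (const provider of providers) {
    try {
      return await provider(text); // first success wins
    } catch (err) {
      errors.push(err); // remember the failure, move to the next provider
    }
  }
  throw new Error(
    `All TTS providers failed: ${errors.map((e) => e.message).join("; ")}`
  );
}
```

Because each provider is just an async function, swapping the order (or adding a local Web Speech fallback) is a one-line change.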

🚧 Challenges We Faced

  • AssemblyAI’s streaming transcription works beautifully, but balancing accuracy and responsiveness in the browser was tricky
  • Emotion detection needed fine-tuning: a raw transcript alone doesn’t capture emotion, so we relied on AssemblyAI’s audio emotion inference
  • Voice-to-text streaming and TTS needed careful timing to feel smooth and not robotic
  • Writing human-like emotional support messages that don’t feel generic or AI-generated took time and effort
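The timing challenge above came down to one rule: partial transcripts update the UI immediately, but TTS only fires on the final transcript so the reply never talks over the user. A small gate captures that; the event shape (`{ text, isFinal }`) is our own illustration, not AssemblyAI's exact payload:

```javascript
// Gate between streaming transcription and TTS: partials drive the live
// caption, finals trigger speech. Event shape is illustrative.
function createTranscriptGate(onPartial, onFinal) {
  let latest = "";
  return function handle(event) {
    latest = event.text;
    if (event.isFinal) {
      onFinal(latest); // safe to trigger TTS now
      latest = "";
    } else {
      onPartial(latest); // live caption only, no audio yet
    }
  };
}
```

Separating "show it" from "say it" this way kept the voice reply from feeling robotic or interruptive.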

🧮 Future Possibilities

We’re exploring:

  • Letting users journal their moods over time
  • GPT-powered rephrasing of emotional insights
  • Multilingual support using AssemblyAI + AWS Translate
  • AI-generated affirmations that sound truly personal

Thanks for checking out How’s My Day? — a voice that listens, understands, and gently responds.
