Reflectif — Project Story
💡 Inspiration
We realized that people are surprisingly bad at recognizing their own emotions in real time. These shifts in mood can significantly worsen mental health, and identifying them early can prevent bigger issues.
That's why Reflectif exists: it catches what you miss.
Existing tools don't help:
- Journaling is retroactive and biased by your own memory.
- Mood trackers rely on self-reporting, which is flawed.
- Chat-based AI analyzes what you type, but misses how you feel.
People are too lazy to track their moods on their own, so we do it for them.
We built Reflectif to be the passive emotional intelligence layer for your life. Because you don't always know how you feel, but your voice does.
🚀 What it does
Reflectif is an AI-powered emotional health companion that listens to your conversations and provides deep, data-driven insights into your emotional patterns.
- Records & Analyzes: It captures audio (with consent) and analyzes vocal prosody (tone, pitch, rhythm) alongside text content.
- Emotional Breakdown: After a conversation, it gives you a phase-by-phase breakdown of how you felt and how you impacted others.
- EQ Training: It tracks your Emotional Intelligence (EQ) growth over time, measuring self-awareness, empathy, and regulation.
- AI Therapist: You can chat with an AI persona that has full context of your emotional history to explore patterns ("Why do I always get anxious in Monday meetings?").
It’s not just a mood tracker; it’s an EQ Trainer.
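The phase-by-phase breakdown above can be sketched as a small aggregation step. This is a minimal illustration, not our production pipeline: the `(start_seconds, {emotion: score})` segment shape and the fixed 60-second phase length are assumptions for the example, standing in for the per-utterance prosody scores the voice model returns.

```python
from statistics import mean

def phase_breakdown(segments, phase_len=60.0):
    """Group per-utterance emotion scores into fixed-length phases
    and report each phase's dominant emotion.

    segments: list of (start_seconds, {emotion: score}) tuples —
    a hypothetical shape for per-utterance prosody scores.
    """
    phases = {}
    for start, scores in segments:
        phases.setdefault(int(start // phase_len), []).append(scores)
    breakdown = []
    for idx in sorted(phases):
        # Average each emotion's score across utterances in the phase,
        # then keep the highest-scoring emotion as the phase label.
        keys = {k for s in phases[idx] for k in s}
        avg = {k: mean(s.get(k, 0.0) for s in phases[idx]) for k in keys}
        breakdown.append((idx, max(avg, key=avg.get)))
    return breakdown

segments = [
    (5.0, {"calmness": 0.7, "anxiety": 0.2}),
    (40.0, {"calmness": 0.6, "anxiety": 0.3}),
    (70.0, {"anxiety": 0.8, "calmness": 0.1}),
]
print(phase_breakdown(segments))  # [(0, 'calmness'), (1, 'anxiety')]
```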
🛠️ How we built it
We built Reflectif as a modern, high-performance web application designed to feel like a premium tool:
- Frontend: Built with Next.js 14 (App Router) and React for a snappy, responsive experience.
- Voice AI: We integrated Hume.ai for its advanced prosody models, capable of detecting 48 distinct emotions from vocal cues alone.
- Backend Logic: We used Next.js API Routes to handle data processing and aggregation.
- Data & Insights: We implemented a local-first SQL database to store conversation analyses and generate global trend graphs efficiently.
- AI Synthesis: We used LLMs to synthesize the raw emotional data into human-readable insights and to power the conversational "Psychology Professor" persona.
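The local-first store and trend graphs boil down to plain SQL aggregation. Here is a self-contained sketch using Python's built-in `sqlite3`; the `analyses` schema and sample rows are made up for illustration, not our actual tables.

```python
import sqlite3

# Minimal sketch of the local-first store (assumed schema):
# one row per (day, emotion) reading from analyzed conversations.
conn = sqlite3.connect(":memory:")  # a file path in the real app
conn.execute("CREATE TABLE analyses (day TEXT, emotion TEXT, score REAL)")
rows = [
    ("2024-06-01", "calmness", 0.6),
    ("2024-06-01", "anxiety", 0.3),
    ("2024-06-02", "anxiety", 0.7),
]
conn.executemany("INSERT INTO analyses VALUES (?, ?, ?)", rows)

# Aggregate a per-day average for one emotion — the kind of query
# that backs a global trend graph.
trend = conn.execute(
    "SELECT day, AVG(score) FROM analyses "
    "WHERE emotion = ? GROUP BY day ORDER BY day",
    ("anxiety",),
).fetchall()
print(trend)  # [('2024-06-01', 0.3), ('2024-06-02', 0.7)]
```

Keeping the database local means trend queries run without a network round trip, which keeps the graphs snappy.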
Challenges we ran into
- The "Zero-State" Problem: Making the app feel valuable before a user records their first conversation required designing a compelling onboarding flow and empty states that educate the user.
🏆 Accomplishments that we're proud of
- Multi-Speaker Isolation: We're proud of handling complex, multi-person conversations. The system intelligently identifies speakers and isolates the user's specific sentiment, ensuring the data reflects their emotional state and not just the general mood of the room.
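Conceptually, the isolation step filters a diarized transcript down to the user's own segments before any sentiment is aggregated. A toy sketch (the `(speaker, {emotion: score})` segment shape and speaker labels are hypothetical):

```python
def isolate_user(segments, user_label):
    """Keep only the target speaker's emotion readings from a
    diarized transcript, so room-wide mood doesn't leak into
    the user's stats."""
    return [scores for speaker, scores in segments if speaker == user_label]

diarized = [
    ("speaker_0", {"joy": 0.8}),
    ("speaker_1", {"anger": 0.6}),  # someone else's frustration
    ("speaker_0", {"joy": 0.5}),
]
user_only = isolate_user(diarized, "speaker_0")
print(len(user_only))  # 2 — the other speaker's anger is excluded
```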
Built With
- backboard.io
- nextjs
- python