Inspiration
One in three people struggle with their mental health, and most never ask for help. Not because they don't want to, but because they don't know how. We built Mood Journal because people shouldn't have to find the words to ask for support.
What it does
You write a short daily entry and rate your mood. The app runs sentiment analysis on your words, has Gemini AI respond like a warm friend, and learns your personal emotional rhythm over time. When your pattern shows a sustained dip, your trusted friends get a quiet notification to check in. No diagnosis, no drama. Just the people who care about you, showing up.
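The "sustained dip" check described above could look something like the sketch below: compare the average of a user's most recent mood ratings against their longer-term baseline. The window sizes and threshold here are illustrative assumptions, not the app's actual tuning.

```python
# Hypothetical sketch of the sustained-dip check that triggers a
# trusted-friend notification. Window sizes and threshold are
# illustrative guesses, not Mood Journal's real parameters.
from statistics import mean

def sustained_dip(ratings, recent_window=5, baseline_window=30, threshold=1.5):
    """Return True if the recent average mood sits well below baseline.

    ratings: chronological list of 1-10 mood scores, newest last.
    """
    if len(ratings) < recent_window + baseline_window:
        return False  # not enough history to judge a pattern yet
    recent = ratings[-recent_window:]
    baseline = ratings[-(recent_window + baseline_window):-recent_window]
    return mean(baseline) - mean(recent) >= threshold

# Example: a stable month followed by a sharp, sustained drop
history = [7, 6, 7, 8, 7] * 6 + [4, 3, 4, 3, 3]
```

Requiring a minimum amount of history before ever returning True keeps the check quiet for new users, matching the "no drama" goal: a single bad day never pages anyone.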
How we built it
Python and Flask backend, PyTorch LSTM that trains on each user's mood history, and Google Gemini for reflection prompts, pattern analysis, and escalation checks. Chart.js handles mood visualization on the frontend, and a local sentiment analyzer runs on every entry instantly.
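A per-user mood model like the one described might be sketched in PyTorch as below, assuming the LSTM consumes a sequence of daily (mood rating, sentiment score) pairs and predicts the next day's mood. The layer sizes and input features are illustrative guesses, not the app's actual architecture.

```python
# Minimal sketch of a per-user mood LSTM in PyTorch. The input
# features and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class MoodLSTM(nn.Module):
    def __init__(self, input_size=2, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # next-day mood prediction

    def forward(self, x):
        # x: (batch, days, features) — one feature vector per journal entry
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last timestep

model = MoodLSTM()
week = torch.rand(1, 7, 2)  # one user, 7 days, (mood, sentiment) per day
prediction = model(week)    # shape (1, 1)
```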
Challenges we ran into
One of our biggest challenges was the prompts we fed to the AI. Our initial prompts made the responses sound hollow and unhelpful, so we went through many iterations, especially for the safety escalation feature, trying to strike the right balance between genuinely caring and not alarming someone who is already having a hard day.

Second, after finishing the general structure, we spent a lot of time figuring out how to make recording a journal entry more appealing. We went through three major overhauls of our UI design and added features like a streak system to encourage users to keep journaling.

Finally, our sentiment analysis had a critical bug early on: it would incorrectly flag phrases like "not happy" as positive, treating the negation word as invisible. Once we identified the issue, we implemented proper negation handling so the analyzer interprets context rather than just matching keywords.
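A negation fix like the one described could work roughly as follows: a keyword-based score that flips polarity when a negation word appears shortly before a sentiment word. The word lists and window size below are illustrative stand-ins, not the app's actual lexicon.

```python
# Hedged reconstruction of keyword sentiment with negation handling.
# Word lists and the lookback window are illustrative assumptions.
POSITIVE = {"happy", "great", "calm", "good"}
NEGATIVE = {"sad", "tired", "anxious", "bad"}
NEGATIONS = {"not", "never", "no", "isn't", "wasn't", "don't"}

def sentiment_score(text, window=2):
    words = text.lower().split()
    score = 0
    for i, word in enumerate(words):
        polarity = 1 if word in POSITIVE else -1 if word in NEGATIVE else 0
        if polarity == 0:
            continue
        # Flip polarity if a negation word appears just before this one
        prior = words[max(0, i - window):i]
        if any(w in NEGATIONS for w in prior):
            polarity = -polarity
        score += polarity
    return score

sentiment_score("I am not happy today")  # negation flips "happy" to -1
sentiment_score("I am happy today")      # scores +1
```

Checking a small window before each sentiment word (rather than the whole sentence) keeps "I was sad yesterday but not today, I feel happy" from flipping the wrong terms.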
Accomplishments that we're proud of
The trusted friend notification system. It removes the biggest barrier to getting support: having to ask for it.
What we learned
We learned that building something that feels human is a lot harder than building something that works. Getting the AI to respond with genuine warmth, rather than a generic AI tone, required far more prompt iteration than we expected.
On the technical side, we learned how much environment setup can silently break things. A library conflict between PyTorch and Anaconda cost us hours of debugging with no error message to guide us, which taught us the value of clean, isolated environments from day one.
We also learned a lot about building a full-stack application end to end as a team, from structuring a Python backend with multiple modules, to connecting it to a frontend, to thinking through the user experience at every step. More importantly, we learned that mental health tools carry real responsibility. Every design choice, from how a warning is worded to what color a button is to whether a message feels gentle or alarming, has a human on the other side. That shaped every decision we made.
What's next for Mood Journal
Persistent accounts, real push notifications, and streak-based social features. Long term, anonymized trend data to help universities spot when their communities are collectively struggling.