Inspiration
Art speaks when words fail. But what if a painting could speak directly to me? MoodMuse was born from that question. During my AI in Action class, I wanted to build something emotionally intelligent — not just functional or impressive, but meaningful. I’ve always believed that art holds emotional power, and I wondered if I could use AI to bring that power closer to our daily lives. What if I could match someone’s mood with a public domain masterpiece? What if technology could feel… gentle? That became the heart of MoodMuse — an emotionally aware art companion designed for reflection, connection, and a little bit of daily beauty.
What it does
MoodMuse invites you to check in with yourself each day. You select how you’re feeling, write a short journal entry, and the app recommends an artwork from the MET (Metropolitan Museum of Art) that emotionally resonates with your state — either by mirroring it or offering balance. Alongside the artwork, I include a calming phrase and gentle visuals that make the experience feel slow and intentional. You can revisit past moods, build your own collection of resonant artworks, and track your emotional journey over time. I also see MoodMuse as a modular framework: its pipeline could be reused or adapted to support emotionally intelligent experiences in education, therapy, or even museum engagement. The whole interface was carefully designed for emotional safety — warm tones, minimal distractions, and soft transitions.
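The core matching idea, recommending art that either mirrors your mood or offers balance, could be sketched roughly as follows. The tag vocabulary, mood map, scoring, and artwork data here are illustrative assumptions, not the app's actual implementation:

```python
# Illustrative sketch of MoodMuse-style matching: recommend the artwork whose
# emotional tags best overlap a target tag set derived from the user's mood.
# The tag sets and artworks below are hypothetical examples.

MIRROR_TAGS = {
    "melancholy": {"somber", "quiet", "blue"},
    "joyful": {"bright", "playful", "warm"},
}

BALANCE_TAGS = {
    "melancholy": {"bright", "warm", "hopeful"},  # offer contrast to a low mood
    "joyful": {"calm", "serene"},
}

def recommend(mood, artworks, mode="mirror"):
    """Return the artwork whose tags overlap the target tag set the most."""
    target = (MIRROR_TAGS if mode == "mirror" else BALANCE_TAGS).get(mood, set())
    return max(artworks, key=lambda art: len(target & set(art["tags"])))

artworks = [
    {"title": "Wheat Field", "tags": ["bright", "warm", "open"]},
    {"title": "Winter Scene", "tags": ["somber", "quiet", "cold"]},
]

print(recommend("melancholy", artworks, mode="mirror")["title"])   # Winter Scene
print(recommend("melancholy", artworks, mode="balance")["title"])  # Wheat Field
```

A simple overlap count like this keeps the behavior explainable, which matters when the goal is emotional safety rather than raw accuracy.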
How I built it
I built MoodMuse solo, across the full stack. The frontend is plain HTML, CSS, and JavaScript, kept deliberately minimal to preserve the calm feel. A Python backend built with Flask serves the recommendations, and user moods, journal entries, and saved collections are stored in MongoDB through PyMongo, with Firebase and Google Cloud Functions supporting the app's cloud services. For the art itself, I pulled open-access works from the MET Museum API and ran them through an AI pipeline: BLIP-2 (via the Hugging Face Transformers library) generates descriptions of each artwork, which I process with Pillow and NumPy and turn into emotional tags, with Vertex AI supporting the heavier cloud-side processing. Those tags are what the recommendation step matches against your mood.
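As a rough sketch of the tagging step, a BLIP-2 caption can be post-processed into emotional tags with a keyword lookup against a curated lexicon. The lexicon and caption below are hypothetical, and the real pipeline's filtering is more involved:

```python
# Hypothetical post-processing step: turn a BLIP-2 caption into a small set of
# emotional tags by matching caption words against a hand-curated lexicon.
# The lexicon is an illustrative assumption, not MoodMuse's actual mapping.

EMOTION_LEXICON = {
    "stormy": "turbulent", "dark": "somber", "sunlit": "hopeful",
    "calm": "serene", "lonely": "melancholy", "golden": "warm",
}

def caption_to_tags(caption):
    """Map caption words to emotion tags, deduplicated in order of appearance."""
    words = caption.lower().replace(",", " ").split()
    seen, tags = set(), []
    for w in words:
        tag = EMOTION_LEXICON.get(w)
        if tag and tag not in seen:
            seen.add(tag)
            tags.append(tag)
    return tags

print(caption_to_tags("a dark, stormy sea under a sunlit break in the clouds"))
# ['somber', 'turbulent', 'hopeful']
```

In practice the raw model output needs the kind of thoughtful filtering described below, since captions are noisy and not every descriptive word carries emotional weight.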
Challenges I ran into
One of the biggest challenges was processing such a large volume of artwork data and generating meaningful emotional tags. BLIP-2 was powerful, but it still required thoughtful filtering and interpretation. Designing a calm, emotionally supportive interface also took time — I didn’t want anything to feel cold or overengineered. Another challenge was technical orchestration: connecting frontend, backend, AI pipeline, and user data in a way that feels seamless for the user. Doing this solo forced me to stretch across every layer — and learn a ton in the process.
Accomplishments that I'm proud of
I’m proud that I finished this project end-to-end, by myself, and that it actually works — not just technically, but emotionally. I made something I genuinely love using. I brought together AI, psychology, and art in a way that feels thoughtful and human. I’m especially proud of the modular design, the curated tone, and the fact that I did it all within a short timeline, without cutting corners.
What I learned
This project pushed me to combine AI and UX in a new way. I deepened my understanding of computer vision, NLP, and emotional modeling, while also improving how I think about frontend development and user-centered design. I also gained confidence in building solo — from setting up a cloud architecture to debugging frontend state issues. But most of all, I learned that tech doesn’t have to be cold. It can feel safe, soft, and poetic — and that’s what I want to keep building.
What's next for MoodMuse
I’d love to expand MoodMuse with artworks from other open-access museum collections, introduce audio-guided recommendations, and allow deeper journaling prompts. I’m also exploring ways to collaborate with real-world institutions — especially museums and mental health initiatives — to bring emotional AI into more spaces. And of course, I’m already thinking about how to make the recommendation model even smarter — by including user feedback, longer text analysis, and more nuanced emotional tagging.
Built With
- blip-2
- css
- firebase
- flask
- google-cloud-functions
- html
- image-processing
- javascript
- met-museum-api
- mongodb
- natural-language-processing
- numpy
- open-access
- pillow
- pymongo
- python
- transformers
- vertex-ai