moodly.fm
Inspiration
Every time I want to share how I'm really feeling, I reach for Instagram Stories — then spend 30 seconds trying to remember to set Close Friends. I always forget. The song I wanted to share gets buried in the group chat three hundred messages deep by morning. And Spotify, for all its intelligence, knows what I listen to but has no idea why.
That gap bothered me. There is a $30B+ opportunity sitting at the intersection of emotional expression, private social sharing, and music — and no one is building for it. Spotify's 600 million users generate vast behavioral data, but zero emotional context. Instagram captures emotional expression but optimizes for broadcast and performance, not intimacy. The result: no product today answers the question that actually matters — what is this person feeling right now, and who in their life is feeling the same thing?
moodly.fm is the answer.
What it does
moodly.fm is a private emotional music-sharing platform. You describe how you feel in your own words — or speak it aloud. Claude analyzes your mood, maps it to a color, and curates a playlist of 5 songs that genuinely understand you. Your circle of 30 people sees it privately, by default, not by accident.
The 30-person cap is not a constraint — it is the product's core mechanic. It creates intimacy at scale: a retention mechanic (your circle depends on you showing up), a network effect (every person you add increases your reason to post), and a data quality advantage (intimate sharing produces more honest emotional signal than broadcast). When a friend reacts to your card, you see their name and emoji — a quiet moment of human recognition that no algorithm manufactured.
Over time, every mood entry becomes a labeled data point: text input, Claude-assigned mood tag, valence score, timestamp, and behavioral outcome. The Aura page surfaces this as a monthly mood calendar, a weekly gradient blended from your mood colors, and a liked songs pool. The emotional archive you build is yours — and it gets smarter the longer you use it.
The business impact is real. Emotional context is the most valuable data layer missing from the music industry today. A user who logs "anxious, mind won't stop" before saving five songs has revealed something no streaming algorithm can infer: the precise emotional state that drove a listening decision. At scale, that signal is extraordinarily valuable — for labels understanding which emotional occasions their artists own, for wellness platforms seeking behavioral signals, for advertisers wanting emotional context over demographic proxies. The revenue model: freemium subscriptions, anonymized emotional-music data licensing to DSPs and labels, and a B2B API for mental wellness platforms.
How we built it
Claude (claude-sonnet-4-20250514) is the core engine — not a wrapper around it. The product cannot function without it. The prompt architecture is deliberate: Claude receives the user's free-text mood alongside their onboarding taste profile (genres, energy level, a reference artist) and returns a strict JSON object — a mood tag from a fixed 10-mood taxonomy, and 5 Spotify-linked songs. That mood tag then drives the entire visual system: card background color, calendar dot, aura gradient contribution. Every surface of the product speaks the same emotional language.
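A minimal sketch of what that prompt architecture could look like. The function name, the example mood labels, and the exact prompt wording are illustrative assumptions, not the shipped code; only the model ID and the Messages API shape come from the project description.

```javascript
// Illustrative 10-mood taxonomy — the real labels may differ.
const MOODS = [
  "joyful", "calm", "melancholic", "heartbroken", "anxious",
  "focused", "energetic", "nostalgic", "angry", "hopeful",
];

// Hypothetical helper: build the Messages API request body from the
// user's free-text mood and their onboarding taste profile.
function buildMoodRequest(moodText, taste) {
  return {
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    system:
      `You are a music curator. Reply with strict JSON only: ` +
      `{"mood": one of ${JSON.stringify(MOODS)}, ` +
      `"songs": [5 objects with "title", "artist", "spotifyUrl"]}.`,
    messages: [{
      role: "user",
      content:
        `Mood: ${moodText}\n` +
        `Taste profile: genres=${taste.genres.join(", ")}, ` +
        `energy=${taste.energy}, reference artist=${taste.referenceArtist}`,
    }],
  };
}

// The body would be POSTed to https://api.anthropic.com/v1/messages with
// "x-api-key" and "anthropic-version" headers, and the JSON in the first
// response content block parsed and validated against MOODS.
```

Constraining the output to a fixed taxonomy is what lets one Claude response drive every downstream surface: the mood tag indexes directly into the color system.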
Voice mood logging via the Web Speech API reduces input friction to near zero — you feel something, you open the app, you speak. The onboarding quiz builds a taste fingerprint that personalizes Claude's recommendations from the very first entry. Liked and disliked signals on individual songs feed back as explicit preference labels, forming the foundation of the predictive layer.
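The voice path can be sketched in a few lines with the standard Web Speech API. The function names here are illustrative; the guard matters because `SpeechRecognition` ships prefixed in Chromium and is absent in some browsers, so the app must fall back to the text input.

```javascript
// Hypothetical sketch: start dictation and hand the transcript to a callback.
function startMoodDictation(onText) {
  const SR = typeof window !== "undefined" &&
    (window.SpeechRecognition || window.webkitSpeechRecognition);
  if (!SR) return false; // no speech support — fall back to typing

  const rec = new SR();
  rec.lang = "en-US";
  rec.interimResults = false; // only deliver the final transcript
  rec.onresult = (e) => onText(normalizeTranscript(e.results[0][0].transcript));
  rec.start();
  return true;
}

// Trim and collapse whitespace before the transcript goes to Claude.
function normalizeTranscript(raw) {
  return raw.trim().replace(/\s+/g, " ");
}
```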
The entire application ships as a single HTML file — no framework, no build step, no backend — deployed on Vercel. This was a deliberate architectural constraint that forced every feature to earn its place.
Challenges we ran into
The hardest technical challenge was deceptively small: apostrophes. A single "can't" inside a JavaScript string literal, inside an HTML attribute, inside a template literal silently kills the entire script block — and "doLogin is not defined" is the only clue. We learned to run every build through new Function() syntax validation before any deployment.
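A minimal version of that guard: `new Function()` parses the source as a function body without executing it, so a stray apostrophe surfaces as a catchable SyntaxError before the file ever reaches the browser. The function name is illustrative.

```javascript
// Compile the extracted script source; return null if it parses,
// or the syntax error message if it does not.
function validateScript(source) {
  try {
    new Function(source); // parses without running any of the code
    return null;
  } catch (err) {
    return err.message; // e.g. "Invalid or unexpected token"
  }
}
```

For example, `validateScript("const msg = 'can't';")` returns an error message, while well-formed source returns null.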
The harder design challenge was the card interaction model. Own cards, friend cards, and wall cards look visually identical — same color system, same playlist layout — but carry completely different interaction grammars. Your own card has recommendation feedback (like/dislike per song and per playlist). A friend's card has react and listen together. A wall card has only react. Building that distinction cleanly without fragmenting the UI required multiple full redesign passes.
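One way to keep those three grammars from fragmenting the UI is a single lookup that the shared card template consults for its action row. The action names below are illustrative, not the shipped identifiers.

```javascript
// Hypothetical map from card type to its permitted interactions.
const CARD_ACTIONS = {
  own:    ["likeSong", "dislikeSong", "likePlaylist", "dislikePlaylist"],
  friend: ["react", "listenTogether"],
  wall:   ["react"],
};

// The one template renders identically for every card; only this differs.
function allowedActions(cardType) {
  return CARD_ACTIONS[cardType] || [];
}
```

The design benefit is that adding a fourth card type, or a new interaction, touches one table instead of three render paths.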
Making the responsive layout feel intentional — a sticky input zone and bottom tabs on mobile, a persistent sidebar on desktop, both sharing the same codebase — in a zero-framework single-file architecture was also genuinely difficult.
Accomplishments that we're proud of
We are most proud of the fact that Claude is irreplaceable in this product — not decorative. Remove the Claude integration and the core value proposition disappears entirely. The mood-to-color-to-playlist pipeline, the taste fingerprint personalization, and the structured emotional data model are all Claude-native. That is what the "Best Use of Claude" criterion asks for, and we built it that way from the start.
We are also proud of the 10-mood color taxonomy — melancholic is indigo, heartbroken is rose, calm is sage, focused is steel blue — which gives the product a visual identity that is immediately legible and emotionally resonant. The aura gradient, which blends your mood colors proportionally over time, is genuinely beautiful and genuinely meaningful at the same time.
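The proportional blend behind the aura gradient can be sketched as a frequency-weighted average in RGB space. The hex values are illustrative stand-ins for the taxonomy colors named above, and the function name is assumed.

```javascript
// Illustrative mood-to-color mapping (subset of the 10-mood taxonomy).
const MOOD_COLORS = {
  melancholic: "#4b0082", // indigo
  heartbroken: "#f43f5e", // rose
  calm:        "#9caf88", // sage
  focused:     "#4682b4", // steel blue
};

// Blend the week's moods: each color is weighted by how often it appeared.
function blendMoodColors(counts) {
  let r = 0, g = 0, b = 0, total = 0;
  for (const [mood, n] of Object.entries(counts)) {
    const hex = MOOD_COLORS[mood];
    if (!hex) continue;
    r += parseInt(hex.slice(1, 3), 16) * n;
    g += parseInt(hex.slice(3, 5), 16) * n;
    b += parseInt(hex.slice(5, 7), 16) * n;
    total += n;
  }
  if (!total) return "#000000"; // no logged moods this week
  const toHex = (v) => Math.round(v / total).toString(16).padStart(2, "0");
  return `#${toHex(r)}${toHex(g)}${toHex(b)}`;
}
```

A week of three melancholic days and one calm day therefore leans indigo, which is exactly the "genuinely meaningful" property: the gradient is a literal summary statistic of the week.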
And we are proud of shipping a complete, polished, production-deployed product in a single hackathon window — with auth, onboarding, a living feed, a calendar, a community wall, friend reactions, listen together sessions, and a save-as-image card export — all in one file.
What we learned
The most important product insight: the emotional contract between a product and its user is more defensible than any feature set. moodly.fm's contract — private by default, small circle by design, your emotional history belongs to you — cannot be replicated by Spotify or Instagram without breaking their existing business models. That structural immunity is what makes moodly.fm interesting as a company, not just as an app.
On the technical side: Claude performs dramatically better as a music curator when given structured output constraints and personal context simultaneously. The jump from generic recommendations to genuinely resonant ones comes from the taste profile injection, not from the mood text alone. The model needs to know who you are before it can understand what you feel. That insight generalizes to any Claude-powered personalization product.
What's next for moodly.fm
The immediate next step is the predictive layer — the genuine Track 3 capability already embedded in the architecture. Every mood entry is a labeled training point. After 30 days, the product can begin to predict: based on your last three Tuesdays, here is what you might be feeling before you even open the app. Proactive playlists, delivered before you ask.
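A toy version of that weekday prior, assuming each stored entry carries a mood tag and a timestamp: predict today's mood as the most frequent mood logged on the same weekday. A real model would also weight recency and the behavioral outcomes mentioned earlier; the function name is illustrative.

```javascript
// Given past entries ({ mood, timestamp }) and a weekday (0=Sun..6=Sat),
// return the most frequent mood logged on that weekday, or null.
function predictMoodForDay(entries, weekday) {
  const counts = {};
  for (const { mood, timestamp } of entries) {
    if (new Date(timestamp).getDay() !== weekday) continue;
    counts[mood] = (counts[mood] || 0) + 1;
  }
  let best = null;
  for (const [mood, n] of Object.entries(counts)) {
    if (!best || n > counts[best]) best = mood;
  }
  return best; // null when there is no history for that weekday
}
```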
Beyond that: Spotify OAuth for true playlist saving and 30-second previews. Real-time circle updates so you see your friends' cards appear live. Mutual resonance notifications — a quiet nudge when someone in your circle is feeling something close to what you felt yesterday. A mood sharing card exportable as a styled image, designed to be posted anywhere without over-explaining.
The longer arc is the data business: an anonymized emotional-music dataset, a licensing model for DSPs and labels, and a B2B API that lets mental wellness platforms embed Claude-powered mood-to-music mapping in their own products. The consumer app is the data flywheel. The data flywheel is the business.
moodly.fm is not just an app. It is the first piece of infrastructure for emotional music intelligence — and it starts with making people feel genuinely understood.
Team Member
My Nguyen Thao Phan
Built With
- anthropic-messages-api
- claude-api-(claude-sonnet-4-20250514)
- css
- google-fonts
- html
- html2canvas
- javascript
- web-speech-api