Inspiration
I dance house and hip-hop. When I look for music, I don't think in genres or keywords; I think in feelings. "The taste of childhood." "Alone in space after finishing a sci-fi novel." "That feeling right before it rains."
No search bar handles that. And it's not just me — everyone has had the experience of knowing exactly what they want to feel, but having no way to tell Spotify. You scroll through playlists with vague names like "Chill Vibes" and hope something clicks. You ask friends. You give up and replay the same 50 songs.
The gap is real: humans experience music emotionally, but every music tool speaks in keywords. Millions of perfect songs never get found because there's no way to search by feeling.
I wanted to build something that actually understands what I mean.
What it does
Attune is an AI agent that translates abstract human feelings into real song recommendations.
You describe what you feel — as vague or poetic or personal as you want — and Attune:
- Decomposes your input into structured emotional and musical dimensions. "The taste of childhood" becomes: nostalgia, warmth, innocence, gentle tempo, acoustic warmth, major key simplicity.
- Asks you to clarify when it matters. "Childhood" could mean music FROM your childhood or music that FEELS like childhood — Attune asks instead of guessing, because your feelings are yours to define.
- Searches from multiple angles — not one keyword query, but 5-7 parallel strategies across Spotify's catalog and community discussions where real people describe how songs make them feel.
- Explains why each song matches. Not just "here's a playlist" — but "this song matches because the sparse production and echoing vocals create the same sense of vast solitude you described."
- Refines based on your feedback. "More melancholy, less playful" — and it adjusts the emotional dimensions and searches again.
The thinking process is visible the entire time. You see how your feeling got translated into music. That transparency is the product — it builds trust and actually teaches you about your own taste.
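The structured decomposition from the first step might look something like this. This is an illustrative sketch, not Attune's actual schema; every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class EmotionalProfile:
    """One feeling, decomposed into structured dimensions (illustrative schema)."""
    raw_input: str
    emotions: list[str]     # e.g. nostalgia, warmth, innocence
    # Internal musical reasoning scaffold, borrowed from Spotify's
    # audio-feature vocabulary but never sent to any API:
    valence: float          # 0.0 (sad) .. 1.0 (happy)
    energy: float           # 0.0 (calm) .. 1.0 (intense)
    tempo_hint: str         # e.g. "gentle", "driving"
    texture: list[str] = field(default_factory=list)  # e.g. "acoustic", "sparse"

profile = EmotionalProfile(
    raw_input="the taste of childhood",
    emotions=["nostalgia", "warmth", "innocence"],
    valence=0.65,
    energy=0.3,
    tempo_hint="gentle",
    texture=["acoustic", "major-key simplicity"],
)
```

In practice Claude produces this structure itself; the point is that the downstream search steps consume a typed profile rather than the raw sentence.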
How we built it
Attune uses Claude as a multi-hop reasoning agent with Spotify search and web search as grounded tools.
The core technical problem is translating abstract concepts into effective search queries. A single keyword translation fails — "taste of childhood" doesn't map to any useful Spotify search term. Our solution: structured decomposition + parallel multi-angle search.
The agent follows a clear reasoning chain:
- Hop 1 — Decompose: Claude breaks the abstract input into emotional dimensions (nostalgia, intensity, texture) and an internal musical reasoning scaffold (approximate valence, energy, tempo, acousticness). This scaffold is borrowed from Spotify's audio feature framework — we use it as a thinking structure, not an API call.
- Hop 2 — Search: Claude generates diverse search strategies from different angles (mood keywords, reference artists, thematic content, sonic texture, community discussions) and executes them in parallel through tool use.
- Hop 3 — Rank and explain: Claude merges all results, deduplicates, ranks against the original feeling, and writes a specific explanation for each recommendation.
- Hop 4 — Refine: The user adjusts, and Claude shifts the emotional dimensions while remembering the original context.
We built this with Claude's tool-use capability — Claude reasons through each step, decides which tools to call, evaluates the results, and explains its logic. The Streamlit UI renders each hop visibly so the user can follow the agent's thinking in real time.
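The fan-out of Hop 2 and the merge of Hop 3 can be sketched in plain Python. Here `search_fn` is a stand-in for the real Spotify and web-search tool calls, and the query strings are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def run_strategies(strategies, search_fn, limit=10):
    """Execute every search strategy in parallel, then merge the results,
    deduplicating by track id while preserving first-seen order."""
    with ThreadPoolExecutor(max_workers=len(strategies)) as pool:
        result_lists = list(pool.map(search_fn, strategies))
    seen, merged = set(), []
    for results in result_lists:
        for track in results:
            if track["id"] not in seen:
                seen.add(track["id"])
                merged.append(track)
    return merged[:limit]

# Stub search tool: the real one hits Spotify's /v1/search endpoint
# or a web-search API. Each query returns one shared hit plus one
# unique hit, so the dedupe step has something to do.
def fake_search(query):
    return [{"id": "shared", "name": "common hit"},
            {"id": query, "name": f"hit for {query!r}"}]

queries = [
    "nostalgic warm acoustic songs",          # mood keywords
    "songs that feel like childhood reddit",  # community discussions
    "gentle major-key folk",                  # sonic texture
]
merged = run_strategies(queries, fake_search)
# "shared" survives once; each strategy still contributes its unique hit
```

After the merge, the ranking and per-song explanations in Hop 3 are pure Claude reasoning over `merged`, not more code.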
An important constraint shaped our design: Spotify deprecated its audio features and recommendations APIs in November 2024. We couldn't use parametric searches like `target_valence=0.6`. This forced us to build something more interesting — Claude reasons about music using the same dimensional framework internally, but grounds its recommendations through keyword search and community data. The result is actually richer than parametric matching, because Claude can reason about WHY a song feels a certain way, not just match numbers.
Challenges we ran into
Spotify API deprecation was the biggest surprise. We planned to use audio feature parameters to find songs by valence, energy, and tempo. When we discovered those endpoints return 403 for new apps, we had to rethink the entire search strategy. We turned the constraint into a strength — Claude's reasoning about music is more nuanced than any parameter filter.
Song hallucination is a real risk. LLMs confidently recommend songs that don't exist. We address this by validating every recommendation against Spotify's search API before presenting it. If Claude suggests a song that can't be found, it gets filtered out.
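A minimal sketch of that filter, with a stub in place of Spotify's real `GET /v1/search` call (all track names and ids here are placeholders):

```python
def validate_recommendations(candidates, spotify_search):
    """Drop any recommended track that can't be found in Spotify's catalog.

    `spotify_search(title, artist)` stands in for a call to Spotify's
    GET /v1/search endpoint; it returns a list of matching tracks,
    empty if nothing matches -- i.e. the song was likely hallucinated.
    """
    verified = []
    for track in candidates:
        hits = spotify_search(track["title"], track["artist"])
        if hits:  # found: attach the canonical Spotify entry
            verified.append({**track, "spotify": hits[0]})
        # not found: silently filter out the hallucination
    return verified

# Stub catalog standing in for the real search endpoint.
KNOWN = {("Real Song", "Real Artist"): {"id": "abc123"}}

def fake_spotify_search(title, artist):
    match = KNOWN.get((title, artist))
    return [match] if match else []

kept = validate_recommendations(
    [{"title": "Real Song", "artist": "Real Artist"},
     {"title": "Imaginary Song", "artist": "Nobody"}],
    fake_spotify_search,
)
```

Only the verified track survives; the invented one never reaches the user.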
The single-query trap. Our first approach — translate feeling into one search query — produced generic results. The breakthrough was parallel multi-angle search: the same feeling produces 5-7 completely different search strategies, each finding different songs. The diversity of search angles matters more than the precision of any single query.
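To show what "completely different search strategies" means concretely, here is a template-style sketch. In Attune, Claude generates the strategies itself rather than filling a template, so this function and its output strings are purely illustrative:

```python
def build_strategies(emotions, textures, references=None):
    """Turn one emotional profile into several distinct search angles.
    (Illustrative only: Attune has Claude write these, not a template.)"""
    strategies = [
        " ".join(emotions) + " songs",                 # mood keywords
        " ".join(textures) + " instrumental music",    # sonic texture
        f"songs that feel like {emotions[0]} reddit",  # community discussions
    ]
    for artist in references or []:                    # reference artists
        strategies.append(f"songs similar to {artist}")
    return strategies

angles = build_strategies(
    ["nostalgia", "warmth", "innocence"],
    ["acoustic", "sparse"],
)
```

Each angle surfaces a different slice of the catalog; the community-discussion query in particular finds songs that no metadata search would.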
Balancing abstraction with action. Too much decomposition and the agent over-thinks. Too little and you get surface-level results. We iterated on the system prompt to find the right depth — enough structure to reason well, enough directness to actually find songs.
Accomplishments that we're proud of
The clarification moment. When a user types "the taste of childhood" and the agent asks "do you mean music from your childhood, or music that feels like childhood?" — that's the moment where the system earns trust. It doesn't assume. It respects that your feelings belong to you.
The visible thinking. Most AI tools hide their reasoning. We show it. Users can see exactly how "Project Hail Mary, alone in space" gets decomposed into emotional dimensions and translated into search strategies. Multiple people during testing said the thinking process was more interesting than the results themselves.
Making the deprecation work for us. Losing Spotify's recommendation API could have killed the project. Instead, it pushed us to build a more transparent and more capable system — one that reasons about music rather than just filtering by numbers.
It actually finds good songs. Not just obvious ones. The multi-angle search turns up tracks that a single keyword search would never surface — songs from community discussions where someone said "this song feels exactly like being alone in a vast space."
What we learned
LLMs are better at feeling-to-music translation than we expected — but only when you give them structure. A bare prompt produces generic results. A structured decomposition scaffold (decompose → map to musical dimensions → generate diverse searches → rank against original) produces surprisingly specific and resonant recommendations.
Community data is more valuable than metadata. The most accurate emotional matches came from searching Reddit and music forums where real people describe how songs make them feel — not from genre tags or algorithmic features. Human emotional associations are the best training data for emotional search.
Transparency is a feature, not a debugging tool. We initially showed the thinking process for development purposes. Users loved it. Seeing HOW the AI translates their feeling builds trust and teaches them about music — it turns recommendation into a collaborative conversation.
Abstract input is an underserved problem. Everyone we talked to immediately related. "I've been trying to find music for this feeling for years." The problem is universal and unsolved. Current tools are optimized for people who already know what they want. Nobody is building for people who only know how they feel.
What's next for Attune
- Taste memory across sessions — Attune learns your personal emotional vocabulary over time. "Warm" means something different to everyone.
- Multi-platform support — Apple Music, YouTube Music, Tidal. Music feelings aren't platform-specific.
- Social sharing — share not just the playlist but the feeling: "here's the soundtrack to 'the taste of childhood' as I experience it."
- Deeper community integration — index more sources of human emotional associations with music (forums, reviews, social posts) to build a richer feeling-to-song mapping.
- "Surprise me" mode — Attune generates a random emotional prompt and finds music for it. Discover songs for feelings you haven't had yet.