Inspiration

The idea behind MoodTunes AI came from a simple question:

“What if music could match not just what we say, but what we feel—and even what we see?”

We wanted to create a fun and intuitive way for people to discover music that resonates with their current vibe, using the power of AI and natural language/image understanding.

What it does

MoodTunes AI is a full-stack web app that lets users:

✍️ Type how they're feeling and receive a mood-based Spotify playlist.

🖼️ Paste an image URL (like a chill photo or a party scene) and get a themed playlist based on the visual tags detected.

Behind the scenes:

🎧 The backend uses Azure AI services to analyze both text (with Text Analytics) and images (with Computer Vision).

🎵 The most relevant and popular Spotify playlist is fetched via the Spotify Web API.
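The text pathway can be sketched as a small lookup from an Azure sentiment label to a Spotify search query. The function name and mood table below are illustrative assumptions, not the app's actual code:

```python
# Illustrative sketch: translate an Azure Text Analytics sentiment label
# ("positive", "negative", "neutral", "mixed") into a Spotify search query.
# The MOOD_QUERIES table is an assumption for illustration only.

MOOD_QUERIES = {
    "positive": "feel good hits",
    "negative": "sad songs",
    "neutral": "lo-fi chill",
    "mixed": "indie mix",
}

def sentiment_to_query(sentiment: str) -> str:
    """Map a sentiment label to a playlist search query, falling
    back to a neutral vibe when the label is unrecognized."""
    return MOOD_QUERIES.get(sentiment.lower(), MOOD_QUERIES["neutral"])
```

The query string would then be passed to Spotify's playlist search endpoint to pick the most popular match.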

How we built it

We built a React frontend that talks to a Flask backend. The backend sends user text to Azure Text Analytics for sentiment analysis and image URLs to Azure Computer Vision for tag extraction, then queries the Spotify Web API for the most relevant popular playlist.

Challenges we ran into

Interpreting Azure’s opinion mining and tag extraction effectively for playlist generation.

Mapping generic tags (like “indoor” or “group”) to music genres in a meaningful way.

Making sure the frontend stayed responsive and easy to use while juggling two different AI pathways.

Handling fallback cases where the AI could not detect a strong sentiment or visual clue.
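One way to handle both the tag-to-genre mapping and the weak-signal fallback is a confidence-gated lookup. The tag names, genre choices, and threshold below are assumptions sketched for illustration, not our exact mapping:

```python
# Hypothetical mapping from Computer Vision tags to music genres.
# Generic tags like "indoor" or "group" get a deliberate default genre,
# and anything unrecognized or low-confidence falls back to "chill".

TAG_TO_GENRE = {
    "beach": "tropical house",
    "party": "dance pop",
    "indoor": "acoustic",
    "group": "party anthems",
    "night": "synthwave",
}

def tags_to_genre(tags, min_confidence=0.5, fallback="chill"):
    """Pick the genre for the highest-confidence recognized tag;
    return the fallback when no strong visual clue is detected.

    tags: iterable of (name, confidence) pairs from Computer Vision.
    """
    best = None
    for name, confidence in tags:
        if confidence >= min_confidence and name in TAG_TO_GENRE:
            if best is None or confidence > best[1]:
                best = (name, confidence)
    return TAG_TO_GENRE[best[0]] if best else fallback
```

The fallback branch is what keeps the app from returning nothing when the AI is unsure.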

Accomplishments that we're proud of

What we learned

How to integrate Azure AI services for multi-modal input (text and images).

How to work with the Spotify API, including token handling and playlist queries.

How to structure and connect a React + Flask full-stack app.

How to dynamically map visual tags to music themes.
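The token handling we learned boils down to one POST in Spotify's client-credentials flow: a Base64-encoded `client_id:client_secret` pair in a Basic auth header. A minimal sketch of building that request (credential values here are placeholders):

```python
import base64

# Spotify's client-credentials token endpoint.
TOKEN_URL = "https://accounts.spotify.com/api/token"

def build_token_request(client_id: str, client_secret: str):
    """Build the headers and form body for Spotify's client-credentials
    token request; the caller POSTs these to TOKEN_URL and reads the
    access_token from the JSON response."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {"Authorization": f"Basic {creds}"}
    body = {"grant_type": "client_credentials"}
    return headers, body
```

The returned access token is then sent as a `Bearer` header on playlist search calls (e.g. `GET https://api.spotify.com/v1/search?type=playlist`).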

What's next for MoodTunes AI

Built With

React, Flask, Azure Text Analytics, Azure Computer Vision, Spotify Web API
