About MoodMuse 🌌

Inspiration

"Sometimes words aren't enough." This project was born from those 3 AM moments when you're drowning in feelings that language can't capture. You know that ache of trying to explain your emotional state to someone and watching them nod politely while completely missing the depth of what you're experiencing? That's the problem I wanted to solve.

I've always believed that emotions are synesthetic - they have colors, textures, sounds, and visual stories. When I'm feeling nostalgic, it's not just sadness; it's golden hour light filtering through childhood windows, it's the specific melancholy of Bon Iver, it's the visual poetry of memories that were never quite real.

The philosophical question driving this project: What happens when AI tries to understand the human heart? Can artificial intelligence capture not just what we feel, but the intricate stories and contexts that create those feelings?

MoodMuse is my attempt to bridge that gap between the complexity of human emotion and our limited ability to express it in words - a tool that doesn't optimize for efficiency but for vibes.

What it does

MoodMuse transforms your emotions into personalized Spotify playlists and dreamy AI-generated art, creating beautiful shareable moodboards that capture the depth of human feelings through music and visual storytelling.

The app honors emotions rather than just processing them - taking messy, complex, beautiful human feelings and reflecting them back in a way that says "yes, this matters, and you're not alone in feeling this."

How I built it

The Three-Phase Emotional Journey:

$$\text{Emotion Input} \rightarrow \text{AI Analysis} \rightarrow \text{Multi-modal Output}$$

  1. Express: Users describe their mood in natural language - as poetic, raw, or specific as they'd like
  2. Transform: AI analyzes the emotional context and creates:
    • Poetic interpretations that feel like your inner voice
    • Curated Spotify playlists with real tracks and 30-second previews
    • Four distinct AI-generated images in Pinterest-style aesthetics (lifestyle, nature, interior, fashion)
  3. Share: Generate unique shareable URLs with aesthetic names like moodmuse.app/board/dreamy-melody-42
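As a rough sketch, those aesthetic board names (like dreamy-melody-42) could come from a tiny slug generator; the word lists and the generateBoardSlug helper below are illustrative assumptions, not the actual implementation:

// Hypothetical sketch of aesthetic slug generation for share URLs
const MOODS = ['dreamy', 'golden', 'velvet', 'midnight', 'tender'];
const MOTIFS = ['melody', 'horizon', 'ember', 'reverie', 'bloom'];

function generateBoardSlug(): string {
  const mood = MOODS[Math.floor(Math.random() * MOODS.length)];
  const motif = MOTIFS[Math.floor(Math.random() * MOTIFS.length)];
  const num = Math.floor(Math.random() * 100); // e.g. "dreamy-melody-42"
  return `${mood}-${motif}-${num}`;
}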

Tech Stack

  • Frontend: Next.js 14 with TypeScript for type safety and modern React patterns
  • Styling: Tailwind CSS with custom animations and glassmorphism design
  • AI Integration: OpenAI GPT-4 for emotional analysis and DALL-E 3 for contextual image generation
  • Music: Spotify Web API for real track search, metadata, and preview URLs
  • Database: Supabase for storing shareable moodboards with Row Level Security
  • Deployment: Vercel with comprehensive environment variable management
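As an illustration of that environment variable management, a minimal startup check might look like the sketch below; the variable names are plausible assumptions, not necessarily the project's actual keys:

// Sketch: fail fast if required env vars are missing (names assumed)
const REQUIRED_ENV = [
  'OPENAI_API_KEY',
  'SPOTIFY_CLIENT_ID',
  'SPOTIFY_CLIENT_SECRET',
  'NEXT_PUBLIC_SUPABASE_URL',
  'SUPABASE_ANON_KEY',
] as const;

function assertEnv(): void {
  const missing = REQUIRED_ENV.filter((key) => !process.env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}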

Architecture Decisions

The app follows a service layer pattern with three core integrations:

// Service orchestration for multi-modal emotional output
const moodboardData = await Promise.all([
  openaiService.analyzeMood(userInput),    // GPT-4 emotional analysis
  spotifyService.findRealTracks(suggestions), // Music curation
  openaiService.generateImages(visualPrompt)   // DALL-E 3 imagery
]);

I chose client-side processing to keep the experience immediate and personal, with robust error handling and graceful fallbacks so a single failing API doesn't break the experience. The responsive design follows mobile-first principles with progressive enhancement.
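One way to express those graceful fallbacks as code is a small wrapper that lets each integration degrade on its own; the withFallback helper and the fallback values here are hypothetical sketches, not the app's actual API:

// Sketch of graceful degradation: if one service fails, substitute a
// fallback instead of failing the whole moodboard (helper is hypothetical)
async function withFallback<T>(task: Promise<T>, fallback: T): Promise<T> {
  try {
    return await task;
  } catch (err) {
    console.warn('Service degraded, using fallback:', err);
    return fallback;
  }
}

// Usage: each integration degrades independently; defaultAnalysis and
// placeholderImages are assumed placeholder values
const [analysis, tracks, images] = await Promise.all([
  withFallback(openaiService.analyzeMood(userInput), defaultAnalysis),
  withFallback(spotifyService.findRealTracks(suggestions), []),
  withFallback(openaiService.generateImages(visualPrompt), placeholderImages),
]);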

Challenges I ran into

1. The Context Problem

Challenge: Initial AI-generated images were generic (angry = red storms, sad = blue abstracts)

Solution: Completely rewrote the prompt engineering to focus on narrative context rather than abstract emotion labels:

// Before: "Create angry imagery"
// After: "Two people at dinner table, emotional distance palpable, 
//        one gesturing while other looks away, warm lighting 
//        contrasting with cold emotional space"
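A sketch of what that narrative-first prompt construction could look like; the buildScenePrompt helper and the MoodContext fields are illustrative assumptions:

// Illustrative sketch: build a narrative scene prompt instead of an
// abstract emotion label (names are assumptions)
interface MoodContext {
  emotion: string;        // e.g. "anger"
  situation: string;      // e.g. "an argument at the dinner table"
  sensoryDetail: string;  // e.g. "warm lighting, cold emotional space"
}

function buildScenePrompt(ctx: MoodContext): string {
  return [
    `A scene depicting ${ctx.situation},`,
    `conveying ${ctx.emotion} through body language and composition,`,
    `${ctx.sensoryDetail}, cinematic Pinterest-style photography`,
  ].join(' ');
}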

Accomplishments that I am proud of

Technical Excellence

  • Production-Ready Code: Comprehensive error handling, fallback systems, and TypeScript safety
  • API Orchestration: Successfully managing OpenAI, Spotify, and Supabase APIs with graceful degradation
  • Responsive Perfection: Seamless experience across mobile, tablet, and desktop

Emotional Impact

What I'm most proud of isn't the technical implementation - it's that MoodMuse actually makes people feel something. During testing, people would share their moodboards and say "this captures exactly how I'm feeling."

Design Philosophy

  • Human-First Technology: Built something that prioritizes authentic human expression over algorithmic optimization
  • Synesthetic Experience: Successfully translated the concept that emotions have colors, sounds, and visual stories into working software
  • Viral-Ready Aesthetic: Created shareable content that people genuinely want to show others

What I learned

  • Prompt Engineering is an Art Form: The difference between "angry imagery" and narrative context taught me that AI prompting requires deep empathy and storytelling skills
  • API Resilience: Managing multiple APIs taught me about graceful degradation - when one service fails, the others should carry the experience
  • AI & Emotion: Working with GPT-4 on emotional interpretation revealed both the profound potential and fascinating limitations of AI understanding human feelings
  • Design as Empathy: Every design decision was really about empathy - how do you make someone feel understood through pixels?
  • Technology for Connection: Building something that prioritizes human connection over metrics is actually harder than building for efficiency

What's next for MoodMuse

  • Mood Archaeology: Track emotional patterns over time and help users understand their emotional journeys
  • Collaborative Moodboards: Multiple people contributing to shared emotional experiences for couples, friend groups, or therapy sessions
  • Voice Mood Analysis: Capture emotional nuance from vocal tone and inflection
  • Emotion Detection from Photos: Analyze facial expressions and body language to suggest mood starting points
  • Collaborative AI: Let users teach the AI about their specific emotional vocabulary and metaphors
  • Multi-Model Image Pipeline: Integrate Midjourney, Stable Diffusion XL, and Adobe Firefly alongside DALL-E 3 to create diverse artistic styles
  • Style-Specific Model Routing: Automatically select the best model based on emotional context - DALL-E for lifestyle photography, Midjourney for abstract emotional landscapes, Stable Diffusion for surreal artistic interpretations
  • Real-Time Model Comparison: Generate the same prompt across multiple models and let users choose their preferred aesthetic, building a personalized style profile over time
  • Spotify Listening History Integration: Request user permission to analyze their actual listening patterns instead of relying on generic mood categories
  • Audio Feature Matching: Use Spotify's audio features API to match emotions to specific musical characteristics: $$\text{Mood Vector} = f(\text{valence}, \text{energy}, \text{danceability}, \text{acousticness})$$
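A sketch of how that matching could work; valence, energy, danceability, and acousticness are real fields from Spotify's audio features endpoint, while the distance-based matching below is a design assumption:

// Sketch: match an emotion-derived target vector to a track's Spotify
// audio features via Euclidean distance (matching logic is an assumption)
interface AudioFeatures {
  valence: number;       // 0..1, musical positivity
  energy: number;        // 0..1, perceived intensity
  danceability: number;  // 0..1
  acousticness: number;  // 0..1
}

function moodDistance(target: AudioFeatures, track: AudioFeatures): number {
  const dims: (keyof AudioFeatures)[] =
    ['valence', 'energy', 'danceability', 'acousticness'];
  return Math.sqrt(
    dims.reduce((sum, d) => sum + (target[d] - track[d]) ** 2, 0)
  );
}

// e.g. a hypothetical "wistful" target vector; sort candidate tracks
// by moodDistance(wistful, track.features) and keep the closest ones
const wistful: AudioFeatures =
  { valence: 0.3, energy: 0.35, danceability: 0.4, acousticness: 0.7 };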

Submission Track:

Track 3: The Creative Strand - Collaborate with the Machine
