Inspiration

We wanted to blur the line between listener and creator. Music tools often stop at beats or stems—Muse Master was inspired by the idea of directing a full AI artist, complete with voice, story, style, and vision.

What it does

Muse Master lets users create original songs end-to-end: define a vocal persona, choose genres and moods, write a creative prompt, and generate vocals, lyrics, composition, and album art—all in one flow.
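The flow above can be sketched as a few data shapes. These interfaces are illustrative only; the actual field names in Muse Master may differ.

```typescript
// Hypothetical shapes for the song-creation flow; names are assumptions,
// not Muse Master's real types.
interface VocalPersona {
  name: string;
  voiceStyle: string;   // e.g. "warm alto", "gritty baritone"
  backstory: string;
}

interface SongRequest {
  persona: VocalPersona;
  genres: string[];
  moods: string[];
  prompt: string;       // the user's creative brief
}

// A request is complete once a persona, at least one genre,
// and a non-empty prompt exist.
function isReadyToGenerate(req: SongRequest): boolean {
  return (
    req.persona.name.length > 0 &&
    req.genres.length > 0 &&
    req.prompt.trim().length > 0
  );
}
```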

How we built it

We built a React + TypeScript app powered by Google Gemini 3 Pro preview. Gemini handles creative planning, vocal generation via TTS, and cover art creation. Audio is processed client-side, with state managed via React hooks and persistence through localStorage.
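Since there is no backend, persistence lives entirely in `localStorage`. A minimal sketch of that layer, assuming state is stored as JSON under string keys (the in-memory fallback here is just to keep the sketch runnable outside a browser):

```typescript
// Fallback store for non-browser environments; in the app itself,
// localStorage is always available.
const memoryStore = new Map<string, string>();

function saveState<T>(key: string, value: T): void {
  const json = JSON.stringify(value);
  if (typeof localStorage !== "undefined") {
    localStorage.setItem(key, json);
  } else {
    memoryStore.set(key, json);
  }
}

function loadState<T>(key: string, fallback: T): T {
  const json =
    typeof localStorage !== "undefined"
      ? localStorage.getItem(key)
      : memoryStore.get(key);
  if (json == null) return fallback;
  try {
    return JSON.parse(json) as T;
  } catch {
    return fallback; // corrupted entry: ignore rather than crash the app
  }
}
```

In the app, a `useEffect` hook would call `saveState` whenever the song list changes and `loadState` once on mount.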

Challenges we ran into

Reliably coordinating multi-step AI generation was tricky, especially handling partial failures mid-pipeline. Audio processing and memory management in the browser also demanded care, as did keeping a large monolithic component maintainable.
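One way to tolerate partial failures in a pipeline like this is to retry each step a few times and let optional steps (such as cover art) degrade to `null` instead of aborting the whole song. A hedged sketch, not the exact code we shipped:

```typescript
// Retry a generation step up to `attempts` times, rethrowing the last error.
async function withRetry<T>(step: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Wrap non-essential steps so a persistent failure yields null
// and the song still ships without that asset.
async function optionalStep<T>(step: () => Promise<T>): Promise<T | null> {
  try {
    return await withRetry(step);
  } catch {
    return null;
  }
}
```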

Accomplishments that we're proud of

  • Fully AI-generated songs with vocals, lyrics, and artwork
  • Persona-driven vocal performances that feel intentional
  • A complete creative pipeline with no backend
  • A polished, Spotify-inspired listening experience

What we learned

We learned how powerful structured prompting can be, how to manage complex async workflows in the frontend, and where client-side AI apps hit real scalability and security limits.
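To illustrate what we mean by structured prompting: rather than sending one free-form request, the app assembles the prompt from labeled sections so the model's output stays predictable. The section names below are hypothetical, not our exact template.

```typescript
// Assemble a labeled, sectioned prompt from the user's choices.
function buildSongPrompt(
  persona: string,
  genres: string[],
  mood: string,
  brief: string
): string {
  return [
    `PERSONA: ${persona}`,
    `GENRES: ${genres.join(", ")}`,
    `MOOD: ${mood}`,
    `BRIEF: ${brief}`,
    'FORMAT: return JSON with keys "title", "lyrics", "styleNotes".',
  ].join("\n");
}
```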

What's next for Muse Master

We plan to refactor into smaller components, add routing and better state management, introduce error handling and loading states, and explore a backend for secure API usage, sharing, and collaboration.
