Inspiration

Creative expression shouldn't require expensive instruments, studios, or formal training. We saw how socioeconomic barriers and mental-health stigma leave many without an outlet for their emotions. By letting anyone turn everyday sounds into music, we can democratize art, help people process feelings, and foster connection across the globe.
What it does

- Capture and analyze any voice or ambient sound via microphone input.
- Extract audio features (tempo, energy, timbre) using React-compatible open-source software.
- Let users pick a mood, from "Energetic" to "Calm".
- Generate a custom soundtrack with Tone.js, tuned to those features and the chosen mood.
- Play back or export a royalty-free WAV file for personal use or sharing.
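The feature-extraction step can be illustrated with one of the simplest features involved: RMS energy. In the app this comes from Meyda running on live microphone buffers; the sketch below shows the same math on a plain array of samples (values in [-1, 1]) so it stands on its own.

```javascript
// Minimal sketch of one extracted audio feature: RMS (root-mean-square) energy.
// Meyda computes this per analysis frame; here we apply the formula directly
// to an array of PCM samples in the range [-1, 1].
function rmsEnergy(samples) {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  return Math.sqrt(sumSquares / samples.length);
}

// A louder buffer yields a higher energy value:
console.log(rmsEnergy([0.1, -0.1, 0.1, -0.1])); // ≈ 0.1
console.log(rmsEnergy([0.8, -0.8, 0.8, -0.8])); // ≈ 0.8
```

The resulting value (roughly 0 for silence, approaching 1 for loud input) is one of the signals that can steer the generated soundtrack.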
How we built it

- Frontend and backend in React: a live audio-recording UI, mood selection, and playback controls, built with help from Loveable (a free AI tool for web-app creation).
- Audio I/O via simpleaudio and standard WAV libraries for saving and streaming tracks.
- Collaboration: Sabrina led data research, Ana managed process and the backend, Seyeon guided musical design, and Arianna crafted the UI.
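To show how a mood choice could be wired to sound generation, here is a hypothetical mapping from mood to synthesis parameters of the kind one would hand to Tone.js. The mood names match the app, but the preset values, scales, and the `moodToParams` helper are illustrative assumptions, not Moodify's actual presets.

```javascript
// Hypothetical mood presets: tempo, a note pool, and an envelope attack time.
// These values are illustrative, not the app's real configuration.
const MOOD_PRESETS = {
  Energetic: { bpm: 140, scale: ['C4', 'D4', 'E4', 'G4', 'A4'], attack: 0.01 },
  Calm:      { bpm: 70,  scale: ['C3', 'Eb3', 'F3', 'G3', 'Bb3'], attack: 0.5 },
};

// Combine the chosen mood with the measured input energy (0..1),
// nudging the tempo up or down by up to 20%.
function moodToParams(mood, energy) {
  const preset = MOOD_PRESETS[mood] || MOOD_PRESETS.Calm;
  return { ...preset, bpm: Math.round(preset.bpm * (0.8 + 0.4 * energy)) };
}

console.log(moodToParams('Energetic', 1).bpm); // 168
console.log(moodToParams('Calm', 0).bpm);      // 56
```

In the real app, the returned `bpm` would set the Tone.js transport tempo and the `scale`/`attack` values would configure a synth; keeping this mapping as a pure function makes it easy to test without an audio context.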
Challenges we ran into

- Real-time audio handling in the browser.
- Limitations of Loveable's emotion-detection algorithm.
- Issues with customization features.
- Indecision between using React or a Python backend (with Magenta.js, OpenSMILE, or PyAudioLibrary).
- Coordinating a first-time hackathon team under a tight 48-hour deadline.
Accomplishments that we're proud of

- A working prototype, from microphone to finished track.
- The ability to export fully original, royalty-free music that sidesteps copyright risk.
- A polished, user-friendly interface.
- Pulling together as a cross-disciplinary team in our first hackathon and delivering a scalable MVP.
What we learned

- The intricacies of browser audio APIs and the limitations of Python audio-processing libraries.
- Best practices for integrating TensorFlow-based music models into a web service.
- The value of rapid prototyping and clear team roles when racing the clock.
What's next for Moodify

- Mobile support for on-the-go track creation.
- Real-time streaming so users can hear AI music as they speak.
- Community features: private/public track galleries and social sharing.
- An expanded mood library with finer emotional nuances and adaptive soundscapes.
- Partnerships with mental-health practitioners for guided therapy integrations.
Built With
- loveable
- meyda.js
- react
- tone.js