Musical Oracle - About the Project
The Vision: Making AI Fun, Not Fearsome
This project was born from a simple belief: AI should be magical, not intimidating. People are rightfully concerned about AI being used as an excuse to replace human workers and human creativity. But that isn't what AI is truly about. AI should be an extension of human capability, one that lets us connect to the world in new ways and have creative experiences that were previously impossible. We wanted to show AI's true potential as a collaborative partner in making music, amplifying human creativity rather than replacing it.
The Team Story
What started as a family creative experiment became an ambitious journey into the future of human-AI interaction:
- Me (Zak Hussain): Led the vision and built the core Musical Oracle application, diving deep into context engineering and discovering both the strengths and limitations of Bolt.new
- Niko: Created a stunning sound visualizer that showcased how audio can become visual art (though we ran out of time to integrate it with the main app)
- Nick: Tackled the infrastructure challenges, researching Supabase for data persistence and Revenue Cat for potential monetization
- Liam Hussain (my 12-year-old brother, under guardian Fizal Hussain): Composed 6 original ambient tracks that greet users when they first enter the Musical Oracle experience
What Inspired Us
The Problem
Most AI interfaces feel cold and transactional. Music creation software is complex and intimidating. We saw an opportunity to bridge these worlds - creating an interface where your voice becomes the instrument and AI becomes your creative partner.
The Dream
Imagine talking to a floating, breathing sphere that understands your musical ideas and transforms them into real compositions. No complex software to learn, no musical training required - just your voice, your creativity, and an AI companion named Firefly.
How We Built It
The Creative Journey Through Multiple Applications
We didn't start with the Musical Oracle. We built multiple applications in Bolt.new, each one teaching us something new about immersive web experiences:
- Early prototypes explored different interaction paradigms
- Interface experiments tested various ways to make AI feel approachable
- Audio experiments pushed the boundaries of what's possible in web browsers
The Technical Architecture
Frontend Magic (React + TypeScript)
- 3D Floating Interface: Used Framer Motion and CSS transforms to create Firefly, a breathing sphere that serves as your AI companion
- Spatial UI Design: Elements flow smoothly across the screen, making the interface feel alive and responsive
- Real-time Audio Processing: Web Audio API captures and analyzes voice input with visual feedback
AI Brain (CrewAI + FastAPI)
- Specialized AI Crews: Four distinct AI agents working together:
- Rhythm Crew: Analyzes beatboxing and percussive elements
- Harmony Crew: Interprets chord progressions and harmonic content
- Melody Crew: Processes melodic lines and musical phrases
- Composer Crew: Combines everything into cohesive compositions
- Emotion-to-Sound Mapping: Built a comprehensive database linking 70+ emotions to 500+ real instruments from the Dirt-Samples library
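The hand-off between the four crews can be sketched as a simple pipeline. This is an illustrative stand-in in plain Python, not the actual CrewAI code: each "crew" is modeled as a function that annotates a shared analysis dict, and every function name and stubbed value here is hypothetical:

```python
# Hypothetical stand-in for the four crews described above. In the real
# app these steps are delegated to CrewAI LLM agents; here each crew is
# a plain function that adds its analysis to a shared dict.

def rhythm_crew(analysis: dict) -> dict:
    # Analyze beatboxing / percussive elements (stubbed result).
    analysis["rhythm"] = {"bpm": 120, "pattern": "bd sd bd sd"}
    return analysis

def harmony_crew(analysis: dict) -> dict:
    # Interpret harmonic content (stubbed result).
    analysis["harmony"] = {"key": "C minor", "chords": ["Cm", "Ab", "Eb", "Bb"]}
    return analysis

def melody_crew(analysis: dict) -> dict:
    # Process melodic lines and phrases (stubbed result).
    analysis["melody"] = {"notes": ["c4", "eb4", "g4"]}
    return analysis

def composer_crew(analysis: dict) -> dict:
    # Combine everything into one cohesive composition description.
    analysis["composition"] = {
        "tempo": analysis["rhythm"]["bpm"],
        "key": analysis["harmony"]["key"],
        "layers": ["rhythm", "harmony", "melody"],
    }
    return analysis

def run_pipeline(voice_input: str) -> dict:
    analysis = {"input": voice_input}
    for crew in (rhythm_crew, harmony_crew, melody_crew, composer_crew):
        analysis = crew(analysis)
    return analysis

result = run_pipeline("boots-and-cats beatbox clip")
print(result["composition"]["key"])  # the key chosen by the harmony crew
```

The point of the shape, rather than the stubbed values, is that the composer crew only runs after the three analysis crews have each contributed their piece.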
Audio Engine (Strudel.js + Dirt-Samples)
- Browser-based Music Generation: Compositions play live without external dependencies
- Professional Sound Library: Access to hundreds of high-quality samples and synthesizers
- Code-to-Music Translation: AI-generated patterns become playable music in real-time
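Conceptually, the code-to-music step emits a Strudel snippet that the browser evaluates live. A rough sketch of what such a generator might look like (the sample names `bd`, `sd`, `hh` are real Dirt-Samples banks; the function itself is a hypothetical illustration, not the app's actual generator):

```python
def to_strudel_pattern(samples: list[str], tempo: int) -> str:
    """Build a minimal Strudel snippet: a sample sequence plus a tempo setting.

    `samples` are Dirt-Samples bank names; Strudel's s() plays them in
    sequence using mini-notation, and setcps() sets cycles per second.
    """
    sequence = " ".join(samples)
    # Treat one cycle as one 4-beat bar, so cps = bpm / 60 / 4.
    cps = tempo / 60 / 4
    return f's("{sequence}")\nsetcps({cps})'

print(to_strudel_pattern(["bd", "sd", "hh", "sd"], 120))
```

The backend only ever ships a string like this; all synthesis happens in the listener's browser, which is what keeps playback free of external dependencies.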
The Data Science Behind the Magic
We created an extensive emotion-to-instrument mapping system:
- 70+ emotional categories (from "energetic excitement" to "melancholic introspection")
- 500+ instrument samples from the professional Dirt-Samples library
- Smart selection algorithms that match human emotional expression to appropriate sounds
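One simple way to picture the selection step is matching free-text emotional descriptions to the closest category by word overlap. This is a toy illustration under that assumption; the mapping entries below are examples, and the real database covers 70+ emotions and 500+ samples:

```python
# Toy emotion-to-instrument lookup. Each emotional category maps to
# example Dirt-Samples bank names; free-text input is matched to the
# closest category by shared words. Illustrative only.

EMOTION_TO_SAMPLES = {
    "energetic excitement": ["bd", "cp", "arpy"],
    "melancholic introspection": ["sitar", "space"],
    "calm focus": ["moog", "glitch"],
}

def pick_samples(description: str) -> list[str]:
    words = set(description.lower().split())

    # Score each category by how many words it shares with the input.
    def overlap(category: str) -> int:
        return len(words & set(category.split()))

    best = max(EMOTION_TO_SAMPLES, key=overlap)
    return EMOTION_TO_SAMPLES[best]

print(pick_samples("a calm focus study session"))  # ['moog', 'glitch']
```

A production version would need fuzzier matching (synonyms, embeddings) than raw word overlap, but the data shape stays the same: emotion categories keyed to lists of sample banks.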
What We Learned
Technical Discoveries
- Bolt.new's Strengths: Incredible for rapid prototyping and UI development
- Bolt.new's Limitations: Complex state management and audio processing require careful architecture
- Context Engineering: Learned to craft precise prompts that guide AI development effectively
- Web Audio API: Pushed browser capabilities to their limits with real-time audio processing
Creative Insights
- Spatial UI Design: How positioning and movement can make interfaces feel alive
- AI Personality: Giving AI systems warmth and character through voice and visual design
- Progressive Disclosure: Guiding users through complex interactions step-by-step
The Power of Constraints
Having only the final week for full development actually pushed us to build something better. Constraints forced us to:
- Focus on core functionality over feature bloat
- Create clean, intuitive user flows
- Prioritize the most impactful elements
Challenges We Overcame
Technical Hurdles
- Real-time Audio Processing: Making voice recording work smoothly across different browsers and devices
- AI Integration: Connecting multiple AI crews to work together seamlessly
- Performance Optimization: Ensuring smooth animations while processing audio and generating music
- Cross-browser Compatibility: Web Audio API behaves differently across platforms
Creative Challenges
- Making AI Approachable: Designing Firefly to feel like a friendly companion, not a cold interface
- Balancing Complexity: Hiding sophisticated AI systems behind simple, magical interactions
- Audio-Visual Synchronization: Making the visual experience match the audio processing in real-time
Time Constraints
- Integration Challenges: We created amazing individual components but ran out of time to integrate everything (like Niko's visualizer)
- Feature Prioritization: Had to choose between polish and additional features
- Testing Limitations: Limited time for extensive user testing and refinement
The Impact
What We Created
The Musical Oracle represents a new paradigm for human-AI interaction:
- Accessible Music Creation: Anyone can create music using just their voice
- Emotional AI: Technology that responds to human emotion and creativity
- Immersive Web Experiences: Pushing the boundaries of what web applications can feel like
What We Proved
- AI can be approachable when designed with empathy and creativity
- Complex technology can feel simple through thoughtful interface design
- Collaboration amplifies creativity - human voice + AI intelligence = musical magic
Looking Forward
This project is just the beginning. We've laid the foundation for:
- More sophisticated emotion recognition in AI systems
- Advanced audio-visual integration (connecting with Niko's visualizer work)
- Expanded musical capabilities using Nick's infrastructure research
- Community features where people can share and remix their AI-generated compositions
The Philosophy
At its heart, the Musical Oracle embodies our belief that technology should amplify human creativity, not replace it. Every design decision was made to ensure that users feel empowered, not intimidated. The AI doesn't compose music for you - it composes music with you, using your voice as the starting point for something beautiful.
We built more than an application; we built a bridge between human creativity and artificial intelligence, showing that the future of AI isn't about replacement, but about collaboration.
The Musical Oracle: Where your voice becomes music, and AI becomes your creative partner.
Built With
- amazon-web-services
- bolt.new
- crewai
- ec2
- elevenlabs
- fastapi
- linux
- python
- react
- strudel
- typescript
- ubuntu
- uv
