Important Notes
- Before opening the app, make sure you are connected to the internet.
- Currently, a quark takes about 1-2 minutes to fully generate your soundtrack — stay patient :)
- You can pinch with both hands and stretch to resize a quark, making it bigger or smaller to fit your physical space.
- You can also clap near a quark to pause or resume it.
- The more colorful your space is — the more colorful your quark will be!
- Have fun!
Inspiration
We were fascinated by the idea that every room carries a mood — shaped by lighting, space, texture, and movement. Spatial computing already blends digital and physical worlds, so we wondered: what if your environment could compose its own soundtrack?
What it does
Quark turns your surroundings into a generative soundscape. Walk into a café and hear warm lo-fi tones, step into your kitchen and get airy ambience, or relax in your bedroom with calm pads. The app reads the mood of your environment and composes music and visuals that match it.
How we built it
- Passthrough Camera API + OpenAI Vision to interpret room mood and extract semantic cues.
- OpenAI Whisper for voice transcription
- Audio-visual contextual analysis (K-means + FFT) maps environmental features to color palettes and dynamic parameters that modulate particles and sound
- Suno AI for generative music creation, seeded by the environment’s extracted mood vector
- OpenAI Speech API for user guidance
- Meta XR Audio & Interaction
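The K-means + FFT step above can be sketched roughly as follows. This is not the app's actual pipeline, just a minimal numpy illustration of the two techniques named: K-means clustering over camera pixels to extract a color palette, and FFT band energies as coarse audio features that could modulate particles and sound. All function names and band ranges are our own assumptions.

```python
import numpy as np

def kmeans_palette(pixels, k=4, iters=20, seed=0):
    """Tiny K-means over RGB pixels; returns k cluster centers as a palette.

    `pixels` is any array reshapeable to (N, 3), e.g. a camera frame.
    """
    rng = np.random.default_rng(seed)
    pts = pixels.reshape(-1, 3).astype(float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster goes empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return centers

def band_energies(samples, rate, bands=((20, 250), (250, 2000), (2000, 8000))):
    """Sum FFT magnitudes per frequency band -> coarse low/mid/high energies."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in bands])
```

A hypothetical usage: feed `kmeans_palette(frame, k=4)` the current passthrough frame to tint the particle system, and map the normalized output of `band_energies` onto particle size, speed, and emission rate each audio frame.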
Challenges we ran into
- Latency constraints of generative audio
- Balancing a clean, minimal UX against a complex system with many moving parts
- Tuning the particle system so it feels alive without overwhelming the scene
- Going through several iterations of CPU/GPU optimization
What's next for just vibes
Stay tuned :)
Built With
- openai
- suno
- unity
- whisper

