Inspiration

This design was inspired by a simple frustration: randomly shuffled music rarely matches the moment.

What it does

Harmony turns motion into a live soundtrack. Swaying, spinning, pausing, and energetic movement each map to a different musical style and intensity. On top of the instrument layers, ElevenLabs generates wordless emotional vocals, like humming, sighs, and melodic “ooh/ah” phrases, that follow the vibe in real time.

How we built it

We first tried using our phones’ gyroscopes and accelerometers, but getting consistent IMU data and stable streaming across devices was unreliable. We pivoted to an Arduino Uno with an MPU6050 for predictable real-time sensing. The Uno reads 6-axis motion data at about 20 Hz, computes features like motion magnitude, stillness, and spin rate, and streams them over 115200-baud serial to a host program that drives the music layers and triggers ElevenLabs vocals.
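The per-frame feature computation described above can be sketched host-side in Python. The CSV frame format, units, and thresholds here are illustrative assumptions, not the actual firmware values:

```python
import math

# Assumed frame format from the Uno: "ax,ay,az,gx,gy,gz"
# (accelerometer in g, gyroscope in deg/s). Hypothetical, for illustration.

def parse_frame(line: str) -> tuple[float, ...]:
    """Parse one serial line into six floats."""
    return tuple(float(v) for v in line.strip().split(","))

def motion_features(ax, ay, az, gx, gy, gz):
    """Compute the features named in the text: motion magnitude,
    a stillness flag, and spin rate."""
    # Acceleration magnitude minus gravity (1 g) approximates motion energy.
    magnitude = abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0)
    # Spin rate: magnitude of the angular-velocity vector, in deg/s.
    spin_rate = math.sqrt(gx * gx + gy * gy + gz * gz)
    # "Still" when both motion energy and rotation sit below small thresholds
    # (0.05 g and 10 deg/s are placeholder values).
    still = magnitude < 0.05 and spin_rate < 10.0
    return magnitude, spin_rate, still

# Example: a near-resting frame parses as "still".
mag, spin, still = motion_features(*parse_frame("0.01,0.02,0.99,1.2,0.5,0.3"))
```

In the real pipeline these values arrive over serial (e.g. via pyserial) at about 20 Hz and feed the layer-selection logic.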

Challenges we ran into

Sensor noise and gyro drift caused jitter, so we added smoothing, thresholds, and cooldowns. We also had to keep end-to-end latency low so audio changes felt instant.
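The smoothing-plus-cooldown idea can be sketched as a small stateful filter. The class name, alpha, threshold, and cooldown length are all illustrative choices, not the project's actual parameters:

```python
class DebouncedTrigger:
    """Exponential smoothing plus a cooldown: a common way to keep a
    noisy IMU feature from retriggering audio changes every frame."""

    def __init__(self, alpha=0.2, threshold=0.3, cooldown_frames=40):
        self.alpha = alpha                    # smoothing factor (0..1)
        self.threshold = threshold            # fire when smoothed value exceeds this
        self.cooldown_frames = cooldown_frames  # ~2 s of suppression at 20 Hz
        self.smoothed = 0.0
        self.cooldown = 0

    def update(self, raw: float) -> bool:
        """Feed one raw sample; return True only when a trigger fires."""
        # Exponential moving average damps frame-to-frame jitter.
        self.smoothed = self.alpha * raw + (1 - self.alpha) * self.smoothed
        if self.cooldown > 0:
            self.cooldown -= 1          # still cooling down: suppress triggers
            return False
        if self.smoothed > self.threshold:
            self.cooldown = self.cooldown_frames
            return True
        return False
```

A sustained burst of motion then fires once, and repeats of the same gesture are ignored until the cooldown expires, which keeps the soundtrack from flickering between layers.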

Accomplishments that we're proud of

A clear gesture-to-sound mapping, a responsive feel, and a stable hardware pipeline after the Arduino pivot.

What we learned

How to turn raw IMU signals into usable features, and how important reliability and sound design are for a good demo.

What's next for Harmony

Make it wearable, add personalization, and smooth continuous vocal expression.
