RhythmWear
Inspiration
Digital music creation is powerful, but it often feels disconnected from physical expression. Traditional controllers and keyboards require learned interaction patterns that interrupt creative flow. RhythmWear was inspired by the idea that music should feel as natural as movement, where gestures, not buttons, become the instrument.
What it does
RhythmWear is a wearable music interface that transforms hand gestures into sound in real time. Flex sensors detect finger bends to trigger notes or samples, while motion data controls expressive parameters like pitch and dynamics. Instead of pressing keys, users shape audio through natural movement, creating a more intuitive and physical performance experience.
How we built it
The hardware uses an ESP32 paired with flex sensors and motion sensing. The ESP32 streams sensor data wirelessly over WebSockets to a Node.js server running on a Raspberry Pi.
The software stack includes:
- ESP32 Firmware (Arduino / C++) for sensor acquisition and filtering
- Node.js + Express + ws for real-time communication
- React + TypeScript frontend for UI and calibration
- Web Audio API for low-latency sound playback
A lightweight messaging protocol was designed to transmit flex states, button events, calibration data, and raw telemetry with minimal latency.
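A protocol like this might be modeled on the server as a small discriminated union plus a defensive parser, so a corrupted packet from the glove can never crash the pipeline. This is an illustrative sketch: the type names, fields, and message kinds are assumptions, not RhythmWear's actual wire format.

```typescript
// Illustrative message envelope for glove → server traffic.
// All names and fields here are assumptions, not the real protocol.
type GloveMessage =
  | { type: "flex"; finger: number; bent: boolean }                 // debounced flex state change
  | { type: "button"; id: number; pressed: boolean }                // button event
  | { type: "calibration"; finger: number; min: number; max: number }
  | { type: "telemetry"; readings: number[]; t: number };           // raw samples + timestamp

// Parse one incoming WebSocket text frame, rejecting anything malformed.
function parseGloveMessage(raw: string): GloveMessage | null {
  try {
    const msg = JSON.parse(raw);
    if (msg && ["flex", "button", "calibration", "telemetry"].includes(msg.type)) {
      return msg as GloveMessage;
    }
    return null;
  } catch {
    return null; // invalid JSON: drop the frame instead of throwing
  }
}
```

Keeping raw telemetry in its own message type lets the calibration UI subscribe to high-rate data without forcing the audio path to process it.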
Challenges we ran into
Sensor noise and instability became the primary technical challenge. Flex sensors naturally produce jitter and inconsistent readings, which caused unintended triggers and double-fires.
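A common fix for this class of problem, and roughly the shape of what such a system needs, is to smooth each reading with an exponential moving average and then apply hysteresis: two separated thresholds so a value hovering near a single cutoff cannot double-fire. The class name, alpha, and threshold values below are illustrative, not RhythmWear's actual tuning.

```typescript
// Jitter suppression for one flex sensor (illustrative sketch).
// An EMA smooths the raw reading; two separated thresholds (hysteresis)
// prevent a value oscillating around one cutoff from re-triggering.
class FlexTrigger {
  private smoothed = 0;
  private bent = false;

  constructor(
    private alpha = 0.3,     // EMA weight for the newest sample
    private bendAt = 0.7,    // must rise above this to trigger
    private releaseAt = 0.5, // must fall below this to release
  ) {}

  // Feed one normalized reading in 0..1; returns an event or null.
  update(raw: number): "bend" | "release" | null {
    this.smoothed = this.alpha * raw + (1 - this.alpha) * this.smoothed;
    if (!this.bent && this.smoothed > this.bendAt) {
      this.bent = true;
      return "bend";
    }
    if (this.bent && this.smoothed < this.releaseAt) {
      this.bent = false;
      return "release";
    }
    return null;
  }
}
```

The gap between `bendAt` and `releaseAt` is the dead band: widening it trades responsiveness for stability, which is exactly the tuning loop this kind of project spends time in.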
Real-time interaction also exposed timing and state-management issues across the hardware, networking, and browser audio layers. Handling WebSocket reconnection, smoothing noisy signals, and working within browser autoplay policies required multiple iterations.
Accomplishments that we're proud of
We built a fully functional end-to-end system connecting wearable hardware to a live web-based audio engine. The glove streams sensor data wirelessly, triggers sounds with low latency, and supports calibration workflows.
Most importantly, the project demonstrates that natural gestures can serve as a practical and expressive music interface.
What we learned
We learned that interactive hardware projects are largely systems-engineering problems. Reliable behavior depends less on raw sensor readings and more on filtering, hysteresis, state logic, and communication design.
We also gained hands-on experience with real-time networking, WebSocket architectures, and browser-based audio systems.
What's next for RhythmWear
Future work focuses on improving signal stability, refining gesture detection, and expanding expressive controls. Planned improvements include adaptive calibration, per-user sensitivity profiles, and richer sound design.
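Adaptive calibration along these lines can be as simple as tracking the observed range of each sensor and normalizing readings into it, so the glove adapts to each user's hand without a manual calibration pass. This is a sketch of the idea, not a committed design; the class name is hypothetical.

```typescript
// Per-sensor adaptive range tracker (illustrative sketch).
// Each raw reading widens the observed min/max, and readings are
// normalized into 0..1 within that learned range.
class AdaptiveRange {
  private min = Infinity;
  private max = -Infinity;

  normalize(raw: number): number {
    this.min = Math.min(this.min, raw);
    this.max = Math.max(this.max, raw);
    const span = this.max - this.min;
    return span > 0 ? (raw - this.min) / span : 0; // first sample maps to 0
  }
}
```

Per-user sensitivity profiles would then reduce to persisting these learned ranges, or blending them with stored defaults, per finger.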
Longer term, RhythmWear can evolve into a customizable platform where users upload their own sounds and mappings, turning wearable motion into a flexible performance medium.
Built With
- c++
- express.js
- mongodb
- node.js
- react
- typescript
- vite
- websocket