Inspiration

What it does

Shapeways is a gesture-based music and art generator. It's interactive art that explores new ideas and technologies in music and user interface design. Part multimedia sandbox, part installation art for the home, and part exploration of new forms of expression and interaction, it's many things.

How it works

Shapeways uses P5.js pose tracking to follow your wrists and head and maps each to one of six segments of the screen. Those positions drive calculations that feed both the visual and musical components. On the musical side, "performance seeds" are generated: short collections of notes whose pitches and durations are derived from where your wrists and head sit relative to those six segments of the screen. Each performance seed is then sent in API calls to both MusicVAE and MusicRNN checkpoints, and the responses are used to create short, dynamic melodic loops inspired by the seed. These loops are manipulated, looped, and played on two Tone synths, which are routed through Tone.js filters and finally to the speakers. A loop keeps playing until the camera detects a head or hand in a different segment of the screen, at which point a new seed is crafted from the new values and a new melody and counterpart are generated. The end result is a sonic experience that responds melodically to your movements and gestures in a very intuitive and fun way.
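
To make that pipeline concrete, here is a rough sketch (not the project's actual code, and covering only the MusicRNN half of the generation step): it maps a normalized tracked position to one of six screen segments, builds a small quantized "performance seed" from it, asks a Magenta MusicRNN checkpoint to continue the seed, and plays the result on a Tone.js synth routed through a filter. The segment layout, pitch/duration mapping, and helper names are illustrative assumptions.

```javascript
// Rough sketch, not the project's actual code.
import * as mm from '@magenta/music';
import * as Tone from 'tone';

// Map a normalized (0..1) wrist/head position to a segment index 0..5
// (assumed 3-column x 2-row layout).
function segmentFor(x, y) {
  const col = Math.min(2, Math.floor(x * 3)); // three columns
  const row = Math.min(1, Math.floor(y * 2)); // two rows
  return row * 3 + col;
}

// Build a tiny quantized NoteSequence whose pitches and durations depend on the segment.
function buildSeed(segment) {
  const root = 60 + segment * 2;          // illustrative pitch mapping
  const len = segment % 2 === 0 ? 2 : 4;  // illustrative duration mapping (in steps)
  return {
    notes: [
      { pitch: root, quantizedStartStep: 0, quantizedEndStep: len },
      { pitch: root + 4, quantizedStartStep: len, quantizedEndStep: len * 2 },
    ],
    totalQuantizedSteps: len * 2,
    quantizationInfo: { stepsPerQuarter: 4 },
  };
}

const rnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);
const rnnReady = rnn.initialize(); // load the checkpoint once

const filter = new Tone.Filter(800, 'lowpass').toDestination();
const synth = new Tone.Synth().connect(filter);

// Called whenever the tracker reports a wrist/head position in a new segment.
async function playSegment(x, y) {
  await rnnReady;
  const seed = buildSeed(segmentFor(x, y));
  const melody = await rnn.continueSequence(seed, 16, 1.1); // 16 steps, temperature 1.1
  melody.notes.forEach((note, i) => {
    // In the real project these notes are looped until a new segment is detected.
    synth.triggerAttackRelease(Tone.Frequency(note.pitch, 'midi'), '8n', Tone.now() + i * 0.25);
  });
}
```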

How we built it

We built the project with React components to isolate concerns, handle state management efficiently, and respond to changes in the tracking data. We used Node.js on the back end, P5 to track pose information, and Magenta to generate music with machine learning.
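
As an illustration of that component structure (not the actual code), the sketch below shows a React component that keeps the current screen segment in state and only notifies the music layer when the tracked segment changes. The `subscribeToPose` prop and `PerformanceController` name are hypothetical.

```javascript
// Illustrative React sketch, not the project's actual components.
import { useEffect, useState } from 'react';

// Same assumed 3-column x 2-row segment mapping described in "How it works".
const segmentFor = (x, y) =>
  Math.min(1, Math.floor(y * 2)) * 3 + Math.min(2, Math.floor(x * 3));

export function PerformanceController({ subscribeToPose, onSegmentChange }) {
  const [segment, setSegment] = useState(null);

  useEffect(() => {
    // subscribeToPose (hypothetical) delivers normalized (0..1) wrist/head coordinates.
    const unsubscribe = subscribeToPose(({ x, y }) => {
      const next = segmentFor(x, y);
      // Update state only when the tracked point crosses into a new segment.
      setSegment((prev) => (prev === next ? prev : next));
    });
    return unsubscribe;
  }, [subscribeToPose]);

  useEffect(() => {
    // Crafting the new seed and melody happens downstream (Magenta + Tone.js).
    if (segment !== null) onSegmentChange(segment);
  }, [segment, onSegmentChange]);

  return null; // audio logic only; the visual components render separately
}
```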

Challenges we ran into

Above all else, one of the biggest challenges was figuring out how to do everything we wanted using only the camera tracking data as input. It took a lot of calculation, creative coding, and experimentation to dynamically generate music seeds that produced the quality of generated music we were looking for. Once we were happy with the generated music, striking the right balance of variety and repetition based on your movements (or lack thereof) took a great deal of trial and error.

Accomplishments that we're proud of

We had an ambitious and fairly abstract vision for the project. We did a great job of communicating and collaborating to ensure a smooth chain from camera input to audiovisual output. Getting all of the cutting-edge technologies to play nicely together took no small amount of work.

What we learned

Various members learned a lot about machine learning, music theory, React, P5, Magenta, finding creative ways to translate data into synaesthetic experiences, and figuring out workarounds to create fluid, responsive, dynamically generated art.

What's next for Shapeways

We have many plans. We already set up functions to change keys and melodies, but haven't yet implemented the gestures to trigger those changes. We're also planning to add sensor-influenced, constantly changing low-end drones to the sonic experience. And we'd like the effects-chain parameters to change as the tracking data changes, not just the synths themselves.
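
A hedged sketch of that planned effects-chain and drone idea, assuming normalized (0..1) tracking values and illustrative parameter ranges (not tuned project values): the tracking data ramps a Tone.js filter cutoff and a low-end drone oscillator, rather than only shaping the synth voices.

```javascript
// Illustrative sketch of the planned feature, not implemented project code.
import * as Tone from 'tone';

const filter = new Tone.Filter(800, 'lowpass').toDestination();
const drone = new Tone.Oscillator(55, 'sawtooth').connect(filter).start();

// Called on every tracking update with normalized (0..1) coordinates.
function onTrackingUpdate(x, y) {
  // x sweeps the filter cutoff; y detunes the low-end drone. Ranges are guesses.
  filter.frequency.rampTo(200 + x * 4000, 0.3);
  drone.frequency.rampTo(40 + y * 40, 0.5);
}
```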
