When faced with the challenge of applying creative machine learning in the context of music, we immediately thought about how we respond to music through dancing. As music lovers and dance enthusiasts, we wanted to create an application that anyone can enjoy regardless of their musical expertise. We are ultimately trying to blur the line between music listeners and dancers by using ML.

What it does

While their music plays into the microphone input, the user busts a couple of moves in front of the camera. The music gets uploaded to the backend server, where it is split into chunks while the body movements are analyzed.

Each piece of music is then paired with a certain body part. When the user starts dancing, snippets get rearranged or remixed based on the user's body movements and become available to them as an MP3 download.
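The pairing-and-remix idea can be sketched as follows. This is an illustration only: the body-part names, the fixed part-to-chunk mapping, and the function name are assumptions, and the real app works on pydub audio snippets and PoseNet keypoints rather than these stand-in values.

```python
# Illustrative sketch: each audio chunk index is paired with a body part
# up front, and the dancer's movement order determines the remix order.
PART_TO_CHUNK = {"left_arm": 0, "right_arm": 1, "left_leg": 2, "right_leg": 3}

def remix_order(movement_sequence):
    """Turn a sequence of detected body-part movements into a chunk order.

    movement_sequence: body-part names in the order the dancer moved them.
    Parts with no paired chunk are skipped.
    """
    return [PART_TO_CHUNK[part] for part in movement_sequence
            if part in PART_TO_CHUNK]

# A dancer who leads with the right arm, then both legs, rearranges the song:
print(remix_order(["right_arm", "left_leg", "right_leg", "head"]))
# -> [1, 2, 3]
```

Concatenating the chunks in this new order yields the remixed track.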

This novel application makes for a more interactive user experience: the user takes control of culturally powerful, symbolic songs that make anyone dance. With this tool, the user's dance itself gives them the ability to mix up the song and alter it however they would like.

How we built it

We developed this application based on the already proven concept (as seen in Google's Body Synth) that the body can be an instrument, but we've taken it further to give the user the power to cut up an existing piece of music as they dance. We used PoseNet and a custom-designed motion detection algorithm to analyze the user's dancing. The machine-learning model, built with TensorFlow, analyzes movement and pairs it with a particular piece of music. The frontend was built with HTML, JS, and various SVGs developed using JS frameworks. The song the user plays is uploaded to a backend Flask server using audiorecorder.js and HTTP GET and POST requests. Once on the backend, the audio is broken into snippets using Python's pydub library along with ffmpeg. The user's movements, along with the song they danced to, are then sent to the frontend, where the movements are analyzed with the Magenta library and a tune is developed. The altered audio is then available to the user as an MP3 download.
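The splitting step can be sketched like this. pydub's AudioSegment slices by milliseconds (song[start:end]), so computing the chunk boundaries is the core of the split; the chunk length and the function name here are assumptions for illustration, not our exact values.

```python
def chunk_boundaries(duration_ms, chunk_ms):
    """Return (start, end) millisecond pairs covering the whole song.

    pydub AudioSegments slice by milliseconds, so each pair maps to one
    snippet via song[start:end]; the final chunk may be shorter.
    """
    return [(start, min(start + chunk_ms, duration_ms))
            for start in range(0, duration_ms, chunk_ms)]

# A 10-second song cut into 3-second snippets:
print(chunk_boundaries(10_000, 3_000))
# -> [(0, 3000), (3000, 6000), (6000, 9000), (9000, 10000)]
```

Each snippet can then be exported (pydub hands the encoding to ffmpeg) or held in memory for remixing.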

Challenges we ran into

Magenta came in last minute and proved challenging, but our failure to use it easily is itself valuable to this hackathon, which is designed to get us to test out this software.

We are proud of producing a valuable project that provides useful information to the art/tech world going forward; even our hard work 'testing' Magenta has meaning!

We faced various struggles in our project, including finding a way to merge our code together and sending it to the backend. For some of us, it was our first time using a Flask server, and there was a steep learning curve for HTTP GET and POST requests. Combining the snippets of audio was also a difficult challenge, since the pydub library doesn't offer many straightforward features to make this possible.

Accomplishments that we're proud of

Eyve: I worked on PoseNet and movement detection. This was interesting because all of my previous work in computer vision was done in Python using ResNet50, so working in JS was an interesting challenge for this project. I learned a lot about coding in JS and working on the front end of a web app for this project.
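As a rough illustration of the kind of motion detection involved: PoseNet returns per-frame keypoint positions, and comparing consecutive frames reveals which parts moved. Our actual implementation is in JS; this Python sketch, including the keypoint names and the threshold value, is an assumption for illustration.

```python
import math

def moving_parts(prev_keypoints, curr_keypoints, threshold=15.0):
    """Return names of keypoints that moved more than `threshold` pixels
    between two PoseNet-style frames, given as {name: (x, y)} dicts."""
    moved = []
    for name, (x1, y1) in curr_keypoints.items():
        if name not in prev_keypoints:
            continue  # keypoint not detected in the previous frame
        x0, y0 = prev_keypoints[name]
        if math.hypot(x1 - x0, y1 - y0) > threshold:
            moved.append(name)
    return moved

prev = {"leftWrist": (100, 200), "rightWrist": (300, 200)}
curr = {"leftWrist": (100, 205), "rightWrist": (340, 230)}
print(moving_parts(prev, curr))
# -> ['rightWrist']
```

In practice, smoothing over several frames and weighting by PoseNet's per-keypoint confidence scores helps suppress jitter.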

Meredith: One of the interesting things we are doing with the UI is importing the graphics as SVG elements and then manipulating them as elements of the DOM via jQuery. This was a new technique for me, as previously I had either used canvas API calls to generate graphics or libraries like p5. This was a fantastic way to iterate quickly with designers and with design elements that export to SVG.

Hyma: I worked on both the frontend and backend web interfaces: taking in audio through the mic, sending it to the backend via HTTP GET/POST requests, then splitting the audio into separate chunks using Python. I also made it possible to construct tunes that can be played on the frontend based on analysis of body movements.

Vaidehi: I was the ever-ready innovator, with my beginner-creative-coder eyes focused on Slack so as to work with my team on getting 'creative' and storified with the code. I also prolifically churned out heaps of imagery and animations (even though I am a beginner with Adobe!) to construct a disruptive, carnivalesque aesthetic that combines the flow of music with the fluid movements of a dancer, and of course with bodily fluid! I am proud of the contrast between 'machine learning' and Rabelaisian references!

What we learned

Vaidehi: During this hackathon the biggest leap of faith was making an SVG that could be animated via PoseNet, which entailed a painstaking night of feeling like a beginner puppet maker in Illustrator, but the end result was just as zany and creatural as I could have dreamed of! I also made a mock-up of a website in Adobe XD, swiftly developing skills toward being a professional web designer and getting inspired with more ideas for artistic websites to create in the future!

Eyve: I learned a lot about PoseNet and JavaScript coding. My primary focus coming into this project was working on the motion detection on the back end, so when I also ended up working on the front end it was an exciting adventure. There are several stylistic differences between working in JS and working in Python, so it was odd to go back to things like semicolons at line ends. PoseNet was an interesting model to work with because it's unlike anything I've previously worked with. I'm used to having a lot more knobs and dials to twiddle to get better results, but PoseNet is essentially prepackaged: you give it an image and it tells you where the poses are. Working with this tool allowed me to focus more on the collection and processing of data.

Hyma: I learned how to use VS Code, Flask, and HTTP GET/POST requests. It was a steep learning curve for me, as I didn't have much experience sending information through a server or working with audio files. I also learned a couple of JS frameworks and really enjoyed being a part of both the frontend and backend development.

What's next for Bodily Remixes

Given more time, we would want to develop the merging of the human body and the 'sounds of music': for example, what does a leg sound like? What does a hand sound like? We particularly want to focus on how the snippets of music get assigned to certain parts of the body, and on what role the user could take in selecting and assigning those snippets. We also hope to gamify the process of converting user movements to tunes in Magenta by encouraging the user to make more daring and unique motions that could potentially make the audio sound better.
