Have you ever been to someone's house and simply not been a fan of the music? And did their LED strips not even change to the beat??

What it does

We trained a Raspberry Pi to tell our faces apart and select a party playlist from our real Spotify accounts. The Pi connects wirelessly to a speaker and plays a mix of the favorite tracks of the people who are actually in the room. A microphone, the Pi, and an LED strip then create a visualization so you can really feel the music.
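
The "mix of everyone's favorites" step can be sketched as a simple interleave of per-person playlists. This is just an illustration: the names and track titles below are placeholders, and in the real system the favorites come from each person's Spotify account after the camera recognizes them.

```python
from itertools import zip_longest

# Placeholder favorites; the real system pulls these from Spotify
# for each person the face recognizer identifies.
FAVORITES = {
    "alice": ["Song A1", "Song A2", "Song A3"],
    "bob": ["Song B1", "Song B2"],
}

def party_queue(people_in_room):
    """Interleave the favorite tracks of everyone currently recognized."""
    playlists = [FAVORITES[p] for p in people_in_room if p in FAVORITES]
    queue = []
    for round_of_tracks in zip_longest(*playlists):
        queue.extend(t for t in round_of_tracks if t is not None)
    return queue

print(party_queue(["alice", "bob"]))
# ['Song A1', 'Song B1', 'Song A2', 'Song B2', 'Song A3']
```

Interleaving (rather than concatenating) keeps the mix fair, so no one guest's taste dominates the start of the party.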

How we built it

On the Raspberry Pi side, we first connected the Pi to a camera module and installed the OpenCV library. We uploaded some images of ourselves to train a facial recognition model, then used that model to recognize our faces. On the Spotify side, we learned how to use the Spotify API to look up users, find their playlists, and play a track from the web browser version of Spotify running on the Raspberry Pi.
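
The Spotify side can be sketched with the spotipy client library. This is not necessarily our exact code: the client ID, secret, redirect URI, and user ID below are placeholders, and starting playback needs a Premium account with an active Spotify device (e.g. the Pi's browser player).

```python
def playlist_uris(response):
    """Pull the playlist URIs out of a Spotify 'Get User's Playlists' response."""
    return [item["uri"] for item in response["items"]]

def run_on_pi():
    # Only call this on the Pi itself: it needs `pip install spotipy`,
    # real app credentials, and a logged-in Spotify device.
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth

    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
        client_id="YOUR_CLIENT_ID",          # placeholder
        client_secret="YOUR_CLIENT_SECRET",  # placeholder
        redirect_uri="http://localhost:8888/callback",
        scope="user-modify-playback-state",
    ))
    uris = playlist_uris(sp.user_playlists("some_user_id"))  # placeholder user
    sp.start_playback(context_uri=uris[0])  # play the first playlist found
```

`playlist_uris` works on the JSON dict the API returns, which has the shape `{"items": [{"uri": "spotify:playlist:...", ...}, ...]}`.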

Challenges we ran into

OpenCV takes a long time to install on the Pi, and we could not get the LED strip to flash in time with the beat of the music.

Accomplishments that we're proud of

We are excited that we could control Spotify with our faces.

What we learned

We learned how to set up OpenCV and how to work with the Spotify API.
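
The training-and-recognition flow roughly follows OpenCV's LBPH face recognizer (from the opencv-contrib-python package). This is a sketch, not our exact code: the folder layout (one subfolder of photos per person) and the confidence threshold are assumptions.

```python
import os

def label_to_name(label, confidence, names, threshold=70.0):
    """Map an LBPH prediction to a name; lower confidence means a better match.

    The 70.0 cutoff is a common heuristic, not a value from our project.
    """
    return names[label] if confidence < threshold else "unknown"

def train_and_recognize(face_dir, frame_gray):
    """Train on per-person photo folders under face_dir, then predict on a frame."""
    # Needs `pip install opencv-contrib-python` on the Pi.
    import cv2
    import numpy as np

    names = sorted(os.listdir(face_dir))  # assumed: one folder per person
    faces, labels = [], []
    for label, name in enumerate(names):
        person_dir = os.path.join(face_dir, name)
        for fname in os.listdir(person_dir):
            faces.append(cv2.imread(os.path.join(person_dir, fname),
                                    cv2.IMREAD_GRAYSCALE))
            labels.append(label)

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, np.array(labels))
    label, confidence = recognizer.predict(frame_gray)
    return label_to_name(label, confidence, names)
```

In practice you would first crop the face region out of the camera frame with a Haar cascade (`cv2.CascadeClassifier`) before training and predicting, so the recognizer only ever sees faces.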

What's next for AudioPartyTogether

We look forward to getting the LED strip to react to the beat of the music. We also look forward to introducing a personal version where OpenCV recognizes your mood by analyzing your face and then plays an appropriate playlist.
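
A simple way to approach the beat-reactive LEDs is energy-based onset detection on the microphone signal: flag a frame as a "beat" when its energy jumps well above the recent average, and flash the strip on each flagged frame. Below is a minimal sketch under that assumption; the threshold and history length are tunable guesses, and the actual LED output (e.g. via the rpi_ws281x library) is left as a comment.

```python
import numpy as np

RATE = 44100
FRAME = 1024  # samples per analysis frame (~23 ms at 44.1 kHz)

def beat_frames(samples, threshold=1.5, history=43):
    """Flag frames whose energy jumps above the recent average.

    A frame counts as a beat when its mean-square energy exceeds
    `threshold` times the mean energy of the previous `history`
    frames (roughly the last second of audio).
    """
    n = len(samples) // FRAME
    energy = np.array([
        np.mean(samples[i * FRAME:(i + 1) * FRAME] ** 2) for i in range(n)
    ])
    beats = []
    for i in range(n):
        past = energy[max(0, i - history):i]
        if len(past) and energy[i] > threshold * past.mean():
            beats.append(i)
            # On the Pi: flash the LED strip here, e.g. with rpi_ws281x.
    return beats

# Synthetic check: one second of silence with two loud bursts.
sig = np.zeros(RATE)
sig[5 * FRAME:6 * FRAME] = 1.0    # burst in frame 5
sig[20 * FRAME:21 * FRAME] = 1.0  # burst in frame 20
print(beat_frames(sig))
# [5, 20]
```

On the Pi, the mic samples would come from a capture library such as PyAudio frame by frame, rather than from a prebuilt array.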
