AutoPlay Primary Video -
AutoPlay Gesture Control Feature Video -


Inspiration

There’s nothing worse than being at a party where the music is bad, and it’s not much fun shouting song requests at a DJ who is only half listening. This happens far too often at almost every event, yet no easy way exists to automatically factor in people’s most liked and listened-to songs in real time based on the current atmosphere of the event.

What it does

AutoPlay is an automatic music playing and audio visualization system that intelligently gauges the current event atmosphere using computer vision and motion tracking, determines the most suitable genre of music, and plays the songs from that genre that are most mutually liked by people within a set radius, in other words, the audience of the event.

There are three components of the system:

1. User mobile interface (React Native app)

The user mobile interface is a React Native app for audience members. Once they create an account with their Spotify user ID, we track the user's GPS location every minute. If they enter the radius of an event, their songs from Spotify are factored into the song selection and playback system in real time.
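The radius check behind this can be sketched as a haversine great-circle distance test (a minimal sketch; the function name and the 6,371 km Earth radius are our own choices for illustration, not taken from the app):

```python
import math

def within_radius(user_lat, user_lon, event_lat, event_lon, radius_m):
    """Haversine distance check: is the user inside the event's radius (metres)?"""
    R = 6371000  # mean Earth radius in metres (assumption for this sketch)
    phi1, phi2 = math.radians(user_lat), math.radians(event_lat)
    dphi = math.radians(event_lat - user_lat)
    dlmb = math.radians(event_lon - user_lon)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a)) <= radius_m
```

Each minute-by-minute GPS update would run this test against the event's coordinates to decide whether the user's library should currently count toward song selection.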

2. Manager touchscreen interface (PyQt app)

The manager interface is a touchscreen app built in Python and displayed on the Raspberry Pi screen. After an event manager creates an account with their Spotify user ID, they can start the song selection and playback system by choosing the language of the songs they want for the party, along with the genre. For the genre, they can either choose “AI”, in which case our model intelligently gauges the party atmosphere and picks the most suitable genre, or they can select a custom genre that all songs should match. Managers can change the language and genre at any time, and the changes are reflected in real time.

After the process starts, our app collects the most liked and listened-to songs from the Spotify accounts of users within a certain radius of the event, determined from their real-time GPS data. It then ranks the songs by genre, language, and how many audience members listen to them on Spotify, to maximize mutual appeal. The system queues songs one at a time, so that changes made by the manager or the AI model take effect in real time, and so that only songs from people currently present at the party, not those who have already left, are factored in.
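The ranking step could look roughly like this (a hypothetical sketch: the field names `genre`, `language`, and `id` are illustrative, not the project's actual schema):

```python
from collections import Counter

def rank_songs(user_libraries, genre, language):
    """Count how many present audience members have each song that matches the
    manager's genre and language, then rank by that count -- a simple proxy
    for mutual appeal."""
    counts = Counter()
    for library in user_libraries:  # one library per user currently inside the radius
        for song in library:
            if song["genre"] == genre and song["language"] == language:
                counts[song["id"]] += 1
    return [song_id for song_id, _ in counts.most_common()]
```

Because the ranking is recomputed from the set of users currently inside the radius, people who leave the party automatically drop out of the next selection.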

3. Hardware (Camera, Raspberry Pi, Speaker)

The hardware setup consists of a camera/webcam for the computer vision model that determines suitable genres, a Raspberry Pi, which runs the vision model and the Python app on the touchscreen device, and a speaker, which plays and pauses songs on instructions from the Raspberry Pi.

How I built it

  • React Native to build the user mobile app interface
  • PyQt for the manager interface
  • Raspberry Pi to run the PyQt app and computer vision model
  • Bluetooth speaker for playing audio
  • Webcam to retrieve the live video stream
  • LED lights for audio visualization
  • MongoDB for the database
  • Flask for back-end functions
  • Google Cloud to host back-end endpoints
  • Spotify API to retrieve, queue, and play songs
  • OpenCV and Python for the motion-tracking computer vision model that determines genre in real time
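The genre-gauging step can be built on a simple frame-differencing motion score, as in the sketch below (NumPy stands in for the full OpenCV pipeline here; the function names, the 0.3 cutoff, and the genre labels are illustrative assumptions, not the project's actual model):

```python
import numpy as np

def motion_score(prev_frame, frame, threshold=25):
    """Fraction of pixels whose grayscale value changed by more than `threshold`
    between consecutive frames -- a crude crowd-energy signal."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return float((diff > threshold).mean())

def pick_genre(score, calm="lofi", hype="edm", cutoff=0.3):
    """Map the motion score to a genre; thresholds and labels are made up."""
    return hype if score > cutoff else calm
```

With real camera input, `prev_frame` and `frame` would be grayscale frames captured via OpenCV, and the score could be smoothed over a window of frames before a genre decision is made.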

Challenges I ran into

  • Integrating the computer vision model with the Raspberry Pi
  • Getting seamless, high-quality audio output from the speaker through the Raspberry Pi
  • Using a multithreading/parallel-processing approach to queue songs on Spotify one at a time
  • Hosting back-end functions on Google Cloud
  • Handling user authentication and credential authorization with the Spotify API
  • Retrieving each user's GPS data every minute and updating the database accordingly
  • Integrating our computer vision model and Spotify song selection algorithm with the PyQt app
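The multithreaded queueing challenge above can be sketched with a thread pool, so network calls never block the PyQt UI thread (a hypothetical sketch: `queue_song` stands in for Spotify's "Add Item to Playback Queue" endpoint, and the worker count is arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

def queue_song(track_uri):
    """Stand-in for the Spotify Web API 'Add Item to Playback Queue' call
    (POST /v1/me/player/queue); here it just echoes the URI."""
    return f"queued:{track_uri}"

def queue_individually(track_uris, max_workers=4):
    """Submit each queue request from a worker thread so the UI thread is
    never blocked waiting on the network; results come back in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(queue_song, track_uris))
```

`ThreadPoolExecutor.map` preserves input order, which matters when the selection algorithm has already ranked the tracks.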

Accomplishments that I'm proud of

  • Proud to have successfully integrated all the components into one coherent system
  • Hosted a back end connected to MongoDB on Google Cloud, and successfully sent POST and GET requests to the hosted endpoints from the React Native app
  • Proud to have learned to use multithreading and parallel processing in Python
  • Proud to have integrated our song selection algorithm and computer vision algorithm with our PyQt app
  • Proud to have gotten seamless, high-quality audio from the Raspberry Pi and PyQt app to the speaker

What I learned

  • How to effectively integrate a computer vision model with a Raspberry Pi
  • How to integrate and display a PyQt app on a Raspberry Pi touchscreen
  • How to host back-end endpoints and functions on Google Cloud
  • How to retrieve users' GPS data every minute in React Native, and how to make frequent requests to the back end without overloading the application or the database
  • How to use multithreading and parallel processing in Python to run several song-queueing processes simultaneously
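The "without overloading" point above can come down to simple client-side throttling, as in this minimal sketch (Python for illustration, though the app itself is React Native; all names here are our own):

```python
import time

def make_throttled_sender(min_interval_s=60.0):
    """Return a sender that drops location updates arriving sooner than
    min_interval_s after the last one actually sent, so the back end and
    database are not flooded."""
    last_sent = [0.0]  # mutable cell holding the timestamp of the last send

    def send(post_location, coords, now=None):
        now = time.monotonic() if now is None else now
        if now - last_sent[0] >= min_interval_s:
            last_sent[0] = now
            post_location(coords)  # e.g. a POST to the hosted GPS endpoint
            return True
        return False

    return send
```

The `now` parameter is only there to make the throttle testable; in production the monotonic clock is used.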

What's next for AutoPlay

  • Make the system compatible with other music platforms such as YouTube Music, Gaana, Pandora, Google Play Music, Apple Music, and SoundCloud
  • Implement a voting feature where audience members can request and vote on specific songs