Inspiration
When picking our Unihack playlist, our team really connected over the struggles of choosing good music for parties - we all have different music tastes, so there’s always some compromise.
Someone raised the idea of mood detection and we got whisked down a rabbit-hole of tracking moods for all sorts of events - how could better information about an audience help you design better entertainment?
What it does
Our project, DJ Pal, gives you an easy way to manage your music while you’re busy having fun. The concept is simple: DJ Pal uses facial recognition to survey the general mood and atmosphere of the room, then dynamically adjusts the music - either to match the vibes or to try to fix them.
DJ Pal comprises a camera, a microcomputer, a 3D-printed casing, a React app, and a software pipeline that runs on the microcomputer.
We’ve called the brains of our project ‘moodSense’: the software pipeline that converts a video feed of a crowd into a live average of the room’s mood and intensity.
Using the mood, DJ Pal chooses the genre of music to add to the queue; using the intensity, it selects the BPM of the queued songs.
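As a rough illustration, here’s a minimal sketch of the kind of mood-to-genre and intensity-to-BPM mapping we mean - the mood labels, genre choices, and BPM range below are placeholder assumptions rather than our exact tuning:

```python
# Illustrative sketch of DJ Pal's mood -> genre and intensity -> BPM mapping.
# The specific labels, genres, and BPM ranges are placeholder assumptions.

MOOD_TO_GENRE = {
    "happy": "dance",
    "sad": "acoustic",
    "angry": "rock",
    "neutral": "chill",
}

OPPOSITE_GENRE = {
    "dance": "acoustic", "acoustic": "dance",
    "rock": "chill", "chill": "rock",
}

def choose_genre(mood: str, match_mood: bool = True) -> str:
    """Pick a genre for the queue; the app toggle can flip the vibe instead."""
    genre = MOOD_TO_GENRE.get(mood, "chill")
    return genre if match_mood else OPPOSITE_GENRE.get(genre, genre)

def choose_bpm(intensity: float) -> int:
    """Map a 0-1 intensity score onto a rough BPM target (here 80-160)."""
    return int(80 + 80 * max(0.0, min(1.0, intensity)))

# Example: an excited, happy room queues upbeat ~150 BPM dance tracks.
print(choose_genre("happy"), choose_bpm(0.9))
```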
The free phone app (built with React) lets you see what’s in the playlist, skip to the next song, and set whether you want DJ Pal to match the mood or mix things up by playing music opposite to the vibe in the room.
How we built it
Using an OAK-D Lite camera with the OpenCV computer vision package and MobileFaceNet, we developed code for facial emotion recognition. This moodSense technology pipes the room’s mood information to Spotify, which then plays a song based on the data received.
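As a rough sketch of the emotion-recognition step, the FER package plus OpenCV can turn a single frame into an averaged room mood. The snippet below uses a plain webcam capture to stay self-contained, whereas our real pipeline reads frames from the OAK-D Lite:

```python
# Minimal sketch of the emotion-recognition step using the FER package and
# OpenCV. A plain webcam capture keeps the example self-contained; the real
# moodSense pipeline reads frames from the OAK-D Lite instead.
from collections import Counter

import cv2
from fer import FER

detector = FER()  # bundled face detector + emotion classifier

def room_mood(frame) -> tuple[str, float]:
    """Return the dominant emotion across all detected faces and its
    average score, which we treat as a rough 'intensity'."""
    faces = detector.detect_emotions(frame)  # [{'box': ..., 'emotions': {...}}]
    if not faces:
        return "neutral", 0.0
    totals = Counter()
    for face in faces:
        totals.update(face["emotions"])
    mood, score_sum = totals.most_common(1)[0]
    return mood, score_sum / len(faces)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(room_mood(frame))
cap.release()
```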
3rd Party APIs and packages we used:
- Spotify Web Playback SDK: https://developer.spotify.com/documentation/web-playback-sdk/reference/
- MUI Core: https://mui.com/core/
- React JS: https://reactjs.org/
- FER - Face Emotion Recognition: https://pypi.org/project/fer/
- The Humanoid Project’s facial recognition scripts, which we modified for our project: https://github.com/Abi-Humanoid/Face_ID/blob/main/README.md
Privacy was really important during this project, as facial recognition is seeing increasing use for surveillance and privacy invasion. To adhere to data laws and maintain the privacy of our users, we avoided uploading face data to any cloud platform - the final version of DJ Pal aims to do all face recognition on-device. We specifically designed our moodSense stack to support cheaper microcomputers by running the machine-learning mood-recognition model less frequently.
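A minimal sketch of that throttling idea, assuming a plain OpenCV capture and an arbitrary frame interval (not our production setting):

```python
# Sketch of the design choice described above: run the (expensive) emotion
# model only on every Nth frame so cheaper microcomputers can keep up,
# reusing the last result in between. FRAME_INTERVAL is an example value.
import cv2
from fer import FER

FRAME_INTERVAL = 15  # roughly 0.5 s between model runs at 30 fps

detector = FER()
cap = cv2.VideoCapture(0)
last_mood = ("neutral", 0.0)

for frame_count in range(300):  # bounded loop just for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    if frame_count % FRAME_INTERVAL == 0:
        emotion, score = detector.top_emotion(frame)  # (None, None) if no face
        if emotion is not None:
            last_mood = (emotion, score)
    # ...queueing logic reuses last_mood between model runs...
cap.release()
print(last_mood)
```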
Challenges we ran into
- Hardware: 3D printing issues plus a time crunch.
- Software: running into typical errors in our code and needing time to debug them.
- Connectivity: our lead dev lost the internet connection required to pipe our moodSense data to Spotify.
- Privacy: concerns around recording members of the public’s faces.
Accomplishments that we're proud of
We learnt to use facial recognition packages during the hackathon.
We worked remotely with one member of the team - mixing remote and in-person communication was still a challenge to overcome despite the practice we’ve had over the last two years!
We threw a hell of a party while competing in this hackathon thanks to DJ PAL 😎
What we learned
We learned successful processes for brainstorming ideas and finding a good problem-solution fit. Our team was relatively busy this weekend, so we learned to balance our commitments outside the hackathon with our hackathon responsibilities, and to delegate tasks effectively.
What's next for DJ PAL
DJ PAL plans to revolutionize the music industry. DJ PAL will be distributed across all clubs and households in Australia. Long gone are the days when you and your mates spend valuable time deciding what to queue in your playlists - say HELLO to a new era of fun, easy, and decisive music 🎵🎤
For larger events, we plan on creating a larger, more powerful device. We think that the best events care intimately about the people who attend them - tracking the emotions of crowds, tracking crowd density for COVID safety, and tracking diversity at your events will all provide great value. Particularly given how hard COVID-19 has hit the events industry (https://www.victorianchamber.com.au/news/inquiry-into-impact-of-covid-19-on-events-and-tourism), we’d like to support events across the world.
To achieve this, we’d like to track audience demographics, but this requires careful attention to privacy. We plan on implementing consenting and non-consenting modes - when people haven’t consented to tracking, they need to be anonymised to ensure their privacy.
We’d also like to spin out our unique moodSense stack as a PaaS - entertainment platforms like Netflix and traditional media companies could screen-test their shows in greater detail using moodSense. Politicians could bring their platforms to greater audiences.