Inspiration
What is the purpose of music? For many people, this is a difficult question with countless, multifaceted answers. One could listen to get a confidence boost before a big game, to remember a loved one who passed away, to ease the struggles of socializing in a large setting, to study, and so on. It could be all of these, on different occasions. However, we at the DEVergents team find that at the core of music lies emotion. Music has the nigh-universal ability to evoke, alter, suppress, and enhance a particular emotion in a particular context. This is one of the reasons we believe music has become such an integral part of modern consumption and media, especially short-form content such as TikTok. The centrality of music in media today lies in its memetic ability to translate meaning and emotion across cultural, linguistic, generational, and temporal lines. For us at DEVergents, this thesis was the central value that we carried into this project.
We hoped to provide a product that could isolate, identify, and share the emotions that music carries with it. Many people use music to enhance the experience of living within a singular moment or context. If you hoped to find a song that captured the subtly nostalgic, melancholic isolation of reading at 3 am, or the social, energetic, summertime feeling of a day at the beach, we wanted to build a website that provided just that. The most successful TikToks synergize the content of the video with the song, because the music enhances the emotions the creator hopes to convey. I challenge you to go on any liminal-space or core.core side of TikTok and not find a video that perfectly encapsulates the unease of imperfect familiarity or the loneliness of nostalgia and growing up, respectively. These videos achieve this using sound.
We aimed to build a website that would let users instantly find the songs that fit the mood or emotions they hoped to enhance or convey. Through research and our personal experiences, we found that many current systems, such as Spotify, only provide single-word descriptions for “moods” and bias heavily toward genre. Try to find the perfect reading-at-3-am song on Spotify and you will hit a dead end. You only get popular playlists returned by name association, which might have a couple of songs that fit the bill but are ultimately not right for you. Or maybe you have a song in mind and use the radio function, only to find that, although the first result fits the bill, the other recommendations get muddled by factors such as associations or genre. Or perhaps you have a feeling in mind but can't find the right words to describe it; if you can't even begin to verbalize your emotion, then ChatGPT and Spotify search will likely do even worse. These troubles only get worse for those chasing more obscure moods and emotions.
I recently listened to “Million Dollar Baby” by Tommy Richman, hoping to find songs that evoked a similar sense of the energetic rebelliousness the song oozes, only to be met with recommendations of Chappell Roan’s “Good Luck, Babe!” and Sabrina Carpenter’s “Espresso,” for the sole reason that they were popular on TikTok at the same time.
This is one of the reasons why I believe that Spotify has been so bad at predicting the virality of certain songs on TikTok.
What it does
We built our recommendation algorithm around this purpose: help users find songs based on mood, and help them quantitatively verbalize the mood they are looking for.
Our idea for a new recommender system was to personalize recommendations through each user’s own definition of a mood. After all, different people find different songs to carry different emotions and to be right for different contexts. While my friends might enjoy rage phonk at the gym, I’m a heavy NewJeans gym enjoyer, for example. While the execution fell short in many areas, our vision was to use a modified collaborative filtering algorithm to recommend songs based on moods. We hoped to use a kernel function to find the pairwise cosine similarity between the songs in a user’s profile, based on the values the user input for 50 unique qualifiers. Then, we aimed to use a collaborative filtering algorithm built on factorization machines to handle the sparse multi-dimensional matrix formed by stacking each user’s 2D square pairwise cosine-similarity matrix against the others’. We would then be able to predict the likely distance between any two songs based on these qualifiers. (inspiration at link)
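The first stage of that pipeline — a per-user pairwise cosine-similarity matrix over qualifier vectors — can be sketched as follows. This is a minimal illustration, not our production code: the 4-dimensional toy vectors and their values are made up, and a real profile would use all 50 qualifiers.

```python
import numpy as np

def pairwise_cosine_similarity(profiles):
    """Pairwise cosine similarity between songs.

    profiles: (n_songs, n_qualifiers) array of one user's qualifier
    values per song. Returns an (n_songs, n_songs) similarity matrix.
    """
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    unit = profiles / np.clip(norms, 1e-12, None)  # guard against zero rows
    return unit @ unit.T

# Toy example: 3 songs rated on 4 qualifiers (a real profile has 50).
profiles = np.array([
    [0.9, 0.1, 0.8, 0.2],  # energetic, rebellious song
    [0.8, 0.2, 0.9, 0.1],  # similar vibe
    [0.1, 0.9, 0.2, 0.8],  # opposite mood
])
sim = pairwise_cosine_similarity(profiles)
```

Each user contributes one such `sim` matrix; stacking them across users yields the sparse 3D tensor the factorization-machine step would then have to complete.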
However, we ran into computational and database difficulties (this was my first time using SQL, after all) and were unable to fully implement this algorithm in the time we had. As of now, the app relies on a randomized algorithm.
Secondly, we wanted to make the decision-making process for classifying these moods easy. We took inspiration from apps such as Beli and Tinder that essentially “gamified” ranking through a swiping mechanic, and we aimed to similarly gamify the arguably worst part of our app’s UX: data collection for personalization.
Using a web scraper, we identified 50 of the most common qualifiers from Pitchfork reviews, then used a one-v-one swiping system to help users define hard-to-verbalize “moods.” These swipes populate the qualifiers associated with each song: users choose which of two qualifiers “better” describes the song they are listening to. A binary decision is easier from a UX point of view.
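The swipe-tallying step could be sketched like this. The qualifier names and the simple win-count scheme are illustrative assumptions, not the app's actual scoring logic:

```python
from collections import Counter

def tally_swipes(choices):
    """Turn one-v-one swipe outcomes into per-qualifier win counts.

    choices: list of (winner, loser) qualifier pairs, where the user
    picked `winner` as the better description of the current song.
    Returns a Counter mapping qualifier -> number of wins.
    """
    wins = Counter()
    for winner, _loser in choices:
        wins[winner] += 1
    return wins

# One hypothetical swiping session for a single song.
choices = [
    ("melancholic", "energetic"),
    ("nostalgic", "aggressive"),
    ("melancholic", "upbeat"),
]
wins = tally_swipes(choices)
print(wins.most_common(1))  # → [('melancholic', 2)]
```

A more elaborate version could weight each comparison (e.g. Elo-style updates), but even raw win counts give each song a ranked qualifier profile to feed the similarity computation.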
How we built it
We built a webpage using Django (Python3) as our backend with a SQLite database, and HTML, JavaScript, and CSS as our frontend.
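As a rough illustration of the kind of schema such a stack might store (the table and column names here are hypothetical, not the site's actual ones), here is a minimal sketch using Python's stdlib `sqlite3` directly — in the real app, Django's ORM manages the SQLite database:

```python
import sqlite3

# In-memory database standing in for the site's SQLite file.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE song (
    id     INTEGER PRIMARY KEY,
    title  TEXT NOT NULL,
    artist TEXT NOT NULL
);
-- One row per (song, user, qualifier) triple, so the 50 qualifiers
-- live in rows rather than 50 separate columns.
CREATE TABLE qualifier_rating (
    song_id   INTEGER REFERENCES song(id),
    user_id   INTEGER,
    qualifier TEXT,
    score     REAL,
    PRIMARY KEY (song_id, user_id, qualifier)
);
""")
conn.execute("INSERT INTO song (title, artist) VALUES (?, ?)",
             ("Million Dollar Baby", "Tommy Richman"))
conn.execute("INSERT INTO qualifier_rating VALUES (?, ?, ?, ?)",
             (1, 1, "rebellious", 0.9))

# Look up songs tagged with a given qualifier.
row = conn.execute(
    "SELECT title FROM song "
    "JOIN qualifier_rating ON song.id = qualifier_rating.song_id "
    "WHERE qualifier = ?", ("rebellious",)
).fetchone()
```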
Challenges we ran into
Given that this was the first project for all of us, we ran into as many problems as we spent minutes working on it. We all met in a Theory of Algorithms class and a Math Theory for Machine Learning class at UChicago; and, as students at a university that prioritizes theory above all else, this was our first time developing – well – just about anything.
Back-end challenges: APIs certainly have a learning curve to them, and none of us had a ton of frontend experience before this. None of us had any experience with databases or database design either, so it was a heavy learning experience. On top of that, some of our team was in Hong Kong at the time and didn’t have access to certain critical resources, such as ChatGPT for boilerplate assistance and TikTok. Finally, getting the whole thing to work together seamlessly was a large part of the battle.
The algorithm was another massive challenge both to learn and to implement. While we all came from a Machine Learning class together, we didn’t know how to implement half of the algorithms we had learned about, as it was primarily a math theory class. We also ran into computational limitations with the proposed algorithm’s space and time complexity: early versions took over 30 minutes to complete for just 1,000 songs, 100 users, and 10 features. This made testing very difficult, and when we ran into conflicts between the database structure and the algorithm design later in the project, we were unable to resolve them in time for the submission date.
Overall challenges: Communication and delegation of responsibilities were a large difficulty for our team, and not just because of time zones. All of us are CS majors with very little project management experience. We knew the general workflow between front end and back end, but didn’t really know how to properly merge the two halves. Gaps in communication between the teams led to difficulties around dependencies, which became especially prominent given the differences in expertise across fields. We were all completely new to these languages and frameworks, so the back-end team never ended up learning HTML and the front-end team never learned Django, which led to some challenges. Additionally, we made the mistake of separating back-end work from algorithm design, so the design of the algorithm veered further and further from the design of the databases as the project wore on.
However, these are all amazing learning opportunities that the DEVergents team is sure to learn from when we take on another project.
Accomplishments that we're proud of
We feel that the website is both user-friendly and contemporary in its design, while still having a strong backend at its core to handle user requests and information. In particular, we're proud of the friends feature and the "Find Songs" page. We’d like to especially commend the Front-End team for their effort and dedication in designing a beautiful website. They learned HTML and CSS from scratch with little design direction to start from, yet managed to fully realize the team’s vision.
What we learned
Coming in as first-time developers, we learned a lot from our respective responsibilities as well as from each other. It was an amazing time to learn and grow with everyone on the team! We learned a lot about frontend tech stacks, as well as UI/UX design! We learned Django, databases, and SQL! We learned recommendation algorithms and the math behind them through our research. We strongly believe that we came out of this project stronger and more prepared moving forward.
What's next for DEVergents
We'd like to eventually fine-tune the mood-matching algorithm to match our vision, and add new features that allow for more user input so that users can have a fully customizable experience. We recognize that the 50 current qualifiers are not collectively exhaustive for describing moods. Furthermore, we aim to gather better data to train our models on. Additionally, we hope to deploy the project to the internet using a web/cloud service.
Built With
- argparse
- beautiful-soup
- css
- django
- html
- javascript
- nltk
- pandas
- python
- spotify
- tiktok
- wordnet