Note: Future UI design in Figma linked below!
Inspiration
The famous saying "where words fail, music speaks" reminds us that music helps us find artistic expressions that validate our experiences. Music and other forms of art can decrease stress by giving us a new perspective on our feelings, promoting healing and understanding.
For example, researchers at Teikyo University in Tokyo studied the impact of music on listeners working through challenging emotions. They found that listening to music allowed listeners to project their personal feelings and struggles into it: when participants listened to “sad” music, they actually experienced more pleasant feelings, decreasing pain and allowing them to express the sad feelings we sometimes try to avoid in life.
We wanted to build on this idea of artwork as a venue for expressing and exploring mental health. It felt like a perfect way to create an original product devoted to safety and to supporting the mental health community, which is often overlooked.
What it does
Simply submit a CSV file of your messaging history (easy to request from Discord and other apps) and we'll use VADER sentiment analysis along with our own custom-built ML model (XGBoost, >80% accuracy) to classify the mental health signals in each message and aggregate the results across your history. We'll plot these results for you and give you an updated CSV file that you can view in the browser to see what analyses we came up with!
Then, you can download a MIDI file that is generated note-by-note to represent how you're feeling. We also generate a caption based on the tone of your text that uses figurative language to describe how you're feeling, without medical or mental health jargon. This caption is then used to generate an image that artistically represents your mental state.
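The note-by-note idea can be sketched like this. The scale choice and duration rule below are illustrative assumptions, not our exact mapping; in the app, music21 turns a note sequence like this into the downloadable MIDI file:

```python
# Illustrative sketch of a note-by-note mood mapping.
# MIDI pitch numbers for one octave of each scale (60 = middle C).
MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # brighter, for positive moods
MINOR = [60, 62, 63, 65, 67, 68, 70, 72]  # darker, for negative moods

def sentiment_to_notes(scores):
    """Map per-message sentiment scores (-1..1) to (pitch, quarter_length) pairs."""
    notes = []
    for s in scores:
        scale = MAJOR if s >= 0 else MINOR
        # Stronger feelings land higher in the scale...
        pitch = scale[min(int(abs(s) * len(scale)), len(scale) - 1)]
        # ...and intense messages get shorter, more urgent notes.
        duration = 1.0 if abs(s) < 0.5 else 0.5
        notes.append((pitch, duration))
    return notes

melody = sentiment_to_notes([0.8, -0.6, 0.1])
```

With music21 installed, pairs like these can be appended to a `stream.Stream` as `note.Note` objects and written out with something like `s.write('midi', fp='mood.mid')`.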
You can share this on social media and help spread awareness about mental health too!
How we built it
We used Python to create the application and deployed our models with Streamlit. We used a variety of relevant libraries such as music21, TensorFlow, scikit-learn, and NLTK, and incorporated products from both OpenAI and Hugging Face.
Our UI
We think our simple, minimalist design promotes accessibility. It's fully responsive across devices of different dimensions, and both light and dark modes are available. We also incorporated some vaporwave aesthetics into our design, since we're heavily focused on art and creativity. The bright color palette in our logo is consistent across our branding and application (for example, in the mental health distribution chart).
Challenges we ran into
The OpenAI API key kept getting leaked! The config.py file we used to store it wasn't being excluded by .gitignore properly, so we ended up using Streamlit's st.secrets feature instead. We also had issues with conflicting dependencies but were able to solve them after some investigative work in the requirements.txt file. Working with and streamlining so many different ML models was also a challenge.
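For anyone hitting the same leak: st.secrets reads from an untracked `.streamlit/secrets.toml` file, so the key never enters the repo. The key name below is just a placeholder:

```toml
# .streamlit/secrets.toml -- keep this file out of version control
OPENAI_API_KEY = "sk-..."
```

App code then reads it with `st.secrets["OPENAI_API_KEY"]` instead of importing it from config.py.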
Another big challenge was generating music since no one on our team had experience with music21, so there was a lot of documentation reading!
In terms of UI and presentation, we really wanted to promote accessibility but we also wanted to have our own color palette and logo. Finding something that accurately represented our product took a lot of work and experimentation, but we're happy with the final product! Most of us were really new to Figma as well.
Accomplishments that we're proud of
We're proud of building a fully functional application with an original idea and design!
What we learned
We learned a lot about what it's like to be a part of a hackathon and how to take on collaborative software projects.
What's next for Music Mood
We want to expand our UI even more and build a Chrome extension for even easier access!