Inspiration

All of our teammates enjoy listening to a variety of music. After some research, we discovered that music can play an important role in mental health by helping to relieve negative emotions such as stress and anxiety. Music can evoke positive emotions and create a sense of connection with others who have experienced similar feelings, making us feel less alone. Listening to music that matches the listener's current mood has been shown to reduce cortisol levels. We wanted to create an app that benefits users' mental well-being by offering them comfort and an escape through music, personalizing song recommendations in an enjoyable experience.

What it does

We created a mobile application that prompts users to enter the moods or situations they are experiencing at the moment and returns a personalized Spotify playlist. The user's input is fed into our predetermined script, which produces a list of song recommendations based on the emotions detected in the input. The playlist is sent back and presented to the user, along with the option to view a brief, informative description of the positive effects of music on mental health.

How we built it

The frontend of our application was built using React Native and contains three main pages for the user to visit. On the first page, the user types out how they are feeling or a situation they are experiencing at the moment, which leads to a second page with a Spotify playlist made to boost their mood! This works by sending a prompt to OpenAI via an API POST request that converts the text into keywords, which are then sent to Spotify's API to fetch songs related to those keywords. A button on this second page leads to the third page, where the user can find comforting messages and useful information about the positive effects of music on mental health. A final button on this page brings the user back to the start of our app, restarting the process whenever the user wants!
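The mood → keywords → playlist flow above could be sketched roughly as follows. This is a simplified illustration, not our exact code: the function names, the prompt wording, and the chosen model are assumptions, while the OpenAI and Spotify endpoints are their documented REST APIs.

```javascript
// Build the Spotify search URL from the keywords OpenAI returns.
// (Pure helper; the endpoint is Spotify's documented /v1/search.)
function buildSpotifySearchUrl(keywords, limit = 10) {
  const query = encodeURIComponent(keywords.join(' '));
  return `https://api.spotify.com/v1/search?q=${query}&type=track&limit=${limit}`;
}

// Ask OpenAI to reduce the user's free-form text to mood keywords.
// The prompt and model here are illustrative placeholders.
async function moodToKeywords(userText, openAiKey) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${openAiKey}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{
        role: 'user',
        content: `Reply with 3 comma-separated mood keywords for: ${userText}`,
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.split(',').map((k) => k.trim());
}

// Fetch tracks matching those keywords from Spotify's search API.
async function keywordsToTracks(keywords, spotifyToken) {
  const res = await fetch(buildSpotifySearchUrl(keywords), {
    headers: { Authorization: `Bearer ${spotifyToken}` },
  });
  const data = await res.json();
  return data.tracks.items;
}
```

In this sketch, the second page would await `moodToKeywords` and then `keywordsToTracks` to populate the playlist view.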

Challenges we ran into

Ideally, we would have created an API with an OpenAI endpoint so that a user could enter a custom emotion and/or situation to be sent to the chatbot for a completely personalized playlist of songs. However, the majority of the challenges we faced came from integrating APIs into the rest of our application, specifically, how to display in the frontend what was called from the APIs in the backend. We also would have liked to add a button that links to the user's Spotify or Apple Music account. Otherwise, we faced minor challenges while learning JavaScript for the first time and navigating React Native.

We discovered that while we understand the basic principles of a language, mainly loops and simple data structures in C, it is much more difficult to connect frontend and backend files when using more complex frameworks and tools, in this case React Native, APIs, and JavaScript. It became clear that producing industry-ready products will take much more experience and familiarity with a variety of languages and platforms.

Accomplishments that we're proud of

Collectively, we are proud of the multidisciplinary aspect of our application, connecting music and mental health. We are happy with the design of our application, its functions, and the results it produces. Regardless of the challenges, we kept supporting each other as a team. We learned as much as we could along the way from workshops and mentors and gained experience with new languages and technologies. Since the majority of us were not previously familiar with mobile app development, it is great to see the progress we made and the challenges we overcame.

What we learned

We learned how to implement the user interface and various functions of a mobile application using the React Native framework. We learned how to use a variety of styles to make our app appealing and minimalistic. Although we could not get the data displayed, we learned how to make GET requests to Spotify and POST requests to OpenAI. For testing, we built our own custom API using Express.js and hosted it on the local network so that our app could call it. Several members of our team emulated various Android and iOS phones to test the functionality of our app.
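The local Express.js test API could be sketched along these lines. This is a hypothetical reconstruction, assuming `npm install express`: the `/playlist` route, the port, and the canned track data are all illustrative, not our actual code.

```javascript
// Canned response builder used while testing; the real flow would
// call OpenAI and Spotify here instead of returning fixed data.
function buildPlaylistResponse(mood) {
  return {
    mood,
    tracks: [{ title: 'Here Comes the Sun', artist: 'The Beatles' }],
  };
}

// Start the local test server (assumes Express is installed).
function startServer(port = 3000) {
  const express = require('express');
  const app = express();
  app.use(express.json());

  // The mobile app POSTs the user's mood text here during testing.
  app.post('/playlist', (req, res) => {
    res.json(buildPlaylistResponse(req.body.mood));
  });

  return app.listen(port, () =>
    console.log(`Test API listening on port ${port}`),
  );
}
```

Calling `startServer()` exposes the endpoint on the local network, so an emulated phone on the same network can POST to it and render the returned JSON.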

What's next for Emotify

We would like to add more human-computer interaction features and a more diverse interface to provide a safe, therapeutic space and a more enjoyable experience for those who would ultimately rely on the platform. Another feature we are really interested in adding is connecting personal Spotify or Apple Music accounts to personalize playlists further based on the songs users already enjoy. With some more research, we would need to build another API using an API key from a Spotify developer account.

Built With

  • javascript
  • openai
  • openaiapi
  • reactnative
  • spotifyapi