Many people suffer from some form of depression or anxiety, one of the biggest health risks in today's society because of its influence on suicide rates. When we are alone, we do not have access to friends or family who might lift us out of low periods, which puts us at risk. According to one study, when people who feel down are alone, listening to specific categories of music can help them feel better, simulating the emotional relief of someone else comforting them.

Vision and speech are fundamental and remarkably accurate means of assessing human emotions. These faculties evolved to help us recognize emotion in our friends and family, in times of happiness and in times of distress, so that we can offer the support they need. In fact, we as humans can often determine the emotional state of others from either one of these channels alone. For those who suffer from depression or unhealthy thoughts while alone, then, it could be a life-saving step to algorithmically identify when they are feeling down and try to cheer them up during their everyday computing.

We realized that with the tools Microsoft Azure gives us, namely facial recognition and sentiment analysis, we could meet both of these requirements: provide a virtual analysis of the emotional state of someone who may be suffering, and, combined with some intelligent mood-matching algorithms, play stress-relieving music to soothe their mind, improving their emotional health and potentially even saving a life.

What it does

We deployed two means of capturing a user's activity in order to identify those who may be in emotional distress. The MoodTunes extension scans web pages and gives each page a sentiment rating using the Microsoft Cognitive Services Text Analytics Sentiment Analysis API. Based on this rating, MoodTunes decides whether to engage the user in a dialog about their mood. The user can let MoodTunes know that they are in fact not feeling depressed, or can confirm their feelings, in which case MoodTunes suggests some music that might brighten their day. If enough web pages with low sentiment ratings are visited, an option appears for calling trendline (a mental health hotline), a friend, or a family member.
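
For illustration, here is a minimal sketch of what that sentiment request looks like against the Text Analytics v2.0 endpoint; the region, key, and the `scorePage` helper name are placeholders, not our exact code:

```js
// Sketch: rate a page's text with the Text Analytics v2.0 Sentiment endpoint.
// "westus" and the key are placeholders you'd swap for your own.
const ENDPOINT =
  "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment";

async function scorePage(pageText) {
  const response = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE", // placeholder
    },
    body: JSON.stringify({
      documents: [{ language: "en", id: "1", text: pageText }],
    }),
  });
  const result = await response.json();
  // Scores run from 0 (very negative) to 1 (very positive);
  // a low score is what triggers the MoodTunes dialog.
  return result.documents[0].score;
}
```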

The MoodTunes mobile app is a proof of concept that we can take selfie culture, something almost everyone participates in, and turn it into a tool to fight depression. The app can take pictures from the gallery or the camera and determine the emotional state of the person in the picture using Microsoft Cognitive Services facial recognition. Based on the results, we pull information from Spotify to suggest music appropriate to the mood and make the person feel better.
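
A rough sketch of the mood-matching idea, assuming the Face API's per-emotion confidence scores as input; the playlist IDs and function names here are made-up placeholders:

```js
// Illustrative only: map the dominant detected emotion to a playlist.
// The playlist IDs are placeholders, not real MoodTunes playlists.
const MOOD_PLAYLISTS = {
  sadness: "UPLIFTING_PLAYLIST_ID",
  anger: "CALMING_PLAYLIST_ID",
  happiness: "FEEL_GOOD_PLAYLIST_ID",
  neutral: "EASY_LISTENING_PLAYLIST_ID",
};

// `emotionScores` is the Face API's per-emotion confidence object,
// e.g. { sadness: 0.8, happiness: 0.1, anger: 0.05, neutral: 0.05 }.
function pickPlaylist(emotionScores) {
  const dominant = Object.keys(emotionScores).reduce((best, mood) =>
    emotionScores[mood] > emotionScores[best] ? mood : best
  );
  return MOOD_PLAYLISTS[dominant] || MOOD_PLAYLISTS.neutral;
}
```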

How we built it

We built the extension using JavaScript, learning Ajax and how to parse JSON along the way. We used the Microsoft Cognitive Services Text Analytics Sentiment Analysis API for the sentiment analysis and the Twilio API for the calling function.
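
A minimal sketch of the calling piece, assuming the Twilio Node helper library running server-side; the credentials, phone numbers, and TwiML URL are all placeholders:

```js
// Sketch of the calling function with the Twilio Node helper library.
const twilio = require("twilio");
const client = twilio("ACCOUNT_SID", "AUTH_TOKEN"); // placeholders

function callForSupport(userNumber) {
  // The TwiML document at `url` tells Twilio what to do once the call connects.
  return client.calls.create({
    to: userNumber,                       // hotline, friend, or family member
    from: "+15555550100",                 // your Twilio number (placeholder)
    url: "https://example.com/voice.xml", // placeholder TwiML document
  });
}
```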

We built the app using Java and Android Studio, leveraging the camera hardware on a typical mobile phone to send pictures to Microsoft Cognitive Services' facial recognition algorithms, which determine emotions.
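
The app itself is written in Java, but the underlying Face API call is plain HTTP; here is a rough sketch of it in JavaScript, to match the extension snippets above. The region and key are placeholders:

```js
// Sketch of the Face API detect request with emotion attributes.
const FACE_ENDPOINT =
  "https://westus.api.cognitive.microsoft.com/face/v1.0/detect" +
  "?returnFaceAttributes=emotion";

async function detectEmotion(imageBytes) {
  const response = await fetch(FACE_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/octet-stream",  // raw image bytes
      "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE", // placeholder
    },
    body: imageBytes,
  });
  const faces = await response.json();
  // Each detected face carries per-emotion confidences, e.g.
  // { anger, contempt, disgust, fear, happiness, neutral, sadness, surprise }.
  return faces.length ? faces[0].faceAttributes.emotion : null;
}
```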

Challenges we ran into

Reading through brand-new developer docs was challenging, and so was learning JavaScript. We struggled with figuring out the best way for the Chrome extension to access page contents and send a POST request to Microsoft Cognitive Services.
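
One common pattern, sketched below for illustration (not our exact code), is for a content script to read the page text and hand it to the extension's background script, which owns the request; it reuses the hypothetical `scorePage` helper from earlier:

```js
// content-script.js: read the visible page text and pass it along.
chrome.runtime.sendMessage({ pageText: document.body.innerText });

// background.js: receive the text and send it to Cognitive Services.
chrome.runtime.onMessage.addListener((message) => {
  if (message.pageText) {
    scorePage(message.pageText); // sentiment request sketched earlier
  }
});
```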

We also struggled with coming up with an idea: we wrote a TON down on paper and wrote up specs for how each idea would be implemented, as seen in our Google Doc.

We had trouble deciding what format our extension's prompts should take. We started with popups, considered banners, and finally settled on making our own modals.

We were then divided between who wanted to sleep and who didn't.

Our requests to Microsoft Cognitive Services were sometimes too long, and at first we didn't know how to cut them down. A rough sketch of one way to handle this is below.
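
One workable fix, sketched here assuming Text Analytics v2's per-document limit of roughly 5,120 characters (and again reusing the hypothetical `scorePage` helper), is to split the page into chunks and average the scores:

```js
// Keep each document under the API's size limit by chunking the page.
const MAX_CHARS = 5000;

async function scoreLongPage(pageText) {
  const chunks = [];
  for (let i = 0; i < pageText.length; i += MAX_CHARS) {
    chunks.push(pageText.slice(i, i + MAX_CHARS));
  }
  // Score each chunk independently, then average the results.
  const scores = await Promise.all(chunks.map((chunk) => scorePage(chunk)));
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```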

We also ran into challenges designing an intuitive user interface for the mobile app. We solved this by mimicking standard camera apps as closely as possible.

Accomplishments that we're proud of

Providing complete coverage of emotion recognition services across two different platforms. Writing our first web app, for 3/4 of us!

Figuring out how to use the different APIs.

Making new friends :)

Learning a lot and having fun!

Learning about scope and how you have to make sure things live in the right scope: you can't create alerts from the popup's HTML file because that's outside the popup script's scope. Breakthrough!!

Getting the name input to save, which we really struggled with.

What we learned

We learned a ton about JavaScript, HTML, and how callback functions work, and got really good at searching Stack Overflow and Google along the way. There were a lot of moments where we overcomplicated what we were trying to do by overthinking it. For example, we learned how to make Ajax POST requests that are way simpler than routing everything through Node.js. We also thought we needed the Spotify API, when we could just embed the Spotify playlist in a new, smaller window (see the sketch below).
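
For example, something as small as this is enough to pop open Spotify's embeddable player; the playlist ID and function name are placeholders:

```js
// Open Spotify's embed player in a small popup window instead of
// going through the full Web API.
function playMoodPlaylist(playlistId) {
  const embedUrl = "https://open.spotify.com/embed/playlist/" + playlistId;
  window.open(embedUrl, "MoodTunes", "width=320,height=420");
}
```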

We also learned how to make API requests from Android to Microsoft Cognitive Services using access tokens.

What's next for MoodTune

In the future, we want to make our Chrome extension and app more customized to the user (name, playlists, family and friends' numbers). We could also integrate the Spotify API into MoodTunes so that, after logging in to Spotify, users can change directly from MoodTunes which playlists are linked to which emotions, and even the individual tracks on each album or playlist. For the MoodTunes app, we want to integrate with the camera so that every time someone takes a selfie or picture, the app runs and lifts their mood!

Also lots more hackathons!!
