Inspiration
During a time like Covid, I tend to call my friends and loved ones often. However, during calls there are always misinterpreted messages and emotions. I wanted to better analyze these emotions so I could better support my friends during a time when mental health is at its lowest. I believe that knowing your loved ones' emotions can benefit both of you when it comes to tackling the mental health crisis on top of the Covid pandemic!
What it does
An uploaded audio recording is analyzed using sentiment analysis to determine how the speaker(s) in the recording are feeling. Data such as sentiment, opinions, and emotions are displayed so the user can gauge the speakers' feelings.
How I built it
When a user uploads an audio recording, it is stored in our AWS S3 bucket. From there, once the submit button is clicked, the recording is run through three different APIs. First, Azure's Speech to Text API converts the audio to text. The text is then run through both Azure's Sentiment Analysis API and IBM Watson's Tone Analyzer. This gives us responses containing sentiment and emotion data, which are then served to the React frontend.
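As a rough sketch, once the two analysis responses come back, merging them into a single payload for the frontend might look something like this (function names and response shapes here are illustrative assumptions, not MoodBot's actual code):

```python
# Hypothetical sketch: combine an Azure sentiment response and a Watson
# Tone Analyzer response into one payload for the React frontend.
# The dict shapes below are assumptions based on the public API docs.

def combine_results(sentiment_doc, tone_response):
    """Merge Azure sentiment data and Watson tone data for display."""
    return {
        # Overall label, e.g. "positive" / "neutral" / "negative"
        "sentiment": sentiment_doc["sentiment"],
        # Per-class confidence scores from Azure
        "scores": sentiment_doc["confidenceScores"],
        # Names of the emotions Watson detected in the whole document
        "emotions": [t["tone_name"]
                     for t in tone_response["document_tone"]["tones"]],
    }

def dominant_emotion(tone_response):
    """Return the tone Watson scored highest, or None if none detected."""
    tones = tone_response["document_tone"]["tones"]
    if not tones:
        return None
    return max(tones, key=lambda t: t["score"])["tone_name"]
```

With this shape, the frontend only has to render one object per recording instead of reconciling two different API responses itself.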
Challenges I ran into
The Azure APIs gave me some trouble from time to time. There are still some bugs here and there, but I hope to fix those soon!
Accomplishments that I'm proud of
- Working through bugs
- Good team communication
- Using new technologies!
What I learned
How to use the Azure APIs, AWS S3 buckets, and the IBM Watson Tone Analyzer
What's next for MoodBot
- Fix some bugs
- Get a chat feature working (this was discussed early on in the project) whose messages can be analyzed by MoodBot