Inspiration

We wanted to build an app for people with Asperger's syndrome and autism, who can find it difficult to read emotion in conversation.

What it does

The Android app converts the user's speech to text, then analyzes the text to determine the speaker's emotion. We used IBM Watson's Tone Analyzer API to identify the strongest emotion conveyed by the text.
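The "strongest emotion" step boils down to picking the tone with the highest score from the analyzer's results. A minimal sketch in plain Java (the class and method names here are illustrative, not from our actual app):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StrongestTone {
    // Pick the tone name with the highest score from Watson-style
    // {tone -> score} results; returns null for an empty map.
    static String strongestTone(Map<String, Double> scores) {
        String best = null;
        double bestScore = -1.0;
        for (Map.Entry<String, Double> e : scores.entrySet()) {
            if (e.getValue() > bestScore) {
                bestScore = e.getValue();
                best = e.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, Double> scores = new LinkedHashMap<>();
        scores.put("joy", 0.72);
        scores.put("sadness", 0.11);
        scores.put("anger", 0.05);
        System.out.println(strongestTone(scores)); // joy
    }
}
```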

How we built it

We built the app in Android Studio, used Google Voice to convert speech to text, and used IBM Watson's Tone Analyzer to determine emotion. We also used Vokaturi to further refine the results.
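The Tone Analyzer call is an HTTP POST of a small JSON body. A hedged sketch of building the request pieces in plain Java; the base URL and version string shown are assumptions based on the Tone Analyzer v3 docs of the time, and in practice come from your IBM Cloud credentials:

```java
public class ToneRequest {
    // Assumed Tone Analyzer v3 endpoint and API version; substitute the
    // values from your own IBM Cloud service instance.
    static final String BASE = "https://gateway.watsonplatform.net/tone-analyzer/api";
    static final String VERSION = "2017-09-21";

    // The service requires the API version as a query parameter.
    static String requestUrl() {
        return BASE + "/v3/tone?version=" + VERSION;
    }

    // Minimal JSON body; escape backslashes and quotes in the text.
    static String requestBody(String text) {
        String escaped = text.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"text\":\"" + escaped + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(requestUrl());
        System.out.println(requestBody("I am so happy today!"));
    }
}
```

In the real app this body would be POSTed with basic-auth credentials and the response JSON parsed for the per-tone scores.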

Challenges we ran into

The app frequently crashed when requesting a response from the IBM Watson API because of threading issues: Android forbids network calls on the main/UI thread. Vokaturi was also difficult to integrate because it requires .wav audio files, which Android's recorder does not natively produce.
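The fix for the crashes was to move the blocking API call off the main thread. A minimal sketch of the pattern in plain Java, with a stubbed-out request standing in for the real Watson HTTP call (all names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundCall {
    // A single background thread dedicated to network work.
    static final ExecutorService NETWORK = Executors.newSingleThreadExecutor();

    // Stand-in for the blocking Watson HTTP request, which on Android
    // must never run on the main/UI thread.
    static String blockingToneRequest(String text) {
        return "strongest tone for: " + text;
    }

    // Submit the blocking call to the background thread; the caller
    // gets a Future instead of blocking.
    static Future<String> analyzeAsync(String text) {
        return NETWORK.submit(() -> blockingToneRequest(text));
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = analyzeAsync("hello");
        // On Android you would post the result back to the UI thread
        // (e.g. via runOnUiThread); here we just block for the demo.
        System.out.println(result.get());
        NETWORK.shutdown();
    }
}
```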

Accomplishments that we're proud of

It was the first hackathon for every member of our team. We're proud that we finished a working project.

What we learned

How to integrate several third-party APIs (speech recognition, IBM Watson's Tone Analyzer, Vokaturi) into a single Android app.

What's next for YouMadBro

Voice recognition currently stops after a fixed time limit. We want to keep recognition running until the user stops talking.
