Our team decided to take on the challenge proposed by Mirum and JWT: scoring the emotional tone of speakers and the possible language tones of text and speech. We liked how straightforward and clear the guidelines were, which let us brainstorm properly and build something realistic over the weekend.

What it does

emotionji is a user-friendly web page that asks the user for text input or a .wav file upload. It then determines which emotion the user exhibited (neutral, joy, surprise, fear, sadness, or anger) and how confident it is in that prediction.
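The final output step described above, turning a predicted label and confidence into a friendly display, can be sketched roughly like this in Python. The label names match the list above, but the emoji choices and the function name are illustrative assumptions, not the project's actual code:

```python
# Hypothetical sketch of emotionji's display step: mapping a predicted
# emotion label and confidence score to a user-facing string.
# Emoji choices and function names are assumptions, not the project's code.

EMOJI = {
    "neutral": "😐",
    "joy": "😂",
    "surprise": "😮",
    "fear": "😱",
    "sadness": "😢",
    "anger": "😠",
}

def render_result(emotion, confidence):
    """Format a prediction like '😂 joy (87% confident)'."""
    emoji = EMOJI.get(emotion, "❓")  # fall back if the label is unknown
    return f"{emoji} {emotion} ({confidence:.0%} confident)"

print(render_result("joy", 0.87))  # → 😂 joy (87% confident)
```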

How we built it

We used various web tools and APIs, notably IBM Watson's Tone Analyzer API, Google Speech API, Iconic API, Audio Tokenizer, Natural Language Tool Kit, as well as JSON, Django 2.0 (released only a day before the hack deadline!), AJAX, jQuery, HTML/JavaScript, and Python.
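At the center of that stack is the Tone Analyzer call. A minimal sketch of picking the dominant tone out of a Watson Tone Analyzer-style JSON response might look like the following; the response shape is an assumption based on the v3 API's `document_tone` format, and in practice the service is called over HTTP with an API key:

```python
# Sketch (not the project's code): pick the highest-scoring tone from a
# Watson Tone Analyzer v3 style response. The response shape is assumed.

def top_tone(response):
    """Return the (tone_id, score) pair with the highest score.

    Falls back to ('neutral', 0.0) when no tone scored above threshold.
    """
    tones = response.get("document_tone", {}).get("tones", [])
    if not tones:
        return ("neutral", 0.0)
    best = max(tones, key=lambda t: t["score"])
    return (best["tone_id"], best["score"])

# Example response with two detected tones; joy wins.
sample = {
    "document_tone": {
        "tones": [
            {"score": 0.62, "tone_id": "sadness", "tone_name": "Sadness"},
            {"score": 0.83, "tone_id": "joy", "tone_name": "Joy"},
        ]
    }
}
print(top_tone(sample))  # → ('joy', 0.83)
```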

Challenges we ran into

We ran into a lot of challenges integrating the back end and front end. Many of us had to learn various frameworks and web tools at the hackathon. We struggled the most with handling GET and POST requests and displaying the response data in a user-friendly way with JavaScript.
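The back-end side of that handoff boils down to accepting a POST, running the classifier, and returning JSON that the jQuery success callback can render. Here is a hedged, framework-free sketch of that flow; the names (`analyze_view`, `classify`) and the payload fields are illustrative assumptions, and in a real Django view the status code and body would be wrapped in an `HttpResponse`:

```python
# Sketch of the back-end/front-end handoff that gave us trouble: a
# Django-style view that only accepts POST and returns JSON for the
# AJAX callback. All names and fields here are illustrative assumptions.
import json

def classify(text):
    # Placeholder for the real Tone Analyzer / Speech API call.
    return {"emotion": "joy", "confidence": 0.9}

def analyze_view(method, body):
    """Return a (status_code, json_body) pair for an incoming request."""
    if method != "POST":
        # Reject GETs: the front end must submit text via POST.
        return (405, json.dumps({"error": "POST required"}))
    text = json.loads(body).get("text", "")
    return (200, json.dumps(classify(text)))

status, payload = analyze_view("POST", json.dumps({"text": "I love hackathons!"}))
print(status, payload)  # → 200 {"emotion": "joy", "confidence": 0.9}
```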

Accomplishments that we are proud of

We are proud that we built a great user interface along with consistent, solid integration of the APIs and tools we used. Our project works and is cohesive, and at the end of 36 stressful hours, that in itself is an accomplishment!

What we learned

We learned many different tools we had never seen or used before. This weekend also proved how frustrating web development can be.

What's next for emotionji

We would love to keep working on this project in the future. We are close to integrating social tones and displaying user-friendly information on the various emotions found in large, multi-sentence text inputs, as well as distinguishing different speakers and their emotions within a single .wav input.
