Inspiration

A huge element of communication is nonverbal, conveyed through tone and body language. We take this for granted because our brains process it automatically, but many people struggle to identify emotion and figurative language in daily life. Our goal is to use technology to bridge this gap in communication for people with social communication disorders, such as people on the autism spectrum. There are plenty of inventions designed to help people with physical disabilities, but far fewer that help people with social communication disorders, and we want to change that.

What it does

Emotion Radar uses complex algorithms to do what our brains do automatically: it takes images of the speaker, processes them to recognize emotion, and feeds the speaker's emotions back to the user through easy-to-understand graphic figures we call EmotiBoos. This helps the user accurately interpret the speaker's emotions and minimizes misunderstanding.

How we built it

EmotiBoo's web app front end lets it run on any device with a browser, whether a custom-built wearable based on a Raspberry Pi, a Microsoft HoloLens, or just a smartphone. The architecture is a server on Google Cloud that serves the web app and runs the encoding/decoding needed to interface with the third-party APIs we use. I/O is handled through the browser's getUserMedia() function, which accesses the webcam and records input to send to the backend server. The server receives that input, forwards it to the Microsoft Azure Face API and the DeepAffects emotion recognition API, then standardizes the results and returns a set of emotions and weightings. The front end, built with Angular, displays these as graphic figures (EmotiBoos) that resize according to which emotions are predominant in the speaker.
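
As a rough illustration of the browser-side flow, here is a minimal TypeScript sketch: it captures a webcam frame with getUserMedia, posts it to the backend, and resizes EmotiBoo figures by the returned weightings. The /analyze endpoint, the response shape, and the element IDs are illustrative assumptions, not our actual API contract.

```typescript
// Minimal sketch of the browser-side capture loop. The /analyze endpoint and
// its { emotions: { [name: string]: number } } response are hypothetical.

interface EmotionWeights {
  [emotion: string]: number; // e.g. { happiness: 0.7, anger: 0.1 }
}

function captureFrame(video: HTMLVideoElement): string {
  // Draw the current webcam frame onto a canvas and export it as a base64 JPEG.
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d')!.drawImage(video, 0, 0);
  return canvas.toDataURL('image/jpeg');
}

async function analyzeSpeaker(video: HTMLVideoElement): Promise<EmotionWeights> {
  // Send the encoded frame to the backend, which forwards it to the
  // third-party emotion APIs and returns standardized weightings.
  const res = await fetch('/analyze', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: captureFrame(video) }),
  });
  const { emotions } = (await res.json()) as { emotions: EmotionWeights };
  return emotions;
}

function renderEmotiBoos(weights: EmotionWeights): void {
  // Resize each EmotiBoo figure in proportion to its emotion's weighting.
  for (const [emotion, weight] of Object.entries(weights)) {
    const figure = document.querySelector<HTMLElement>(`#emotiboo-${emotion}`);
    if (figure) {
      figure.style.transform = `scale(${0.5 + weight})`;
    }
  }
}

async function start(): Promise<void> {
  // getUserMedia prompts the user for webcam access.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  // Poll at a fixed interval; the real app can also stream audio for analysis.
  setInterval(async () => renderEmotiBoos(await analyzeSpeaker(video)), 2000);
}
```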

Challenges we ran into

  • Had to bridge React and Angular due to differing developer preferences
  • Hardware components were not compatible; had to purchase and set up a new Raspberry Pi
  • Finding a way to present emotions that the user could easily understand
  • Converting base64 data to octet-stream payloads in Node in a context that wouldn't let us use Buffers (a minimal workaround sketch follows this list)
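
For reference, here is a minimal sketch of one way to decode a base64 data URL into raw bytes without Buffer, suitable for an application/octet-stream upload. The dataUrlToBytes helper and the upload endpoint are illustrative, and it assumes atob is available (browsers, or Node 16+ where it is a global).

```typescript
// Decode a base64 data URL into a Uint8Array without using Node's Buffer.
function dataUrlToBytes(dataUrl: string): Uint8Array {
  // Strip a "data:image/jpeg;base64," style prefix if present.
  const base64 = dataUrl.includes(',') ? dataUrl.split(',')[1] : dataUrl;
  const binary = atob(base64);                 // base64 -> binary string
  const bytes = new Uint8Array(binary.length); // binary string -> raw bytes
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// Example: send the bytes as an octet-stream body (URL is illustrative).
async function uploadOctetStream(dataUrl: string, url: string): Promise<Response> {
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/octet-stream' },
    body: dataUrlToBytes(dataUrl),
  });
}
```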

Accomplishments that we're proud of

We are proud that we managed to complete a complicated product within 24 hours despite the many challenges we faced, and we are glad that our product has the potential to have a positive impact on society. We are also proud to have worked so effectively and efficiently as a team made up of people with various educational backgrounds and skill levels.

What we learned

This was a great learning experience for everyone. Our team was made up of people with a variety of backgrounds and skill sets spanning front end, back end, design, and hardware. We all contributed to the project in our own ways and worked collaboratively to make it come together. We also learned more about each other's areas of knowledge by constantly asking questions and clarifying how each component of the project worked.

What's next for EmotiBoo

  • Text interpreters (similar to Grammarly) that can identify emotional cues in emails and instant messages
  • Integration with current wearable technology, e.g. smart watches, Google Glass, HoloLens
  • More advanced wearables, e.g. smart contact lenses, would allow for hidden, unobtrusive augmented reality, letting users wear social aids without anyone knowing
