Inspiration

Video-calling services such as Skype, Google Hangouts, and WeChat have revolutionised communication in business, social life, and many other settings. Yet despite their huge uptake, support for the visually impaired remains lacking. We believe the experience of visually impaired users would be greatly enhanced by providing feedback through facial recognition and interpretation of the expressions of the people they are talking to.

What it does

eMotion aims to address the lack of support for the visually impaired by harnessing a Microsoft Cognitive Services API to provide real-time information on facial expressions during a video call. The API analyses a still frame captured from the video call and returns the likelihood of each of several emotions. A customisable sound effect is then played to inform the user of the most likely emotion.
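A minimal sketch of this loop is below, assuming the Microsoft Emotion API's REST interface; the endpoint region, subscription key, and per-emotion sound files are placeholders, not the project's actual values:

```js
// Sketch: grab a still frame from the remote video, send it to the
// Microsoft Cognitive Services Emotion API, and play a sound for the
// most likely emotion. ENDPOINT, API_KEY, and sounds/*.mp3 are
// placeholder values, not necessarily what eMotion uses.
const ENDPOINT = 'https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize';
const API_KEY = 'YOUR_SUBSCRIPTION_KEY';

// Draw the current video frame onto a canvas and export it as a JPEG blob.
function captureFrame(video) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg'));
}

async function detectEmotion(video) {
  const frame = await captureFrame(video);
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: {
      'Ocp-Apim-Subscription-Key': API_KEY,
      'Content-Type': 'application/octet-stream',
    },
    body: frame,
  });
  const faces = await res.json();   // one entry per detected face
  if (!faces.length) return;
  // scores looks like { happiness: 0.92, sadness: 0.01, ... };
  // pick the emotion with the highest likelihood.
  const scores = faces[0].scores;
  const top = Object.keys(scores)
    .reduce((a, b) => (scores[a] > scores[b] ? a : b));
  new Audio(`sounds/${top}.mp3`).play();   // customisable sound effect
}
```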

How we built it

The peer-to-peer connection is established and maintained using WebRTC, Node.js, and socket.io. Video streams are embedded in the HTML using JavaScript, which also powers the back end.
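As a rough sketch of that signalling flow, with socket.io relaying offers, answers, and ICE candidates between the two peers (the 'signal' event name and STUN server are illustrative choices, not necessarily ours):

```js
// Sketch of WebRTC signalling over socket.io (assumes the socket.io
// client script is loaded on the page).
const socket = io();
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Attach the local camera/mic stream and display the remote one.
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(stream => {
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  document.getElementById('localVideo').srcObject = stream;
});
pc.ontrack = e => {
  document.getElementById('remoteVideo').srcObject = e.streams[0];
};

// Relay ICE candidates and session descriptions through the server.
pc.onicecandidate = e => {
  if (e.candidate) socket.emit('signal', { candidate: e.candidate });
};
socket.on('signal', async data => {
  if (data.candidate) {
    await pc.addIceCandidate(data.candidate);
  } else if (data.sdp) {
    await pc.setRemoteDescription(data.sdp);
    if (data.sdp.type === 'offer') {
      await pc.setLocalDescription(await pc.createAnswer());
      socket.emit('signal', { sdp: pc.localDescription });
    }
  }
});

// The caller kicks things off with an offer.
async function call() {
  await pc.setLocalDescription(await pc.createOffer());
  socket.emit('signal', { sdp: pc.localDescription });
}
```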

Challenges we ran into

A key issue was that the domains we created could not stream video: browsers only expose the camera through getUserMedia to pages served from a secure (HTTPS) origin. We also found socket.io difficult to deploy on cloud hosting. Finally, the Microsoft API was incredibly useful, but involved a steep learning curve.
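One way to satisfy the secure-origin check is to serve the app over HTTPS directly from Node; the sketch below assumes Express and a (possibly self-signed) certificate at placeholder paths, which is an illustration rather than our exact setup:

```js
// Sketch: serve the page over HTTPS so browsers allow getUserMedia.
// key.pem/cert.pem are placeholder paths to a TLS certificate.
const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();
app.use(express.static('public'));   // page with the <video> elements

const server = https.createServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
}, app);

// socket.io attaches to the same HTTPS server for signalling,
// relaying each peer's messages to the other.
const io = require('socket.io')(server);
io.on('connection', socket => {
  socket.on('signal', data => socket.broadcast.emit('signal', data));
});

server.listen(443);
```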

Accomplishments that we're proud of

We are proud of how quickly we were able to adapt to challenges outside of our usual comfort zones by trying out new technologies and languages.

What we learned

Coming from a wide variety of backgrounds, we were each able to learn a great deal about each other's skillsets.
