Over the summer, a few of our teammates learned about the Microsoft Emotion API, which detects emotions in photos of faces. This YouTube video provided the inspiration for our project.
Put simply, we wanted to make a more affordable version of the glasses demonstrated in the video.
What it does
Emotion Assistant lets blind users read the facial expressions of people around them in real time. Using a computer with a webcam, users can take photos and get a read-out of the expressions of the people in frame.
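As a rough sketch of the core loop, here is how a photo could be sent to the Emotion API and the strongest emotion picked out of the response. The endpoint URL, the response shape (a list of faces, each with a "scores" dict), and the helper names are our illustrative assumptions based on how the service was documented at the time; you would supply your own subscription key.

```python
import json
import urllib.request

# Assumed Emotion API endpoint (region and path are illustrative).
EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"

def recognize(image_path, subscription_key):
    """Send raw image bytes to the Emotion API and return the parsed JSON.

    The service expected the key in the Ocp-Apim-Subscription-Key header
    and the image as an application/octet-stream body.
    """
    with open(image_path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        EMOTION_URL,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # assumed: a list with one entry per face

def dominant_emotion(face):
    """Pick the highest-scoring emotion from one face entry's 'scores' dict."""
    scores = face["scores"]
    return max(scores, key=scores.get)
```

The read-out for each detected face is then just `dominant_emotion(face)` over the returned list.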
Challenges we ran into
Our initial idea involved a GoPro camera mounted to a headband to gather facial images. This would have provided an intuitive experience: rather than aiming a webcam to read people's emotions, the wearer would simply turn their head.
The problem with the GoPro was that wirelessly capturing and processing each image required automatically switching between multiple Wi-Fi networks. Despite trying workarounds with hotspots, network bridges, and Intel Edisons, we could not find a reliable solution.
Accomplishments that we're proud of
We're proud to have successfully utilized the Cognitive Services API. It was an API we had all been interested in using since we first heard about it. Uncommon Hacks provided an excellent opportunity to try it out.
What we learned
We learned about the great difficulties of scripting hardware, namely cameras. We also learned how to use a couple of new Python and Ubuntu packages:
fswebcam for scripting image capture, and
pprint for inspecting the JSON responses.
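A minimal sketch of how these two pieces fit together, with the filename, resolution, and helper names being our illustrative choices: fswebcam grabs a frame from the webcam, and pprint makes the API's nested JSON readable while debugging.

```python
import subprocess
from pprint import pprint

def capture_command(path, resolution="640x480"):
    """Build the fswebcam argv: grab one frame at the given resolution,
    skipping the timestamp banner fswebcam overlays by default."""
    return ["fswebcam", "-r", resolution, "--no-banner", path]

def capture(path):
    # Requires fswebcam (Linux/Ubuntu) and a working /dev/video* device.
    subprocess.run(capture_command(path), check=True)

# During debugging, pprint turns the API's one-line JSON into indented,
# readable output:
#   pprint(faces)
```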
We also learned a great deal of useless information about the annals of the GoPro.
What's next for Emotion Assistant
Cross-platform capability. We'd like to enable usage on OS X and Windows; we simply need to find libraries that can control web cameras on those OSes.
Extended development of the headset. Using a Raspberry Pi, we'd love to make the headset fully self-contained, running all computation on the device so that no laptop needs to be nearby.