What it does

EmoBuddy is an app that automatically infers what kind of content you want to see, based on your emotional reactions while viewing images.

How we built it

The front-end/mobile side of the application was built in Visual Studio using Xamarin, a cross-platform C# framework. It was responsible for streaming images to the server, fetching image content from Wikipedia, and calling IBM Watson to tag those images with meaningful labels.
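The tagging step above can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the response shape mimics a Watson-style classifier result, and the field names (`classes`, `class`, `score`) and threshold are assumptions.

```python
# Hypothetical sketch of the image-tagging step: each Wikipedia image
# is sent to a vision service, and only confident labels are kept.
# The response structure below is an assumed Watson-style shape.

def extract_labels(classifier_response, min_score=0.5):
    """Return labels whose confidence meets the threshold, best first."""
    classes = classifier_response.get("classes", [])
    confident = [c for c in classes if c.get("score", 0.0) >= min_score]
    confident.sort(key=lambda c: c["score"], reverse=True)
    return [c["class"] for c in confident]

# Example response a vision API might return for one image:
response = {
    "classes": [
        {"class": "dog", "score": 0.92},
        {"class": "outdoor", "score": 0.61},
        {"class": "vehicle", "score": 0.12},
    ]
}
print(extract_labels(response))  # ['dog', 'outdoor']
```

Filtering by confidence keeps the low-scoring guesses out of the labels used to match content to the user.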

The back-end was implemented in Python using the Flask framework. It used Microsoft's Emotion API to recognize emotions from facial images, and Imgur's API to host images temporarily.
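The emotion step can be sketched like this. Microsoft's Emotion API returned per-face confidence scores across eight emotions; a server can pick the dominant one and map it to a content decision. The mapping table below is an illustrative assumption, not the app's actual logic.

```python
# Rough sketch of the back-end's emotion handling: given per-face
# emotion scores (the shape Microsoft's Emotion API returned), pick
# the dominant emotion and map it to a content preference.
# The EMOTION_TO_CONTENT table is a hypothetical example mapping.

EMOTION_TO_CONTENT = {
    "happiness": "more_like_this",
    "surprise": "more_like_this",
    "sadness": "different_topic",
    "anger": "different_topic",
    "neutral": "keep_browsing",
}

def dominant_emotion(scores):
    """Return the emotion with the highest confidence score."""
    return max(scores, key=scores.get)

def content_action(scores):
    """Map the dominant emotion to a (hypothetical) feed action."""
    return EMOTION_TO_CONTENT.get(dominant_emotion(scores), "keep_browsing")

scores = {"happiness": 0.81, "neutral": 0.15, "sadness": 0.02, "anger": 0.02}
print(dominant_emotion(scores))  # happiness
print(content_action(scores))    # more_like_this
```

In the real pipeline these scores would come from the API's JSON response for each detected face, rather than a hard-coded dictionary.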

Challenges we ran into

HTTPS can make hacking more difficult than it should be, and Xamarin can be a bit buggy.

Accomplishments that we're proud of

We got to work with many new and fun APIs, and learned some fascinating things about syntax, SSL, web hosting, and the fickleness of Visual Studio.

What's next for EmoBuddy

A fully functional app that generates web content (more like a news feed) based on your facial expressions toward a given topic.
