What it does

GifBot captures a live frame of the user's face from a webcam, determines the user's emotion from the image, and returns a GIF that reflects that emotion.

How we built it

GifBot combines Python scripts, Microsoft Azure's Face API, and an Azure-hosted Linux virtual machine. It takes a photo, sends it to the Face API to analyze the user's facial expression, and then searches Giphy's database through the Giphy API using keywords derived from the Face API's results.
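The two API calls above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the Face API region, both API keys, and the exact response fields used are placeholder assumptions you would fill in from your own Azure and Giphy accounts.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint and keys -- substitute your own (hypothetical values).
FACE_API_URL = "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"
FACE_API_KEY = "<your-face-api-key>"
GIPHY_API_KEY = "<your-giphy-api-key>"

def dominant_emotion(scores):
    """Pick the highest-scoring emotion from a Face API emotion dict."""
    return max(scores, key=scores.get)

def detect_emotion(image_url):
    # Ask the Face API to analyze the image and return emotion scores.
    req = urllib.request.Request(
        FACE_API_URL + "?returnFaceAttributes=emotion",
        data=json.dumps({"url": image_url}).encode(),
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": FACE_API_KEY,
        },
    )
    faces = json.load(urllib.request.urlopen(req))
    return dominant_emotion(faces[0]["faceAttributes"]["emotion"])

def search_gif(keyword):
    # Search Giphy's database using the detected emotion as the keyword.
    query = urllib.parse.urlencode(
        {"api_key": GIPHY_API_KEY, "q": keyword, "limit": 1}
    )
    result = json.load(
        urllib.request.urlopen("https://api.giphy.com/v1/gifs/search?" + query)
    )
    return result["data"][0]["images"]["original"]["url"]
```

A full run would chain these: `search_gif(detect_emotion(photo_url))` returns a viewable GIF URL for the detected emotion.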

Challenges we ran into

Networking: Microsoft's Face API locates images by URL, but the picture of the user is taken locally. Running a simple Apache server could have alleviated the issue, but the University's firewall blocked that. The workaround was to set up a Linux virtual machine on Azure and use SCP to copy the local image onto it, where it could be served publicly.
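That workaround can be sketched as below. The VM address, username, and web-root path are hypothetical placeholders; the assumption is that the VM runs a web server exposing the upload directory over HTTP so the Face API can fetch the image by URL.

```python
import subprocess

# Hypothetical VM details -- replace with your own Azure VM's values.
VM_USER_HOST = "azureuser@<vm-public-ip>"
REMOTE_WEB_ROOT = "/var/www/html"       # Apache's default document root
PUBLIC_BASE = "http://<vm-public-ip>"   # where the VM serves that directory

def remote_url(filename):
    """The public URL the Face API will use to fetch the uploaded image."""
    return f"{PUBLIC_BASE}/{filename}"

def publish_image(local_path, filename="frame.jpg"):
    # Copy the locally captured frame into the VM's web root via scp.
    subprocess.run(
        ["scp", local_path, f"{VM_USER_HOST}:{REMOTE_WEB_ROOT}/{filename}"],
        check=True,
    )
    return remote_url(filename)
```

After `publish_image("frame.jpg")`, the returned URL can be handed straight to the Face API request.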

Accomplishments that we're proud of

It works! We learned a lot about APIs in general and about combining multiple APIs to accomplish a task. We didn't settle: we finished what we started and fulfilled our goals for the event.

What we learned

We learned a lot by struggling through Microsoft Azure.

What's next for GifBot

Pushing it to mobile platforms: now that the basic functionality is in place, we would like to use React Native to target both Android and iOS and bring GifBot to the public.

Built With

  • giphy-api
  • microsoft-azure-face-api
  • python