Inspiration

This project was originally intended to use the SnapKit library and export a GIF of the user as a sticker to the Snapchat app. We thought it would be fun to have the ability to send animated GIFs of ourselves to our friends.

What it does

The app takes a series of images and sends them to our API endpoint. Our API segments the images, removing the background and leaving only the subject. It also emojifies the images, overlaying an emoji that best represents the subject's emotion over the subject's face.

It then stacks the images and returns a looping GIF.
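The stacking step can be sketched with PIL, which the server uses for image processing. This is a minimal sketch, not our exact code; the helper name `frames_to_gif` is hypothetical.

```python
from PIL import Image

def frames_to_gif(frames, out_path, duration_ms=100):
    """Stack a list of PIL Images into a looping GIF (hypothetical helper)."""
    first, *rest = frames
    first.save(
        out_path,
        save_all=True,          # write every frame, not just the first
        append_images=rest,     # remaining frames, in order
        duration=duration_ms,   # per-frame display time in milliseconds
        loop=0,                 # 0 = loop forever
    )
```

PIL's GIF writer handles the palette conversion itself, so the frames can be ordinary RGB images.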

How we built it

The project consisted of an Android application written in C# and a Python (Flask) server. The Android application was responsible for capturing a sequence of images and sending them to the server for processing. The server would then segment the user out of each frame using a deep learning API (Face++) and export a looping GIF back to the mobile device. Image processing on the server was done primarily with the Python Imaging Library (PIL). The server additionally had the option to 'emojify' the user, where the Google Cloud Vision API would identify the location and expression of the user's face in each frame, and the head would be replaced with an emoji equivalent.
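The server side of this flow can be sketched as a single Flask endpoint that accepts the uploaded frames and streams a GIF back. This is an assumption-laden sketch, not our actual server: the route name `/gif` and the form field `frames` are invented for illustration, and the per-frame segmentation/emojification calls are elided.

```python
import io

from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/gif", methods=["POST"])
def make_gif():
    # One uploaded file per frame, posted in order under the "frames" field
    # (field name is an assumption for this sketch).
    frames = [Image.open(f.stream).convert("RGB")
              for f in request.files.getlist("frames")]
    # The real pipeline would segment (Face++) and/or emojify
    # (Google Cloud Vision) each frame at this point.
    buf = io.BytesIO()
    frames[0].save(buf, format="GIF", save_all=True,
                   append_images=frames[1:], duration=100, loop=0)
    buf.seek(0)
    return send_file(buf, mimetype="image/gif")
```

Returning the GIF from an in-memory buffer avoids writing temporary files on the server.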

Challenges we ran into

We had trouble applying the Google Vision API to the segmented images: once the images were segmented, the API would no longer detect any faces. We worked around this by returning two separate outputs, a segmented GIF and an emojified GIF.

Another challenge was the latency of the Google Vision and Face++ APIs, which makes our endpoint very slow, especially with a larger number of images.
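Since each frame's API call is independent and network-bound, one way to cut this wall-clock time is to issue the calls concurrently, so total latency approaches that of the slowest single call rather than the sum of all of them. A minimal sketch, where `call_vision_api` stands in for the real per-frame request (a hypothetical name, not the actual client library):

```python
from concurrent.futures import ThreadPoolExecutor

def annotate_frames(frames, call_vision_api, max_workers=8):
    """Run the external API call for every frame concurrently, preserving
    input order in the returned list (threads suit I/O-bound work like this)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_vision_api, frames))
```

`pool.map` keeps the results in the same order as the input frames, which matters when the annotations are later merged back into the GIF sequence.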

Accomplishments that we are proud of

In order to produce the final animated GIF, we had to overcome many novel challenges. The fact that we were able to create the ‘sticker’ we envisioned feels like an accomplishment in itself.

What we learned

We learned how to do image processing on GIFs with the PIL library in Python.
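The 'emojify' compositing itself is a small PIL operation: paste an emoji image over the face bounding box. A sketch under stated assumptions: the `(left, top, width, height)` box format is invented for illustration and is not the exact shape of the Google Cloud Vision response.

```python
from PIL import Image

def emojify(frame, emoji, face_box):
    """Overlay an RGBA emoji image onto the face bounding box of a frame.

    face_box is assumed to be (left, top, width, height) in pixels.
    """
    left, top, width, height = face_box
    scaled = emoji.resize((width, height))
    out = frame.copy()
    # Pass the emoji as its own paste mask so its alpha channel decides
    # which pixels land on the frame (only the glyph, not its background).
    out.paste(scaled, (left, top), scaled)
    return out
```

Using the emoji's alpha channel as the mask keeps the frame visible around the glyph instead of stamping an opaque square over the face.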
