Inspiration

We were inspired by popular Instagram and Snapchat filters and wanted to see if we could create a makeshift version of them in only two days.

What it does

Face Filter uses live facial detection and image processing to place a filter onto each face detected by the device camera. Through a series of calculations, it determines the optimal placement, scale, and angle of the filter mask for each individual face.
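The placement math can be sketched roughly as follows. The width factor and vertical offset below are illustrative assumptions, not the exact constants we used:

```python
# Sketch of filter placement: given a detected face bounding box,
# scale the filter to the face width and centre it horizontally.
# The 1.2 width factor and 0.25 vertical offset are assumptions.

def place_filter(face_box, filter_size):
    """face_box = (x, y, w, h); filter_size = (fw, fh) in pixels.
    Returns (scale, top_left_x, top_left_y) for the scaled filter."""
    x, y, w, h = face_box
    fw, fh = filter_size
    scale = (w * 1.2) / fw            # make the filter slightly wider than the face
    new_w, new_h = fw * scale, fh * scale
    top_left_x = x + (w - new_w) / 2  # centre horizontally on the face
    top_left_y = y - new_h * 0.25     # lift it to cover the brow/eye region
    return scale, top_left_x, top_left_y
```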

How we built it

We used Python and several of its libraries (OpenCV's cv2, SciPy, NumPy) to build our facial detection algorithm as well as the filter meshing, rotation, and application. We started from an open-source facial detection framework that already implemented many cv2 functions, and we extended it by adding filters, rotation, and resizing.
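The core of applying a transparent filter image is an alpha blend over the face region, which can be done directly in NumPy. This is a minimal sketch, assuming the filter has already been resized and rotated to match the face:

```python
import numpy as np

# Sketch: composite an RGBA filter onto a frame region with alpha
# blending. `filter_rgba` is an HxWx4 uint8 array; `frame` is HxWx3.

def overlay(frame, filter_rgba, x, y):
    """Blend filter_rgba onto frame in place at top-left (x, y)."""
    fh, fw = filter_rgba.shape[:2]
    roi = frame[y:y + fh, x:x + fw].astype(float)
    rgb = filter_rgba[..., :3].astype(float)
    alpha = filter_rgba[..., 3:4].astype(float) / 255.0  # 0 = transparent
    blended = alpha * rgb + (1 - alpha) * roi
    frame[y:y + fh, x:x + fw] = blended.astype(np.uint8)
    return frame
```

In the real pipeline, the (x, y) anchor would come from the face bounding box returned by the detector.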

Challenges we ran into

The rotation of the filter images was especially tricky: the eye detector often returned false positives when counting the eyes on any given face, so it was difficult to determine a reliable angle at which to offset the filter so that it aligned with the angle of the face. It was also difficult to determine the directionality of the filter because of the unpredictability of the cv2 function that located the eyes.
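One way to estimate the roll angle despite false positives is to keep only the two widest eye detections and take the angle of the line between their centres. This is a sketch under that assumption; the heuristic is ours for illustration, not necessarily what a production filter would use:

```python
import math

# Sketch: estimate face roll angle from eye boxes (x, y, w, h).
# Keeping the two widest boxes is a simple heuristic for discarding
# the smaller false-positive detections the cascade returns.

def face_angle(eye_boxes):
    if len(eye_boxes) < 2:
        return 0.0                      # not enough eyes: assume no rotation
    best = sorted(eye_boxes, key=lambda b: b[2], reverse=True)[:2]
    (lx, ly), (rx, ry) = sorted(
        ((x + w / 2, y + h / 2) for x, y, w, h in best))
    return math.degrees(math.atan2(ry - ly, rx - lx))
```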

Accomplishments that we're proud of

This is our first hackathon, so we are especially proud of completing a relatively complex project given our inexperience. We were completely new to many of the development and collaboration tools we used (Python, Git, GitHub), so learning and applying them successfully in such a short period of time was especially rewarding. Overall, we overcame many obstacles and learned lessons that we hope to apply to future projects.

What we learned

During this event, we learned the basics of writing a program in Python and the fundamentals of Git and GitHub, which enabled version control and collaboration between team members.

What's next for Face Filter

Given more time, we would create a user interface to improve the usability of our program. Currently, the user must change filters manually by editing the file and typing in the respective asset file path. The UI would allow the user to visually select between several different filters and possibly customize them.