Inspiration

The idea was to explore what deep learning could do in a hackathon project. It turns out that many pre-trained models exist in the field of object detection. Since these models are easy to integrate, we decided to use one in Python to build something that is fun for us and out of this world.

What it does

Our program takes any number of images containing the face of a single person and compares them with the faces detected in a sample video. The user selects both the directory of reference images and the sample video to check whether the person in the video is who they appear to be. The program shows frames as they are being analyzed, with boxes highlighting the detected faces and a percent-authenticity readout in the top-left corner. After a full analysis, the program displays the average percentage of authentic face detections and the total number of frames analyzed.
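
A minimal sketch of that comparison loop, assuming the open-source face_recognition library on top of OpenCV (the write-up doesn't name the exact model, so the library choice and the 0.6 match tolerance are illustrative; the on-frame box overlay is omitted for brevity):

```python
import os

import cv2
import face_recognition  # assumed library; the write-up only says "pre-trained model"

def load_reference_encodings(image_dir):
    """Encode every reference image of the target person."""
    encodings = []
    for name in os.listdir(image_dir):
        image = face_recognition.load_image_file(os.path.join(image_dir, name))
        found = face_recognition.face_encodings(image)
        if found:  # skip images where no face was detected
            encodings.append(found[0])
    return encodings

def analyze_video(video_path, image_dir, tolerance=0.6):
    """Return the fraction of detected faces that match the reference person."""
    references = load_reference_encodings(image_dir)
    capture = cv2.VideoCapture(video_path)
    matches, detections = 0, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb):
            detections += 1
            # True for each reference image within the distance tolerance
            if any(face_recognition.compare_faces(references, encoding,
                                                  tolerance=tolerance)):
                matches += 1
    capture.release()
    return matches / detections if detections else 0.0
```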

How we built it

We used Python for its flexible data handling and mature computer vision libraries. We split the code into two scripts: one ran the user interface, powered by the Tkinter library, while the other performed the facial analysis using a pre-trained model of human faces. Every time the user clicked a button to set files and run the analysis, the UI script would call a function from the recognizer module we built. A sketch of that handoff follows below.
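
A minimal sketch of how the Tkinter script hands control to the analysis script; the recognizer module name and its analyze_video function are hypothetical stand-ins for our actual code:

```python
import tkinter as tk
from tkinter import filedialog

import recognizer  # hypothetical name for our separate facial-analysis module

state = {}

def pick_images():
    state["image_dir"] = filedialog.askdirectory(title="Reference images")

def pick_video():
    state["video"] = filedialog.askopenfilename(title="Sample video")

def run_analysis():
    if "video" not in state or "image_dir" not in state:
        return  # both inputs must be selected first
    # Delegate the heavy lifting to the recognizer module
    score = recognizer.analyze_video(state["video"], state["image_dir"])
    result_label.config(text=f"Authenticity: {score:.0%}")

root = tk.Tk()
root.title("Veriface")
tk.Button(root, text="Select images", command=pick_images).pack()
tk.Button(root, text="Select video", command=pick_video).pack()
tk.Button(root, text="Analyze", command=run_analysis).pack()
result_label = tk.Label(root, text="Authenticity: --")
result_label.pack()
root.mainloop()
```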

Challenges we ran into

The libraries themselves required many dependencies to be present on the host computer. Installing them with Python's pip tool came with its own compilation problems and required several compilers to be installed through the terminal. Our program was also limited by the heavy processing power the model needed to compare faces: videos could not be longer than about a minute, and analysis was not in real time (it ran at roughly half speed). The complexity of our code was also an issue, as semantic and runtime errors were hard to debug.

Accomplishments that we're proud of

We were amazed at the reliability of the pre-trained model. In the first three hours, it was not only able to detect faces from a webcam but also to match them to our reference images with high precision. We are also proud of how the project turned out overall in terms of functionality: most of our expectations were met, and the core functions worked with few bugs. At around 3 AM, the face comparison code worked flawlessly.

What we learned

Much of our team is new to collaborative development, as we are just starting our college careers. We learned how to use version control software such as Git more effectively with branching; it let us work on our code individually and still merge it neatly at the end. We also learned about the complicated field of artificial intelligence and the capabilities it can bring to our projects. One surprise was the processing power required to analyze each face; we were lucky to have a workstation laptop to preview frames for demonstration purposes. We also learned more about the tedious task of debugging, and about organizing and documenting code in Python.

What's next for Veriface

The next step would be building a full web app that incorporates face input, which would make the tool more user-friendly. After that, we would create a keybase.io-style system where users submit images of their own face and can periodically upload 'proof of life' videos. They could also link this profile to various social media accounts. For example, when doing a Reddit AMA, instead of posting a picture as proof, a person could submit a video to their 'Veriface' profile.

Built With

Python, Tkinter