Media uploaded to social networking sites or elsewhere on the web is frequently mined to train facial recognition models. To retain some semblance of privacy, this project applies adversarial noise to photos of faces in order to fool well-known facial recognition algorithms.
There are two ways to combat facial recognition software: attacks at training time and attacks at prediction time. Since deployed facial recognition systems are largely black-box models, they are hard to attack directly. This project therefore mounts a "best effort" attack: it attacks a white-box model at prediction time and relies on the perturbation transferring to black-box systems.
What it does
This project uses the Fast Gradient Sign Method (FGSM) to generate adversarial perturbations against Deepface, a white-box face verification model, and applies them to the input image. The attack transfers well to real-world facial recognition algorithms and is imperceptible to the human eye while fooling most facial recognition software.
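For intuition, here is a minimal NumPy sketch of a single FGSM step. It assumes the gradient of the model's loss with respect to the image has already been computed (in the real tool that gradient comes from the Deepface model); the function name, `eps` value, and optional mask argument are illustrative, not the project's actual API.

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.03, mask=None):
    """One FGSM step: nudge every pixel by eps in the direction that
    increases the loss (the sign of the gradient), optionally restricted
    to a binary mask so only selected regions are perturbed.
    Hypothetical helper for illustration only."""
    noise = eps * np.sign(grad)
    if mask is not None:
        noise = noise * mask  # zero out noise outside the masked region
    # keep pixel values in the valid [0, 1] range
    return np.clip(image + noise, 0.0, 1.0)

# Toy usage with a uniform image and a made-up positive gradient:
image = np.full((2, 2), 0.5)
grad = np.ones((2, 2))
adv = fgsm_perturb(image, grad, eps=0.1)  # every pixel moves from 0.5 to 0.6
```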
We start by detecting facial features using MTCNN via this implementation. The facial landmarks are used to create a mask that restricts the adversarial noise to the facial features. Facial recognition models encode facial features in their latent space and can thus be fooled when adversarial noise is applied to those features.
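A landmark-based mask of this kind can be sketched as follows. MTCNN returns keypoints for the eyes, nose, and mouth corners; the sketch below simply opens a square window around each one. The function name and `radius` parameter are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

def landmark_mask(shape, landmarks, radius=12):
    """Build a binary mask that is 1 in a square window around each
    (x, y) facial landmark and 0 elsewhere. Noise multiplied by this
    mask only touches the facial features. Hypothetical sketch."""
    h, w = shape
    mask = np.zeros((h, w), dtype=np.float32)
    for x, y in landmarks:
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        mask[y0:y1, x0:x1] = 1.0  # unmask a window around this landmark
    return mask

# Toy usage: one landmark at the center of a 100x100 image
mask = landmark_mask((100, 100), [(50, 50)], radius=5)
```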
How I built it
The software is a command line tool, with an accompanying Jupyter notebook, that writes a new file with the adversarial mask applied on top of the original image.
Challenges I ran into
- Running GPU-heavy compute makes building a server very difficult. I eventually created a command line interface as well as a Jupyter/Colab notebook to generate attacks. On average, generating a masked image takes ~4 minutes on a Tesla K80 GPU server.
Accomplishments that I'm proud of
- Masked images of celebrities that were indistinguishable from the originals to the human eye successfully fooled Clarifai's celebrity recognition tool as well as Google Reverse Image Search.
What's next for Patchy McPatchface
- Making a scalable server
- Offloading compute to the browser using tensorflowjs
The responsible AI considerations can be found at this link. It is a best-effort analysis of the Responsible AI factors that govern the use of this software as well as the tools it uses and interacts with.
Video recorded here