During the COVID lockdown last October, I came up with the idea for a "computational costume": an image transformed so that a human would see nothing wrong, but an AI would see a spider or some other spooky image. That idea led me to adversarial privacy methods, and ultimately inspired this project to counter facial recognition and surveillance systems.

What it does

This model attempts to trick MobileNet_V2 into thinking an image of a face is actually an image of a spider.

How we built it

We built this model as an implementation of the U-Net architecture, then trained it with a custom loss function.
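A minimal sketch of the kind of custom loss this describes (the exact loss terms, weighting, and function names here are assumptions, not the project's actual code): the generator network outputs a perturbed image, and the loss rewards (a) pushing the target classifier toward the "spider" class while (b) keeping the image visually close to the original. The `classifier_logits` callable stands in for MobileNet_V2's forward pass.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def adversarial_loss(perturbed, original, classifier_logits, target_class, lam=0.1):
    """Hypothetical targeted-misclassification loss with a similarity penalty.

    - cross-entropy toward `target_class` pushes the classifier's
      prediction toward the spider class
    - an L2 term keeps the perturbed image close to the original,
      so a human sees "nothing wrong"
    """
    probs = softmax(classifier_logits(perturbed))
    target_ce = -np.log(probs[target_class] + 1e-12)
    similarity = np.mean((perturbed - original) ** 2)
    return target_ce + lam * similarity

# Toy usage with a stand-in linear classifier over a flattened "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))           # 3 classes, 16-pixel image
classifier = lambda x: W @ x
original = rng.normal(size=16)
perturbed = original + 0.01 * rng.normal(size=16)
loss = adversarial_loss(perturbed, original, classifier, target_class=2)
```

The trade-off parameter `lam` is one of the many hyperparameters alluded to below: larger values favor imperceptibility, smaller values favor fooling the classifier.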

Challenges we ran into

Training time and hyperparameter tuning were two massive problems: because of the large number of custom components, the number of hyperparameters was quite large, and the complexity of the task demanded far more training time than was available in the hackathon. This was further exacerbated when the Kaggle notebook deleted all of the model files roughly two hours before hacking ended.

Accomplishments that we're proud of

We created a system that successfully tricked MobileNet_V2 into not predicting a human face. However, we weren't able to fully get the targeted misclassification to work, so MobileNet_V2 did not predict a spider.

What we learned

We learned a great deal about how adversarial privacy works; how convolutional neural networks, autoencoders, and residual networks work; and about the mechanics behind facial recognition (region proposal networks, MTCNN, ResNet, etc.).

What's next for Best_Costume_Ever.ipynb

In the future, we foresee this project being applied to new problems, such as correcting societal imbalances learned from common datasets. For instance, our network might transform images in such a way that certain racial biases are eliminated. Additionally, we hope to further improve the architecture with more information about the targeted model (the gradient of the cost with respect to the input image), as well as with improvements via reinforcement learning (attempting to fool a surrogate model learned by ensembling existing models).
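The gradient-based improvement mentioned above can be sketched as a fast-gradient-sign step (an assumption about the intended approach, not the project's code): with white-box access to the target model, each pixel is nudged in the direction that increases the classifier's loss. The quadratic toy loss below is a stand-in so the gradient is exact and the step is easy to verify.

```python
import numpy as np

def fgsm_step(image, grad_wrt_image, epsilon=0.01):
    """One fast-gradient-sign step: move each pixel by +/- epsilon
    along the sign of the gradient of the cost w.r.t. the input."""
    return image + epsilon * np.sign(grad_wrt_image)

# Toy demonstration: for loss(x) = 0.5 * ||x||^2 the gradient is x
# itself, so one step pushes every nonzero pixel epsilon further
# from zero.
x = np.array([0.5, -0.2, 0.0, 1.3])
grad = x                        # analytic gradient of the toy loss
x_adv = fgsm_step(x, grad, epsilon=0.1)
```

Against a real model like MobileNet_V2, `grad_wrt_image` would come from backpropagation through the network rather than an analytic formula.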
