Team members
- Ang Ming Liang id:116
- New Jet Jie id:103
- Chua Bing Quan id:119
- Toh Bing Cheng id:90
Inspiration
If you use social media, you probably know about the 10-year challenge, where people compare profile pictures from 10 years ago and now. All this data, YOUR DATA, could be mined to train facial recognition algorithms on age progression and age recognition. Think of the mass data extraction of over 70 million US Facebook users during the Cambridge Analytica scandal. This is a major data privacy concern.
What it does
Applies adversarial patches and noise to reduce the effectiveness of public machine learning scrapers and push back against the modern invasion of data privacy.
How we built it
We built a Chrome extension that “masks” your photos so that machine learning algorithms classify them wrongly, while they appear completely unchanged to the human eye. The user is given two choices in the extension: apply an Adversarial Patch to their image, or evenly distribute Adversarial Noise across it. For the Adversarial Patch, the extension adds a patch to the provided image that causes a general object recognition classifier to misclassify it.[2] The patch is made by applying expectation over transformation over a random area of the image. For the Adversarial Noise, the extension makes quasi-imperceptible changes to the image such that a machine learning model classifies it incorrectly, using a single gradient ascent step, also known as the "fast gradient sign method".[3] While both approaches typically require a known neural network architecture to compute their adversarial attacks, a.k.a. a white-box attack, they have been shown to work well against black-box neural network architectures too.[3] Text input is converted into an image before the adversarial attack is applied.
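The noise step above can be sketched in a few lines. This is a minimal NumPy illustration of the fast gradient sign method, not our actual extension code: the gradient here is a made-up toy array standing in for one computed by backpropagation through a real classifier's loss.

```python
import numpy as np

def fgsm_perturb(image, grad, eps=0.05):
    """One gradient ascent step on the loss: move each pixel by eps in the
    direction of the sign of the gradient, then clip back to [0, 1]."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)

# Toy 2x2 "image" and a made-up loss gradient.
img = np.array([[0.5, 0.2], [0.9, 0.1]])
grad = np.array([[1.0, -3.0], [0.5, 0.0]])
adv = fgsm_perturb(img, grad, eps=0.05)
```

Because only the sign of the gradient is used, the perturbation is bounded by eps per pixel, which is what keeps the change quasi-imperceptible.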
Challenges we ran into
- The first problem we faced was not being able to crop the Adversarial Patch, due to auto-croppers misclassifying the patch. We resorted to outsourcing to a human. (Thanks Li Ying!)
- When implementing the papers, Ming Liang encountered his greatest frenemies, Linear Algebra and Keras. They presented the greatest challenge for him in this hackathon, especially the part using projection matrices to project the change vector onto a hypersphere such that the radius of that hypersphere is less than a pre-defined value gamma. He died there.
- After studying one paper the night before and getting a crash course from Ming Liang, Jet (Jun Jie) faced difficulty balancing the requirements of the adversarial machine learning code against web development fundamentals (from Bing Quan). He got stuck linking web requests with ML outputs, and survived only until 6am. But he learnt a lot, at least :)
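The projection step that gave Ming Liang so much grief can be written compactly. This is a hedged sketch (our names, not from any paper's reference code) of projecting a perturbation vector onto the L2 ball of radius gamma, the constraint that keeps the accumulated perturbation small:

```python
import numpy as np

def project_to_ball(v, gamma):
    """Project perturbation v onto the L2 ball of radius gamma:
    if ||v|| <= gamma the vector is unchanged, otherwise it is
    rescaled to have length exactly gamma."""
    norm = np.linalg.norm(v)
    if norm <= gamma:
        return v
    return v * (gamma / norm)

v = np.array([3.0, 4.0])        # ||v|| = 5, outside the ball
proj = project_to_ball(v, 2.5)  # rescaled to length 2.5
```

The rescaling preserves the direction of the perturbation while capping its magnitude, which is exactly the constraint the hypersphere projection enforces.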
Accomplishments that we're proud of
Each of us is proud of something we did in the Hackathon:
- Ming Liang: Proud of implementing 1+ research papers in a day.
- Jet: Proud of learning adversarial attacks, web protocols and OpenCV.
- Bing Quan: We're proud of Bing Quan for teaching Jet and Bing Cheng web development and pipelining most of the project. (He's asleep right now)
- Bing Cheng: We're proud of Bing Cheng for settling most of the user input data and user interface of the Chrome extension. (He's still working on it right now)
What we learned
- Ming Liang: Adversarial examples (he read four papers at the Hackathon) and Keras, which surpassed his expectations.
- Jet: Adversarial machine learning theory, web protocols, OpenCV, and... piano?
- Bing Cheng: Many, many, many web... stuff.
What's next for AI Blockers
Our A.I. Blocker performs less effectively against some adversarial defenses, e.g. Robust Optimization and Certificates.[4] To overcome this, we can apply second-order gradient information in future work to generate better-performing adversaries against these defenses. One feature we did not manage to fully implement was a Universal Adversarial Attack, which pre-computes a single perturbation so that transforming each image becomes an O(1) constant-time operation.[1] However, this comes at the expense of a less robust A.I. Blocker. Beyond improvements to the Adversarial Noise approach, there is also room for improvement in the Adversarial Patch approach. One such improvement is using clever geometry to blend the patch into the image more seamlessly, such as an Adversarial Frame around text. We could also apply better software engineering practices in the backend to reduce the number of times TensorFlow, VGG, and DenseNet are loaded.
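One simple way to avoid reloading the models on every request is to cache each loaded model by name. A minimal sketch, assuming a Python backend; the loader body here is a placeholder dict, where the real code would call the actual Keras application loaders:

```python
import functools

@functools.lru_cache(maxsize=None)
def get_model(name):
    """Load each backbone (e.g. 'vgg16', 'densenet') at most once and
    cache it, so repeated requests reuse the in-memory model instead of
    paying the load cost again."""
    # Placeholder for a real loader such as a keras.applications call;
    # kept as a plain dict so this sketch stays self-contained.
    return {"name": name, "loaded": True}

m1 = get_model("vgg16")
m2 = get_model("vgg16")  # cache hit: same object, no reload
```

`functools.lru_cache` gives us the memoization for free; a module-level dict guarded by a lock would work just as well in a threaded server.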
References
- [1] "Universal Adversarial Perturbations." 9 Mar. 2017, https://arxiv.org/abs/1610.08401v3.
- [2] "Adversarial Patch." 27 Dec. 2017, https://arxiv.org/abs/1712.09665.
- [3] "Explaining and Harnessing Adversarial Examples." 20 Mar. 2015, https://arxiv.org/abs/1412.6572.
- [4] "Towards Deep Learning Models Resistant to Adversarial Attacks." 9 Nov. 2017, https://arxiv.org/abs/1706.06083.