Pictionary, with an AI twist!
You, the drawer, sketch a given word or phrase. Your friend, the guesser, tries to identify that word from your drawing alone. However, a Convolutional Neural Network trained on hundreds of thousands of images is also in the running - and if the AI guesses your drawing, you both lose!
Bad Flamingo isn't just a game. The robustness and security of machine learning algorithms are becoming increasingly critical as AI systems make important decisions in work, life, and play. Crucial to understanding ML security are adversarial examples: inputs that humans identify easily but that fool machines.
Most adversarial examples are generated by adding imperceptible noise to ordinary inputs. In Bad Flamingo, however, players must draw fundamentally different images to fool the ML classifier. These semantic adversarial examples probe relationships between labels and images that humans can understand but ML models cannot. Continued play of Bad Flamingo will thus generate ever-harder datasets for modern ML models and encourage research into robust, human-like computer vision.
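For contrast with Bad Flamingo's drawn examples, the classic "imperceptible noise" attack can be sketched in a few lines. Below is a minimal fast-gradient-sign-style perturbation against a toy logistic-regression classifier; the weights, input, and `eps` budget are illustrative assumptions, not part of Bad Flamingo or any real image model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Nudge x by eps in the direction that increases the model's loss:
    the sign of the gradient of the cross-entropy loss w.r.t. the input."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy setup: random 16-dimensional "image" and classifier weights.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0
x = rng.normal(size=16)
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # the model's own label

x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
print("clean confidence in its label:", sigmoid(w @ x + b))
print("adversarial confidence:       ", sigmoid(w @ x_adv + b))
```

Each pixel moves by at most `eps`, so the perturbed input stays visually close to the original while the model's confidence in its own label drops. Bad Flamingo's semantic examples are the opposite regime: the image changes drastically, yet the human-readable label stays the same.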