Inspiration

Many AI models are trained on images scraped from social media without explicit consent. Once an image is online, it can be reused endlessly without the subject’s knowledge. Individuals have no way to “opt out” of AI training after posting content publicly.

What it does

PixelGuard manipulates an image so that it looks unchanged to humans but confuses AI models: subtle pixel tweaks make automated recognition unreliable, and an embedded metadata layer informs models that the image must not be used for training.
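As a flavor of the metadata layer, here is a minimal sketch using Pillow's PNG text chunks. The `AI-Usage` key name is our own illustrative choice (there is no enforced standard), and the actual app may store the warning differently:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def add_do_not_train_warning(path_in: str, path_out: str) -> None:
    """Embed a 'do not train' warning in a PNG's text metadata."""
    img = Image.open(path_in)
    meta = PngInfo()
    # "AI-Usage" is an illustrative key, not a standard one.
    meta.add_text("AI-Usage", "DO NOT USE FOR AI TRAINING")
    img.save(path_out, format="PNG", pnginfo=meta)
```

Metadata alone is advisory (a scraper can strip or ignore it), which is why the pixel-level layers described below carry the real weight.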

How we built it

User uploads an image: the image is read into memory using Pillow.

Step 1: LSB noise. Slight noise is added to the image's pixel values (scaled by a chosen intensity) so that AI models have a harder time processing the image accurately.

Step 2: Adversarial noise. A stronger layer of noise simulates the effect of adversarial attacks, perturbations that can cause machine learning models to misinterpret an image.

Step 3: Metadata injection. A custom warning message ("DO NOT USE FOR AI TRAINING") is embedded in the image's metadata to further discourage its use for AI training or other purposes.

Returning the image: after the three layers are applied, the modified image is saved to an in-memory byte stream and sent back to the user as a downloadable file. A condensed sketch of the whole pipeline follows.
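In this sketch the function name and intensity defaults are illustrative rather than taken from our codebase, NumPy handles the pixel math, and the "adversarial" step is scaled-up random noise simulating the effect described above (a true adversarial attack would need gradients from a target model):

```python
import io

import numpy as np
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def protect_image(data: bytes, lsb_intensity: int = 1, adv_intensity: int = 8) -> bytes:
    """Apply LSB noise, stronger 'adversarial' noise, and a metadata
    warning, then return the result as PNG bytes."""
    img = Image.open(io.BytesIO(data)).convert("RGB")
    arr = np.array(img, dtype=np.int16)  # widen so noise can't overflow uint8

    # Step 1: LSB noise -- small perturbations of the low-order pixel values.
    arr = arr + np.random.randint(-lsb_intensity, lsb_intensity + 1, arr.shape)

    # Step 2: stronger noise standing in for an adversarial perturbation.
    arr = arr + np.random.randint(-adv_intensity, adv_intensity + 1, arr.shape)

    out = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # Step 3: embed the warning in the PNG text metadata (see sketch above).
    meta = PngInfo()
    meta.add_text("AI-Usage", "DO NOT USE FOR AI TRAINING")

    # Save to an in-memory byte stream and hand the bytes back.
    buf = io.BytesIO()
    out.save(buf, format="PNG", pnginfo=meta)
    return buf.getvalue()
```

Because the result comes back as raw PNG bytes, a web handler can stream it straight to the browser as a download.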

Challenges we ran into

Getting it to work with JPEG images: JPEG's lossy compression tends to wipe out exactly the subtle pixel-level changes the protection relies on, as the demonstration below shows.
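A small self-contained demonstration of the problem (illustrative, not project code): flipping least-significant bits survives a lossless PNG round trip but not a JPEG one.

```python
import io

import numpy as np
from PIL import Image

def roundtrip(arr: np.ndarray, fmt: str) -> np.ndarray:
    """Save an image in the given format and load it back."""
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format=fmt)
    buf.seek(0)
    return np.array(Image.open(buf))

# A flat gray image with random least-significant bits flipped.
noisy = np.full((64, 64, 3), 128, dtype=np.uint8)
noisy ^= np.random.randint(0, 2, noisy.shape, dtype=np.uint8)

# PNG is lossless (expect 100%); JPEG quantization scrambles
# the low-order bits (expect roughly chance, ~50%).
for fmt in ("PNG", "JPEG"):
    restored = roundtrip(noisy, fmt)
    match = np.mean((restored & 1) == (noisy & 1))
    print(f"{fmt}: {match:.0%} of least-significant bits survive")
```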

What's next for PixelGuard

Speed
Effectiveness with LLMs
Additional features: an optional watermark
Making it work with JPEG images

Built With

Python, Pillow