Inspiration
The idea for Blurify was sparked by the growing privacy risks faced by content creators and everyday social media users. We noticed how easy it is to accidentally share photos containing sensitive information such as faces, documents, or personal details, especially when posting online. With so much of our lives captured in photos, we wanted to create a solution that puts privacy protection front and center, without adding extra steps or hassle.
What it does
Blurify is a photo gallery app that automatically scans your images for sensitive content using on-device AI. Whenever you select photos, Blurify detects elements such as faces, documents, and license plates, then automatically blurs or encrypts those areas to keep your privacy intact. Before sharing or uploading, you can easily review flagged images and unlock or adjust the blurred areas as needed, ensuring you never accidentally overshare personal details.
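The core masking step, obscuring a detected region so its contents can't be recovered visually, can be sketched in a few lines. This is an illustrative Python/NumPy pixelation sketch, not Blurify's actual Swift implementation; the function name, box format, and block size are assumptions for the example.

```python
import numpy as np

def pixelate_region(image, box, block=16):
    """Obscure a sensitive region in place by pixelating it.

    `image` is an H x W x C uint8 array; `box` is (x, y, w, h) in pixels.
    Each block-sized tile inside the box is replaced with its mean color,
    so fine detail (faces, text) is destroyed while context is kept.
    """
    x, y, w, h = box
    region = image[y:y + h, x:x + w]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            # Mean over the tile, broadcast back over every pixel in it.
            tile[...] = tile.mean(axis=(0, 1), keepdims=True).astype(np.uint8)
    return image
```

A Gaussian blur would work the same way structurally: detection produces boxes, and a destructive filter is applied only inside those boxes, leaving the rest of the photo untouched.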
How we built it
We built Blurify as a native iOS app using Swift to ensure optimal performance and seamless integration with the Photos app. Users can import images directly from their device's photo library. For privacy detection, we leveraged Apple's Vision framework for robust text recognition (OCR), allowing us to extract any visible text from images. This extracted text is then analyzed by our fine-tuned DistilBERT text-classification model, which determines whether the information is sensitive. In parallel, we employ our customized computer vision model, YOLOv12n, to detect and track sensitive visual elements such as faces, documents, and license plates within the images. By combining OCR and computer vision, all processed on-device, Blurify delivers fast and reliable privacy protection without compromising user data security.
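The text-analysis branch of the pipeline boils down to: take the OCR'd lines from an image, classify each one, and flag the image if any PII category is detected. Below is a minimal Python sketch of that routing logic; the regex patterns are a deliberately simple stand-in for the fine-tuned DistilBERT classifier described above, and all names (`classify_ocr_text`, `should_flag`) are hypothetical.

```python
import re

# Toy patterns standing in for the learned classifier's PII categories.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_ocr_text(text):
    """Return the set of PII categories found in one OCR-extracted line."""
    return {label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)}

def should_flag(ocr_lines):
    """Aggregate per-line labels; flag the image if any category fires."""
    labels = set()
    for line in ocr_lines:
        labels |= classify_ocr_text(line)
    return bool(labels), labels
```

In the real system the per-line decision comes from the DistilBERT model's predicted label and confidence rather than pattern matching, but the aggregation and flag-for-review flow is the same.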
Challenges we ran into
One of our biggest challenges was the constraint of running all AI processes on-device, which meant our models had to be both highly capable and small enough to fit within the memory limits of an iPhone. Initially, we experimented with compact large language models like DeepSeek-R1-Distill-Qwen-1.5B and Llama-3.2-3B-Instruct for text classification using prompt engineering. However, these models proved too slow in practice and delivered underwhelming performance out of the box. Given these limitations, as well as time and compute constraints, we pivoted to alternative approaches for text classification.
Another significant hurdle was defining what constitutes personally identifiable information (PII). The range of PII categories is vast, and within the tight timeframe of a hackathon, it was not feasible to build a comprehensive dataset or train models to accurately classify so many different types. As a result, we had to carefully scope our detection criteria and focus on the most common and critical types of sensitive information for our initial prototype.
Accomplishments that we're proud of
We’re proud to have built a solution where all privacy processing happens locally, meaning no sensitive data ever leaves the user’s device. The app delivers a seamless experience that doesn’t slow users down, and our detection engine reliably identifies and blurs sensitive content in real-world scenarios. We believe Blurify sets a new standard for user-friendly, privacy-first photo management.
What we learned
Throughout development, we learned that while modern Small Language Models (under 10B parameters) offer impressive capabilities, they are still not compact enough for efficient on-device use, especially for real-time processing on mobile devices. However, the world of machine learning is vast, and we found that innovative solutions beyond SLMs can deliver effective privacy protection while fitting within the constraints of mobile hardware. Ultimately, we discovered that with the right approach, it’s possible to integrate privacy seamlessly into user workflows, maintaining both control and convenience without sacrificing user experience.
What's next for Blurify!?!
- Research more sub-3B Small Language Models and fine-tune them for our specific use case
- Utilize larger YOLO models and fine-tune to identify various text types in addition to objects
- Dig deeper into ways to squeeze more compute out of the iPhone to run Small Language Models efficiently
- Automated background processing in the app, instead of manually uploading photos and downloading them after blurring
- Expand PII detection categories to cover more types of sensitive information
- Cloud-optional processing for users who want more powerful models with privacy guarantees
- Integration with social media platforms for seamless privacy-protected sharing
- A login feature to support real-world use of the encryption-decryption workflow
Built With
- coreml
- huggingface
- swift
- tensorflow