Inspiration

The internet can be overwhelming or even distressing for some users due to unexpected or unwanted visual content. We wanted to build a browser extension that gives people more control over what images they see while browsing — whether it's due to phobias, trauma triggers, or personal preferences. Our aim was to make web browsing feel safer, more inclusive, and more personalized.

What it does

SafeWeb is a Chrome extension that automatically detects and blurs unwanted images on the web in real time based on your preferences.

How we built it

  • We used TensorFlow.js and the MobileNet model to classify images in real time.
  • The extension itself was built on Chrome's Manifest V3 architecture.
  • A content script scanned webpages for images and sent them to the service worker for classification.
  • We added a popup UI that lets users specify custom words to filter and choose from predefined categories (like animals, weapons, insects, etc.).
  • To expand the blocklist intelligently, we integrated Google’s Gemini API, which provides semantically related terms for better coverage.
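Under Manifest V3, the content script, service worker, and popup are wired together in the manifest. A minimal sketch of what that configuration might look like (the file names and exact permission set are assumptions for illustration):

```json
{
  "manifest_version": 3,
  "name": "SafeWeb",
  "version": "1.0",
  "permissions": ["storage"],
  "background": { "service_worker": "worker.js" },
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ],
  "action": { "default_popup": "popup.html" }
}
```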
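The classification step above can be sketched roughly as follows. MobileNet's `classify()` really does return an array of `{className, probability}` pairs, but the `shouldBlur` helper, its threshold, and the blocklist format here are illustrative assumptions, not our exact code:

```javascript
// Decide whether an image should be blurred, given MobileNet's
// predictions and the user's blocklist. classify() returns an array
// like [{ className: "tabby, tabby cat", probability: 0.87 }, ...].
// Helper name, threshold, and blocklist format are illustrative.
function shouldBlur(predictions, blocklist, threshold = 0.3) {
  const terms = blocklist.map((t) => t.toLowerCase());
  return predictions.some(
    (p) =>
      p.probability >= threshold &&
      terms.some((term) => p.className.toLowerCase().includes(term))
  );
}

// Example: a cat photo checked against two blocklists.
const predictions = [
  { className: "tabby, tabby cat", probability: 0.87 },
  { className: "Egyptian cat", probability: 0.06 },
];
console.log(shouldBlur(predictions, ["cat", "spider"])); // true
console.log(shouldBlur(predictions, ["spider"])); // false
```

The probability threshold matters: without it, a low-confidence stray label could blur images that match only incidentally.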
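The Gemini-based blocklist expansion boils down to asking the model for related terms and parsing the reply back into a list. The request shape below follows the public `generateContent` REST endpoint, but the model name, prompt, and parsing helper are assumptions for this sketch:

```javascript
// Turn a comma-separated model reply like "spider, tarantula, web"
// into a clean, deduplicated, lowercase term list.
function parseRelatedTerms(text) {
  return [...new Set(
    text
      .split(/[,\n]/)
      .map((t) => t.trim().toLowerCase())
      .filter((t) => t.length > 0)
  )];
}

// Sketch of the Gemini call. Model name and prompt are illustrative;
// the request body follows the generateContent REST API shape.
async function expandBlocklist(term, apiKey) {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-1.5-flash:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [
        { parts: [{ text: `List 10 terms closely related to "${term}", comma-separated.` }] },
      ],
    }),
  });
  const data = await res.json();
  const reply = data.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
  return parseRelatedTerms(reply);
}

console.log(parseRelatedTerms("Spider, tarantula,  web, spider"));
// ["spider", "tarantula", "web"]
```

Normalizing and deduplicating the model's output matters because the expanded terms feed directly into the same matching logic as the user's own words.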

Challenges we ran into

  • Reducing latency when classifying images in real time — we had to strike a balance between accuracy and speed.
  • Ensuring cross-website compatibility — DOM structures vary a lot across sites.
  • Finding a lightweight model that fit inside the extension: most available models were too large or outdated, and only one could be readily integrated into the project.
  • Working within the time constraint: training our own model wasn't feasible, and many existing ones were hard to work with.
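One concrete latency lever, beyond picking a small model, is to avoid classifying the same image twice; pages often repeat thumbnails and icons. Below is a sketch of a small verdict cache keyed by image URL. The class name, FIFO eviction policy, and size cap are assumptions for illustration, not our exact implementation:

```javascript
// Cache blur verdicts by image URL so repeated images (thumbnails,
// logos) only go through the model once. Evicts the oldest entry
// once the cap is reached (simple FIFO via Map insertion order).
class VerdictCache {
  constructor(maxEntries = 500) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  get(src) {
    return this.map.get(src); // undefined on a miss
  }
  set(src, shouldBlur) {
    if (this.map.size >= this.maxEntries) {
      // Map preserves insertion order; drop the oldest key.
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(src, shouldBlur);
  }
}

const cache = new VerdictCache(2);
cache.set("a.jpg", true);
cache.set("b.jpg", false);
cache.set("c.jpg", true); // evicts "a.jpg"
console.log(cache.get("a.jpg")); // undefined
console.log(cache.get("b.jpg")); // false
```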

Accomplishments that we're proud of

  • We're proud that this project ties into the UN’s goal of promoting mental health and well-being—because digital wellness matters too. The internet shouldn’t be a minefield. With our tool, we’re giving people back the choice to filter their web experience safely and sustainably.

What we learned

  • The importance of teamwork and splitting responsibilities efficiently under time pressure.
  • How to be realistic about feature scope, especially in a 36-hour hackathon.
  • The value of prototyping early, testing quickly, and iterating fast.

What's next for SafeWeb AI

  • Integrate newer models or explore an ensemble approach to improve classification accuracy and diversity.
  • Train our own model specifically fine-tuned on sensitive or disturbing content to better handle edge cases and improve precision.
  • Incorporate human feedback to allow the tool to adapt in real time and become more personalized to each user's preferences.
  • Conduct user testing to evaluate effectiveness, gather insights, and drive iterative improvements based on real-world usage.

Built With

  • TensorFlow.js (MobileNet)
  • Chrome Extensions (Manifest V3)
  • Google Gemini API
  • JavaScript
