Inspiration

Microsoft, YouTube, and Facebook's battles with content moderation showed us that even at large companies, humans are often required to intervene in large streams of images. At smaller companies and websites (e.g., VSCO), such moderation is virtually nonexistent, as Tumblr's struggle with pornographic content on its platform made evident. As a result, children browsing the open web can encounter an enormous amount of images and content they shouldn't see, a problem we were determined to solve.

What it does

Caramel takes a static webpage and loads only its safe-for-work content, blocking all not-safe-for-work (NSFW) content from view.
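
As a rough illustration of the idea, here is a minimal sketch of one way to strip unsafe images from a static page's HTML; `filter_page`, the `is_safe` callback, and the server-side approach are illustrative assumptions, not the extension's actual implementation.

```python
import requests
from bs4 import BeautifulSoup

def filter_page(url: str, is_safe) -> str:
    """Fetch a static page and remove every image the classifier flags.

    `is_safe` is a hypothetical callback that returns True when an
    image URL points to safe-for-work content.
    """
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src or not is_safe(src):
            img.decompose()  # drop the tag so the image never loads
    return str(soup)
```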

How I built it

We trained an Inception V3 convolutional neural network in PyTorch to classify images as safe or not safe for work.
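
A minimal sketch of the setup, assuming an ImageNet-pretrained Inception V3 from torchvision with its final layer swapped for a two-class head; the `is_safe` helper and the class-index convention (0 = SFW) are assumptions for illustration, not our exact training code.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Inception v3 pretrained on ImageNet, with the 1000-way classifier
# replaced by a 2-way head (class 0 = SFW, class 1 = NSFW -- an assumption).
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()  # in eval mode the network returns only the main logits

# Inception v3 expects 299x299 inputs normalized with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(342),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def is_safe(image) -> bool:
    """Return True when the model scores a PIL image as safe-for-work."""
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)
    return logits.argmax(dim=1).item() == 0
```

Freezing the pretrained backbone and fine-tuning only the new head is a common shortcut when, as here, training time is tight.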

Challenges I ran into

Cleaning data scraped from the web, finding enough NSFW images, and training an accurate model in a short amount of time.

Accomplishments that I'm proud of

Our model achieved 93% precision and 96% accuracy on a 150k-image dataset.
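
For reference, with an NSFW prediction counted as a positive, the two reported metrics are computed as follows (a quick sketch; the helper names are ours):

```python
def precision(tp: int, fp: int) -> float:
    # Of all images flagged NSFW, the fraction that truly were NSFW.
    return tp / (tp + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # The fraction of all predictions, safe or unsafe, that were correct.
    return (tp + tn) / (tp + tn + fp + fn)
```

On this convention, 93% precision means about 7% of the images Caramel blocks are actually safe.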

What I learned

Training a deep convolutional neural network over slow Wi-Fi results in models taking far longer than expected to train.

What's next for Caramel

We plan to expand the dataset from 150k to over a million images, broaden the categories analyzed from two to every pornographic category, publish the extension to the Chrome Web Store, market Caramel to parents and schools across the United States, and build a suite of tools over time to help any child explore the open web in a safe, sweeter way.

Built With

python, pytorch
