Inspiration

When we build stuff on the web, spam and obscene content inevitably follow. But the people working to keep web platforms clean for us often pay for it with their mental health: PTSD-like symptoms, anxiety, depression, and even coming to believe the conspiracy theories they're tasked with removing (see The Verge's report "The Trauma Floor").

What it does

CleanCore provides a public API that developers can call to assess how obscene a snippet of text is. It can catch hate speech, profanity, and discriminatory language.
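
A call might look something like this; the endpoint URL and response fields here are hypothetical, just to show the shape of the API:

    import requests

    # Hypothetical endpoint and response shape, for illustration only.
    resp = requests.post(
        "https://cleancore.example.com/v1/score",
        json={"text": "some user-submitted comment"},
    )
    print(resp.json())  # e.g. {"obscenity": 0.93}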

How I built it

I trained a neural network using TensorFlow and Keras on a proxy task (sentiment analysis on 300k Amazon reviews) and reached 85.4% accuracy. Humans usually agree only around 82% of the time on sentiment analysis tasks, so this is basically as good as a person.
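
In sketch form, the proxy-task model looked something like the following; the vocabulary size, layer sizes, and pooling choice are illustrative, not the exact architecture:

    import tensorflow as tf
    from tensorflow import keras

    VOCAB_SIZE = 20000  # illustrative vocabulary size
    MAX_LEN = 200       # reviews padded/truncated to this many tokens

    # Minimal sentiment classifier: embed tokens, average them, classify.
    model = keras.Sequential([
        keras.Input(shape=(MAX_LEN,)),
        keras.layers.Embedding(VOCAB_SIZE, 64),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # positive vs. negative review
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_reviews, y_sentiment, validation_split=0.1, epochs=5)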

I then found a smaller dataset of Wikipedia article comments (10k records) labeled for toxic language, discriminatory language, and hate speech. Fine-tuning the network on it achieved ~98% accuracy on test sets and ~94% accuracy on a held-out validation set.
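
The fine-tuning step went roughly like this; which layers to freeze is a judgment call, so treat this as a sketch rather than the exact recipe:

    # Reuse the sentiment model's learned text features: freeze everything
    # but the head, then train a fresh output layer on the toxicity labels.
    base = keras.Model(model.input, model.layers[-2].output)
    base.trainable = False

    toxicity_model = keras.Sequential([
        base,
        keras.layers.Dense(1, activation="sigmoid"),  # toxic vs. clean
    ])
    toxicity_model.compile(optimizer=keras.optimizers.Adam(1e-4),  # small LR for fine-tuning
                           loss="binary_crossentropy",
                           metrics=["accuracy"])
    # toxicity_model.fit(x_wiki, y_wiki, validation_split=0.2, epochs=3)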

Challenges I ran into

Managing dependencies with pip is difficult, and Google App Engine requires pip!
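
One standard remedy is to pin exact versions in a requirements.txt (e.g. via pip freeze > requirements.txt), which App Engine installs from at deploy time. The pins below are illustrative, not my exact manifest:

    # requirements.txt: exact pins keep the local environment and the
    # App Engine deployment in sync (version numbers are examples)
    Flask==1.0.2
    tensorflow==1.13.1
    Keras==2.2.4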

Accomplishments that I'm proud of

I'm solving a real problem with machine learning!
That accuracy ain't half bad!
I got good results from a limited dataset by being clever about transfer learning!

What I learned

I've never used Google App Engine before and now I'm up to speed on it.
I've never fine-tuned a neural network before and now I have.
I had limited exposure to Keras and now I'm more familiar with it.
I've gotten a little bit better at managing environments with pip.

What's next for CleanCore

Ideally the model would not only return a probability of general troll-ness, but also a specific reason (e.g. profanity, hate speech); see the sketch after this list.
Maybe building a public front end.
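
A minimal sketch of that multi-label version, assuming the same bag-of-tokens architecture as above; the label set and sizes are illustrative:

    import tensorflow as tf
    from tensorflow import keras

    VOCAB_SIZE = 20000
    MAX_LEN = 200
    LABELS = ["toxic", "profanity", "discriminatory", "hate_speech"]

    # One sigmoid per label instead of a single score: the probabilities are
    # independent, so a comment can be flagged as both profane and hateful.
    multi_label_model = keras.Sequential([
        keras.Input(shape=(MAX_LEN,)),
        keras.layers.Embedding(VOCAB_SIZE, 64),
        keras.layers.GlobalAveragePooling1D(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(len(LABELS), activation="sigmoid"),
    ])
    multi_label_model.compile(optimizer="adam",
                              loss="binary_crossentropy",  # per-label loss
                              metrics=["accuracy"])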

Built With

Google App Engine, Keras, Python, TensorFlow