Inspiration

In today's hectic online landscape, toxicity and harassment can stop people from expressing themselves. I want people to be able to have conversations online without feeling harassed.

What it does

This application takes in a comment that the user wants to check and reports how likely it is that the comment falls under each of several toxicity categories.
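Concretely, the interaction looks something like this; the category names and scores below are made-up placeholders to show the shape of the output, not real model results:

```python
# Made-up example of the input/output shape -- not real model output.
comment = "sample comment a user wants checked"
# The app reports a likelihood per toxicity category, e.g.:
scores = {"toxic": 0.91, "insult": 0.84, "obscene": 0.12, "threat": 0.03}
```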

How I built it

I built the machine learning model using scikit-learn, a Python library. For the front end, I used Flask to create a web interface for interacting with the model. The visualizations were created using LIME (https://arxiv.org/abs/1602.04938), which explains individual predictions by highlighting the words that drove them.
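In rough terms, the core of the pipeline looks something like the sketch below. The specific model (TF-IDF features plus logistic regression), the dataset columns, and the file paths are illustrative assumptions rather than the exact setup:

```python
# Illustrative sketch: a per-category toxicity classifier with a LIME explanation.
# The model choice (TF-IDF + logistic regression), dataset columns, and paths
# are assumptions for demonstration, not the project's exact configuration.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from lime.lime_text import LimeTextExplainer

# Hypothetical training data: one binary label column per toxicity category.
df = pd.read_csv("train.csv")  # assumed columns: "comment_text", "toxic", ...

pipeline = make_pipeline(
    TfidfVectorizer(max_features=50000, ngram_range=(1, 2)),
    LogisticRegression(solver="liblinear"),
)
pipeline.fit(df["comment_text"], df["toxic"])

# LIME explains a single prediction by perturbing the input text and
# fitting a local linear model over the word features.
explainer = LimeTextExplainer(class_names=["not toxic", "toxic"])
exp = explainer.explain_instance(
    "example comment to explain",
    pipeline.predict_proba,
    num_features=6,
)
exp.save_to_file("explanation.html")  # standalone visualization page
```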

Challenges I ran into

There were issues with the front end, especially with displaying the data visualizations. I was not able to get all the visualizations to display on one page, so I resorted to using a separate page for each. There was also some difficulty with scikit-learn, but this was solved fairly easily thanks to the large online community on Stack Overflow and similar sites.
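The separate-pages workaround amounted to something like the following sketch; the route layout and the per-category `pipelines` lookup are hypothetical stand-ins for the actual code:

```python
# Sketch of the separate-pages workaround: each category's LIME visualization
# is served on its own route rather than combined into one page.
from flask import Flask, request
from lime.lime_text import LimeTextExplainer

app = Flask(__name__)

# Assumed: one fitted scikit-learn pipeline per toxicity category,
# e.g. {"toxic": pipeline, "insult": ...}; see the earlier sketch.
pipelines = {}

@app.route("/explain/<category>", methods=["POST"])
def explain(category):
    comment = request.form["comment"]
    explainer = LimeTextExplainer(class_names=[f"not {category}", category])
    exp = explainer.explain_instance(
        comment, pipelines[category].predict_proba, num_features=6
    )
    return exp.as_html()  # LIME emits a self-contained HTML visualization

if __name__ == "__main__":
    app.run(debug=True)
```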

Accomplishments that I'm proud of

Getting the machine learning portion to work is something I am quite proud of. I am also very happy with how the front end turned out, particularly with how the data is displayed.

What I learned

I am very new to machine learning, so I am very happy that I now know how to use scikit-learn to build a model, as well as how to use Flask to interface with my back end.

What's next for Stance: Taking a Stand against Hate Speech

The next step would be to port it to a mobile app. Another possibility would be to turn it into a browser extension, or even a moderation tool that online forums could use to curb hate speech.

I also definitely want to try more complex machine learning algorithms to improve performance.

Built With

flask, lime, python, scikit-learn
