Harassment Blocker was created as part of the #HackHarassment initiative, with the aim of preventing or reducing online harassment.

The program uses the IBM Watson Tone Analyzer API, which is designed to analyse the tone of a provided message. Using this API, our program takes messages, posts, and comments from the web and returns a Boolean value based on the emotional levels detected in the text: either the post is suitable or it should be blocked. This is achieved with a Python Flask back-end that performs the analysis, and a JavaScript Google Chrome extension on the client side that provides access in the browser.
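A minimal sketch of such a Flask back-end is shown below. The route name (`/check`), the request field (`text`), and the `call_watson` helper are assumptions for illustration; in the real service the helper would make an authenticated request to IBM Watson's Tone Analyzer rather than returning fixed scores.

```python
# Sketch of a Flask back-end that classifies a post as suitable or not.
# call_watson is a placeholder; the real service would forward the text
# to the IBM Watson Tone Analyzer API.
from flask import Flask, jsonify, request

app = Flask(__name__)

def call_watson(text):
    """Placeholder for the Tone Analyzer call.
    Returns a mapping of tone name -> score in [0, 1]."""
    # Fixed example scores, standing in for the real API response.
    return {"anger": 0.1, "joy": 0.7}

@app.route("/check", methods=["POST"])
def check():
    text = request.get_json()["text"]
    tones = call_watson(text)
    # Block the post when negative tones outweigh positive ones.
    negative = tones.get("anger", 0) + tones.get("fear", 0) + tones.get("sadness", 0)
    positive = tones.get("joy", 0)
    return jsonify({"suitable": positive >= negative})
```

The Chrome extension would then POST each collected message to this endpoint and hide any post for which `suitable` comes back false.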

The primary challenge was accurately distinguishing suitable posts from those that should be classified as harassment and blocked. We achieved this by comparing the results returned by IBM Watson's Tone Analyzer, which reports the levels of different emotions in the provided text, such as fear or anger. By weighing the positive emotions against the negative ones, we were able to identify which posts were harassment and which were not with a reasonable level of accuracy.
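The comparison step can be sketched as a small pure function. This assumes the Tone Analyzer response has already been reduced to a dict of tone name to score in [0, 1]; the tone groupings and the simple sum-and-compare rule are illustrative assumptions, not the project's exact thresholds.

```python
# Illustrative tone groupings; the real classifier may weight differently.
POSITIVE_TONES = {"joy", "confident"}
NEGATIVE_TONES = {"anger", "fear", "disgust", "sadness"}

def is_suitable(tone_scores):
    """Return True if the text looks suitable, False if it should be blocked."""
    positive = sum(s for t, s in tone_scores.items() if t in POSITIVE_TONES)
    negative = sum(s for t, s in tone_scores.items() if t in NEGATIVE_TONES)
    # Suitable only when positive emotion at least matches negative emotion.
    return positive >= negative
```

For example, a message scoring high on anger and low on joy would be blocked, while a mostly joyful one would pass.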

In the process we gained experience building Google Chrome extensions, as well as an appreciation of the difficulties of identifying negative content online.

To further the harassment blocker project, ideas include implementing it in other browsers and, most importantly, developing our own algorithms to identify harassment rather than relying on an outside API. The latter would be extremely interesting to implement using machine learning in the future.
