The Challenge and Inspiration More than 1 million children reported being bullied on Facebook in 2016. Bullying takes many forms and can result in long-lasting trauma, and hate speech is its most virulent form. The rise of online hate speech is not new; however, the ways to engage users who are victims of hate speech, online bullying, and other social traumas remain limited, and new models are needed. ‘Spongebot’ is built on best practices designed by first responders for supporting people who have no safety net and no access to trained professionals.

Enter: 'Spongebot' The ‘Spongebot’ is an automated messaging system that educates, engages, and ultimately supports victims of hate speech, online bullying, and other social traumas as they face their fears. 'Spongebot' counsels users through Facebook Messenger, following best practices. It helps users address hate speech, bullying, and threats in Facebook communities, and learns from training data drawn from users' interactions.
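The engagement loop above can be sketched as a handler that turns one incoming Messenger webhook event into a supportive reply payload. This is a minimal sketch, not our implementation: the `RESPONSES` copy, the keyword-based concern detection, and the `handle_messenger_event` function are all hypothetical stand-ins (real replies would be authored with trained counselors, and detection would use the trained model). Only the event and payload shapes follow the Messenger Platform's `entry[].messaging[]` and Send API conventions.

```python
# Hypothetical mapping from a detected concern to a first-responder-style
# supportive reply; real copy would be authored with trained counselors.
RESPONSES = {
    "hate_speech": "I'm sorry you're dealing with hateful messages. "
                   "You can report the post, and I can walk you through it.",
    "bullying": "That sounds really hard. Would you like some steps for "
                "documenting and reporting what's happening?",
    "default": "I'm here to listen. Can you tell me a bit more about "
               "what's going on?",
}

def handle_messenger_event(event):
    """Turn one Messenger webhook messaging event into a Send API payload.

    `event` follows the shape of items in the webhook's
    `entry[].messaging[]` array; the concern detection here is a
    keyword stub standing in for the trained model.
    """
    sender_id = event["sender"]["id"]
    text = event.get("message", {}).get("text", "").lower()
    if any(w in text for w in ("slur", "hate", "hateful")):
        concern = "hate_speech"
    elif any(w in text for w in ("bully", "bullying", "threat")):
        concern = "bullying"
    else:
        concern = "default"
    return {
        "recipient": {"id": sender_id},
        "message": {"text": RESPONSES[concern]},
    }

event = {"sender": {"id": "12345"},
         "message": {"text": "someone keeps posting hateful comments"}}
print(handle_messenger_event(event)["message"]["text"])
```

The returned payload is what would be POSTed to the Send API; keeping the handler a pure function makes the response logic testable without a live webhook.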

The Build We will implement an NLP-based framework to train the bot on users' expressions so it can tailor its responses, delivered through a Facebook Messenger integration. The framework will also let us use Facebook Analytics to understand how the bot is used throughout the testing period and tailor it accordingly.
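To make the training step concrete, here is a toy sketch of classifying a user's expression into a concern category with a from-scratch multinomial naive Bayes model. Everything here is illustrative: the `TRAINING_DATA` examples, labels, and function names are hypothetical, and the actual framework would use a proper NLP toolkit with far more data.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labelled examples of user expressions; in practice these
# would come from logged Messenger conversations (with consent).
TRAINING_DATA = [
    ("someone keeps posting hateful comments about me", "hate_speech"),
    ("people in this group call me slurs every day", "hate_speech"),
    ("a classmate keeps threatening me in messages", "bullying"),
    ("they share embarrassing photos of me to mock me", "bullying"),
    ("i feel anxious and alone after what happened", "support"),
    ("i just need someone to talk to right now", "support"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Fit naive Bayes: per-label word counts, label priors, vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log-probability (Laplace smoothing)."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((word_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAINING_DATA)
print(classify("he keeps threatening me", *model))  # → 'bullying' on this toy data
```

The predicted label would then drive which counseling flow the Messenger bot starts, and misclassified exchanges become new training examples.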

Our Challenges Building safe online environments requires a piecemeal, methodical approach. It is easy to get lost in the scale of the challenge, so building specific tools for specific sub-issues of the larger problem, e.g. “the proliferation of hate speech on Facebook platforms,” requires focusing on the needs of those affected. We first tried to tackle large social issues head-on, but a myriad of privacy issues made it impossible to address every challenge at once. We began by trying to address privacy concerns and ran into a number of problems; we then tried to address individuals espousing hate speech, but could not identify users hiding behind private posts. So we designed a system that addresses a concrete need through the platform's technical capabilities and that can be mainstreamed and supported by Facebook Messenger.

Accomplishments Our model uses automation and machine learning to empower individual users to cope when they are bullied, confronted with hate speech, or threatened online. This enhances the ability of vulnerable Facebook community members to respond to hate, bullying, and social traumas, and could likely be scaled up for use during traumatic community events and after emergency notifications are enabled on the platform.

Learnings It was a great opportunity to develop connections and, specifically, to work with communities affected by hate speech and online bullying. It allowed us to bring people with technical skills together with those with community organising capacities. We learned that a team with a diverse set of skills really does address the problem better.

Next for 'Spongebot' The ‘Spongebot’ builds on models of engagement designed by trained professionals and counselors for coping with bullying, hate speech, and social trauma. It will enhance and scale the ability of first responders to conduct outreach and support users dealing with these harms. Our goal is to engage these members of the Facebook community and ultimately scale ‘Spongebot’ to support communities suffering in the wake of public emergencies without access to trained professionals or counselors. By collecting data from users' interactions, we can adjust the machine learning model so it performs better and interacts more accurately with users.
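That feedback loop could be sketched as a small interaction log that feeds periodic retraining. This is an assumed design, not the shipped one: the file path, the `RETRAIN_THRESHOLD` batch size, and the idea of keeping only exchanges the user confirmed as helpful are all hypothetical choices for illustration.

```python
import json
from pathlib import Path

LOG_PATH = Path("interactions.jsonl")  # hypothetical log location
RETRAIN_THRESHOLD = 100  # assumed batch size before refitting

def log_interaction(user_text, bot_label, was_helpful, path=LOG_PATH):
    """Append one anonymised exchange so it can become training data."""
    record = {"text": user_text, "label": bot_label, "helpful": was_helpful}
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def pending_examples(path=LOG_PATH):
    """Load logged exchanges; keep only ones the user confirmed as helpful."""
    if not path.exists():
        return []
    records = [json.loads(line) for line in path.read_text().splitlines()]
    return [(r["text"], r["label"]) for r in records if r["helpful"]]

def maybe_retrain(train_fn, path=LOG_PATH):
    """Refit the model once enough confirmed examples have accumulated."""
    examples = pending_examples(path)
    if len(examples) >= RETRAIN_THRESHOLD:
        return train_fn(examples)
    return None  # not enough new data yet
```

Gating on user-confirmed labels keeps noisy exchanges out of the training set, at the cost of slower data accumulation.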
