Inspiration

We wanted to build a tool to help combat cyberbullying in online communities such as Discord servers.

What it does

Toxicity Bot is a Discord bot that estimates how toxic a user is on a server from their past messages and generates a word cloud of their most frequent words, along with their overall sentiment. We also added some fun extras, such as using machine learning to predict a user's MBTI personality type from their chat history.
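As a rough sketch of the word-frequency step that feeds the word cloud (the function name and the tiny stopword set here are illustrative; the actual bot's implementation may differ and would likely use NLTK's full stopword list):

```python
from collections import Counter

# Small illustrative stopword set; a real implementation would use
# NLTK's stopwords corpus instead.
STOPWORDS = {"the", "a", "an", "and", "is", "it", "to", "of", "i", "you"}

def top_words(messages, n=10):
    """Count a user's most frequent words across their message history."""
    words = (
        word
        for msg in messages
        for word in msg.lower().split()
        if word.isalpha() and word not in STOPWORDS
    )
    return Counter(words).most_common(n)

# The word-cloud generator would then size each word by its count.
print(top_words(["this game is great", "great game great community"]))
```

A word-cloud library can take these (word, count) pairs directly and scale each word's font size by its frequency.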

How we built it

We built it in Python using discord.py, NLTK, the Perspective API, text2emotion, and scikit-learn.
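To illustrate one piece of this pipeline: the Perspective API returns a toxicity score between 0.0 and 1.0 per message, and those per-message scores then need to be rolled up into a user-level summary. The helper below is a hypothetical sketch of that aggregation step only (the function name, the threshold, and the summary fields are our assumptions; the network call to the API is omitted):

```python
import statistics

def user_toxicity(scores, threshold=0.7):
    """Aggregate per-message toxicity scores (floats in [0.0, 1.0],
    e.g. as returned by the Perspective API) into a user-level summary.

    `scores` is assumed to be already fetched; the API request itself
    is not shown here.
    """
    if not scores:
        return {"mean": 0.0, "toxic_ratio": 0.0}
    return {
        # Average toxicity across the user's message history.
        "mean": statistics.mean(scores),
        # Fraction of messages scored at or above the threshold.
        "toxic_ratio": sum(s >= threshold for s in scores) / len(scores),
    }

print(user_toxicity([0.1, 0.9, 0.8, 0.2]))
```

Reporting both a mean and a flagged-message ratio guards against one outlier message dominating the picture of a user's behaviour.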

Challenges we ran into

We faced many challenges from the start. We found that the toxicity-detection APIs alone were not very accurate, so we experimented with several different ones. We also had trouble running the bot simultaneously across our machines: each of us was running a different version of the script on a different device, yet only one bot instance could be live on the server at a time.

Accomplishments that we're proud of

We're proud of sticking with it till the end :))

What we learned

We learnt the importance of planning and organising. We learnt more about the Discord bot API and how to troubleshoot installation issues. We also learnt that the current APIs for detecting toxic speech are far from perfect and have lots of room for improvement.

What's next for Toxicity Bot

We plan to improve our models with more training data and to expand to more messaging platforms.

Built With

python, discord.py, nltk, perspective-api, text2emotion, scikit-learn