What it does

Detects toxic language in Discord servers and filters it out, using natural language processing from Cohere.

How we built it

We used Cohere's NLP API to classify messages for toxicity, and Discord's bot API to monitor servers and filter flagged messages.
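At a high level, the bot passes each new message to the toxicity classifier and removes the message when the predicted label crosses a confidence threshold. Below is a minimal sketch of that decision logic with the Cohere call stubbed out; the label names, threshold value, and `classify` stand-in are illustrative assumptions, not our exact implementation:

```python
# Sketch of the bot's moderation decision. The Cohere classification call is
# replaced by a stand-in `classify` function; the label names ("Toxic"/
# "Benign") and the 0.8 confidence threshold are illustrative assumptions.

TOXIC_LABEL = "Toxic"
CONFIDENCE_THRESHOLD = 0.8

def should_filter(label: str, confidence: float,
                  threshold: float = CONFIDENCE_THRESHOLD) -> bool:
    """Return True when the classifier is confident the message is toxic."""
    return label == TOXIC_LABEL and confidence >= threshold

def moderate(message: str, classify) -> bool:
    """Classify a message and decide whether the bot should delete it.

    `classify` stands in for the real NLP call; it should return a
    (label, confidence) pair for the given text.
    """
    label, confidence = classify(message)
    return should_filter(label, confidence)

# Toy keyword-based stand-in for the NLP model, for demonstration only:
def fake_classify(text: str):
    toxic = any(word in text.lower() for word in ("idiot", "stupid"))
    return (TOXIC_LABEL, 0.95) if toxic else ("Benign", 0.97)

print(moderate("you're an idiot", fake_classify))   # True  -> delete
print(moderate("good game everyone", fake_classify))  # False -> keep
```

In the actual bot, `classify` would call Cohere's classification endpoint and the `True`/`False` result would map to deleting or keeping the Discord message.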

Challenges we ran into

When testing the initial version of our bot, we realized there were several toxic phrases it wasn't detecting. We solved this problem to an extent by retraining the model on an additional dataset we found online, which contained tweets labeled with a range of toxicity values. Even after deploying the retrained model, the bot wasn't picking up every toxic message.
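The retraining step amounted to folding the tweet dataset into our existing labeled examples. A rough sketch of that data preparation, assuming each tweet carries a toxicity score in [0, 1] that we binarize into labels (the 0.5 cut-off and label names are illustrative assumptions):

```python
# Sketch of combining two training datasets, assuming the tweet dataset
# scores each tweet with a toxicity value in [0, 1]. The 0.5 cut-off used
# to turn scores into labels is an illustrative assumption.

def binarize(tweets, cutoff=0.5):
    """Map (text, toxicity_score) pairs to (text, label) pairs."""
    return [(text, "Toxic" if score >= cutoff else "Benign")
            for text, score in tweets]

def merge_examples(original, extra):
    """Combine two labeled datasets, dropping duplicate texts."""
    seen, merged = set(), []
    for text, label in original + extra:
        if text not in seen:
            seen.add(text)
            merged.append((text, label))
    return merged

original = [("you are awful", "Toxic"), ("nice play", "Benign")]
tweets = [("worst take ever", 0.9), ("love this thread", 0.1)]
combined = merge_examples(original, binarize(tweets))
```

The combined list of labeled examples then serves as the training input for the classifier.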

Accomplishments that we're proud of

We're proud that we were able to use Cohere's NLP to create something that makes online spaces safer.

What we learned

We learned that no model is perfect. Even with hundreds of thousands of training examples, a model cannot identify every instance of toxic language.

Built With

Cohere, Discord