Inspiration
Discord is increasingly popular among young people, but many servers are difficult to moderate and often rely on an honor system. Some of these servers devolve into name-calling and bullying, which undermines the very community the server is meant to build. That is where the idea for Safe Space was born.
What it does
Safe Space is split into two parts. The first detects bullying using artificial intelligence and offers a variety of configurable options for how the Discord bot responds when bullying is detected, including (but not limited to) kicking the offender or delivering a warning. The second is a chatbot that helps individuals manage and improve their mental health.
How we built it
For the chatbot, we used a GPT-4 model with a custom prompt. For bullying detection, we trained a custom model on over 8,000 data samples from Kaggle that had not previously been used publicly. The data was first cleaned, and we added custom data points to improve the model's accuracy in detecting bullying. The Discord interface uses the Discord API, and the customization options are built with HTML, CSS, and JavaScript.
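The write-up does not spell out the model architecture, but the "Built With" tags name scikit-learn and logistic regression. A minimal sketch of such a text-classification pipeline, with a tiny illustrative dataset standing in for the real ~8,000-sample corpus (labels and example messages are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; 1 = bullying, 0 = benign.
texts = [
    "you are worthless",
    "nobody likes you",
    "great game tonight",
    "thanks for the help",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def is_bullying(message: str) -> bool:
    """Return True when the classifier flags the message."""
    return bool(model.predict([message])[0])
```

In a bot like this, `is_bullying` would be called on each incoming message, with the configured moderation action taken on a positive result.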
Challenges we ran into
One significant challenge was choosing the right data and cleaning it. We explored a variety of options, but the results fell short of expectations: datasets with over 100,000 and 48,000 data points both proved painfully inaccurate at detecting bullying in servers, often failing to recognize anything but the most overt cases. We also struggled with the customization dashboard, integrating the external site with the Discord bot; that process took many hours.
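The cleaning steps themselves are not described, but chat-message normalization for a Discord corpus typically looks something like the following sketch (the specific rules here are assumptions, not the project's actual pipeline):

```python
import re

def clean_text(message: str) -> str:
    """Normalize a chat message before vectorization (illustrative steps only)."""
    text = message.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"<@!?\d+>", " ", text)      # drop Discord user mentions
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # strip punctuation and emoji
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```

For example, `clean_text("Hey <@12345> check https://x.com NOW!!!")` yields `"hey check now"`, so the classifier sees only the words rather than platform markup.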
Accomplishments that we're proud of
Overall, we think we have built a great bot that could reliably be used in servers. We are proud of the range of options and the configurable interface, and this is one of the first AI models we have trained successfully.
What we learned
We learned a great deal about building Discord bots and working with machine learning models; a surprisingly large share of that work was debugging, data cleaning, and data selection.
What's next for Safe Space
In the future, the bot could be expanded to other platforms, the data powering the ML model could be further improved, and the configurability could be extended to let users choose how selective the bot is in detecting bullying.
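The selectivity feature described above could be as simple as a server-configured probability threshold on the model's output. A hypothetical sketch (the action names and cutoffs are illustrative, not part of the project):

```python
def choose_action(probability: float, threshold: float = 0.8) -> str:
    """Map a model's bullying probability to a moderation action.

    `threshold` is the hypothetical server-configured selectivity knob:
    raising it makes the bot intervene less often.
    """
    if probability >= threshold:
        return "kick"
    if probability >= threshold - 0.2:
        return "warn"
    return "ignore"
```

Under these assumed cutoffs, `choose_action(0.9)` returns `"kick"`, `choose_action(0.7)` returns `"warn"`, and `choose_action(0.3)` returns `"ignore"`.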
Built With
- discordjs
- express.js
- logisticregression
- natural-language-processing
- node.js
- numpy
- python
- scikitlearn