Inspiration

With so much of the world's communication happening online, cyberbullying and online toxicity have become major issues. Most existing cyberbullying-prevention systems rely on hard-coded, predetermined lists of keywords, which creates a serious problem: quite frankly, online moderation systems are not very smart. Our project addresses this with AI-powered cyberbullying prevention, making moderation smarter and helping prevent the self-harm and suicide that online abuse can lead to.

What it does

PAX detects harmful messages using a multilayered AI pipeline and removes them. It DMs a warning and explanation to the instigator, then notifies moderators of the instigator's history of offenses and lets them take further action with the click of a button. Through these multiple layers of protection, harmful messages almost never make it through.
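The escalation flow described above could track repeat offenses with logic along these lines. This is a hedged sketch: the thresholds, action names, and function are our own illustrations, not PAX's actual implementation, and the Discord-side delivery (DMs, moderator pings) is omitted.

```python
from collections import Counter

# Hypothetical escalation rules: the thresholds and action names below
# are illustrative assumptions, not PAX's real configuration.
offense_counts: Counter = Counter()

def handle_offense(user_id: int) -> str:
    """Record one offense and choose the moderation action to take."""
    offense_counts[user_id] += 1
    count = offense_counts[user_id]
    if count == 1:
        return "dm_warning"         # first offense: DM a warning/explanation
    if count <= 3:
        return "notify_moderators"  # repeat offense: alert the moderators
    return "recommend_ban"          # persistent abuse: suggest stronger action

print(handle_offense(42))  # dm_warning
print(handle_offense(42))  # notify_moderators
```

In a real bot, the returned action would drive the PyCord side: deleting the message, DMing the user, and posting a moderator alert with the user's offense history.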

How we built it

The first layer uses VADER sentiment analysis, the second layer uses the Perspective API, and the final layer uses the Gemini API. Discord functionality is handled with the PyCord package.
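The layering idea, cheap checks first, with the expensive LLM call only breaking ties, can be sketched as below. The scorer functions are keyword-based stand-ins (our assumption) for the real VADER, Perspective API, and Gemini calls, which each need their own library or API key; only the tie-breaking structure is the point here.

```python
# Sketch of a three-layer moderation pipeline. Each scorer below is a
# trivial keyword stand-in for the real layer named in its docstring.

def vader_layer(message: str) -> float:
    """Stand-in for a VADER sentiment-derived toxicity score in [0, 1]."""
    return 0.9 if "hate" in message.lower() else 0.1

def perspective_layer(message: str) -> float:
    """Stand-in for a Perspective API TOXICITY score in [0, 1]."""
    return 0.8 if "stupid" in message.lower() else 0.2

def gemini_layer(message: str) -> bool:
    """Stand-in for an LLM yes/no harmfulness judgment."""
    return any(word in message.lower() for word in ("hate", "stupid"))

def is_harmful(message: str, threshold: float = 0.7) -> bool:
    # Run the fast layers first; escalate to the LLM only on disagreement.
    fast_scores = [vader_layer(message), perspective_layer(message)]
    if all(s >= threshold for s in fast_scores):
        return True   # both fast layers agree the message is harmful
    if all(s < threshold for s in fast_scores):
        return False  # both fast layers agree the message is fine
    return gemini_layer(message)  # mixed signals: let the LLM decide

print(is_harmful("I hate you, you're stupid"))  # True
print(is_harmful("have a great day"))           # False
```

In the actual bot, `is_harmful` would run inside PyCord's `on_message` handler, with a deletion and the warning/notification flow triggered on a `True` result.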

Challenges we ran into

We ran into errors with VADER and with the LLM integration. We eventually overcame them, but it took substantial brainstorming and debugging.

Accomplishments that we're proud of

We were able to get a working submission, and we created a product that accurately classifies harmful messages.

What we learned

We learned how Discord bots work and how to create them, along with natural language processing tools such as VADER and Perspective. We also learned how to use Live Share in VS Code.

What's next for PAX

We could add a web dashboard with authentication and improve the algorithm for classifying messages as harmful. We could also add features such as auto-reporting users who repeatedly break the rules.

Built With

Python, PyCord, VADER, Perspective API, Gemini API
