Inspiration

A grade 8 girl experienced sexual harassment online while playing multiplayer games. Sadly, this is the story of many MMO gamers: around 80% of MMO gamers experience cyberbullying, with women, LGBTQ+ gamers, and other minorities targeted more frequently. It is troubling that this kind of harassment still happens so often in a society capable of building quantum computers and traveling to the moon. Making everyone feel comfortable while gaming is the goal we strive to achieve.

What it does

CyberSafe analyzes voice and text input and censors potentially harmful messages. At its core is a deep learning text classifier that determines whether something a player says is bullying or otherwise problematic. CyberSafe analyzes input as it arrives, using a model trained on labeled datasets to give users immediate feedback.
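The censoring step can be thought of as thresholding the classifier's score. Below is a minimal sketch of that idea; the `censor` function name, the `classify` callback, and the 0.5 threshold are our illustrative assumptions, not the exact interface of the prototype:

```python
def censor(message: str, classify, threshold: float = 0.5) -> str:
    """Replace a message with asterisks if the classifier flags it.

    `classify` is assumed to map a string to a probability in [0, 1]
    that the message is problematic.
    """
    if classify(message) >= threshold:
        # Mask the flagged message instead of delivering it verbatim.
        return "*" * len(message)
    return message
```

In the full system, `classify` would wrap a call to the trained model's prediction method.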

How we built it

We split our project into two parts, the prototype and the pitch, dividing subtasks between ourselves and putting them together at the end. For the prototype, we built our algorithm using TensorFlow. The algorithm uses an LSTM model to classify whether a text is “problematic” or not. The dataset contains around 27,000 texts in total: 13,500 problematic and 13,500 unproblematic. We defined problematic texts as cyberbullying, the use of slurs, and hate speech. In the end, the model reached 86% accuracy. For the pitch, we used a variety of sources to analyze the problem and find the niche in which we developed our product.
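A binary LSTM text classifier of this kind can be sketched in Keras as follows. The vocabulary size, embedding dimension, and layer widths here are illustrative assumptions, not the prototype's actual hyperparameters:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed preprocessing: texts are tokenized into integer IDs
# drawn from a fixed vocabulary (e.g. via TextVectorization).
VOCAB_SIZE = 10_000

model = tf.keras.Sequential([
    # Map token IDs to dense vectors.
    layers.Embedding(VOCAB_SIZE, 64),
    # LSTM reads the token sequence and produces a single summary vector.
    layers.LSTM(64),
    # Sigmoid output: probability the text is "problematic".
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```

Training would then call `model.fit` on the ~27,000 labeled texts, with the sigmoid output thresholded to produce the problematic/unproblematic label.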

Challenges we ran into

One challenge we faced was balancing our other commitments with the time this project needed. We overcame this by creating a schedule before starting our tasks, mapping out when people were available and when they had other commitments. We delegated tasks, splitting the work into individual portions, and dedicated time to group collaboration. It was a small challenge, but one we overcame as a group, and we think the experience will help us build products more efficiently in the future.

Accomplishments that we're proud of

Finishing the machine learning algorithm was definitely a challenge, but the accomplishment we are proudest of is the thorough research we did to analyze the issue and its solutions. Cyberbullying is a complex issue, especially in gaming, where the line between cyberbullying and ordinary verbal expression is fine in high-intensity environments. Cyberbullying in gaming has also been a long-term problem, and current solutions do not efficiently address it. We're proud of finding a solution that is operational and has the potential to greatly increase safety in the gaming environment.

What we learned

This project was a chance to research a current issue, work on our project management skills, and improve our skills in machine learning. Through this hackathon, we learned a lot about the extent of harassment and cyberbullying between users playing video games. The statistics painted an alarming picture and illustrated how much work still needs to be done. The complexity of the problem gave us an opportunity to work on our research, time management, and collaboration skills. These are lifelong skills, and we appreciated being able to put them into practice, improving ourselves in the process. Finally, this project encouraged us to further develop and apply our knowledge of machine learning and AI. We learned from one another as well as from outside sources, expanding our repertoires.

What's next for CyberSafe

We plan on further training the AI and integrating our software into a usable website to increase availability and reach.

Thank you for this opportunity; we really enjoyed PantherHack 2022.

Sincerely, The CyberSafe Team
