Background

Today, the internet can be a dangerous place for online communication. Scammers, child predators, and cyberbullies populate nearly every chatroom.

Moreover, the data shows that the number of these malicious users has only risen in recent years!

Worst of all, most online communication platforms simply don’t have the financial or technical resources to implement protections against scammers, bullies, and predators.

Our Solution

That’s why we created CyberSafe, a RESTful API for chatrooms that uses modern NLP to detect and combat malicious messages!

Architecture

For companies looking to use our service, the architecture is simple: they send chat messages to our API, which runs each one through a custom model designed to detect spam, bullying, and predatory activity in a single pass!
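
To make the flow concrete, here is a minimal sketch of how a chatroom backend might call the service. The endpoint URL, request fields, and response shape are assumptions for illustration only; they are not documented in this write-up.

```python
# Minimal sketch of a chatroom backend calling the CyberSafe API.
# The URL, payload fields, and response format below are hypothetical.
import requests

API_URL = "https://api.cybersafe.example/v1/classify"  # placeholder endpoint


def check_message(user_id: str, text: str) -> dict:
    """Send a single chat message to the moderation API and return its verdict."""
    payload = {"user_id": user_id, "message": text}
    response = requests.post(API_URL, json=payload, timeout=5)
    response.raise_for_status()
    # Assumed response shape: {"spam": 0.01, "bullying": 0.02, "predatory": 0.97}
    return response.json()


if __name__ == "__main__":
    verdict = check_message("user_42", "hey, how old are you?")
    print(verdict)
```

A chatroom would typically call this once per incoming message and hide or flag anything the model scores above a chosen threshold.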

Challenges we ran into

Gathering training data for predatory messages was tough. We had to scour websites for usable data and build custom web scrapers for each of them. In the end, we gathered 4,800+ messages to train our model on!
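
For a sense of what those scrapers looked like, here is a rough, generic sketch. The URL and CSS selector are placeholders; the real sources and parsing logic were specific to each site we scraped.

```python
# Generic sketch of a scraper for collecting public chat transcripts.
# The URL and selector are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup


def scrape_messages(url: str, selector: str = "p.message") -> list[str]:
    """Download a page and pull out the text of elements matching `selector`."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)]


if __name__ == "__main__":
    messages = scrape_messages("https://example.com/chat-archive")
    print(f"Collected {len(messages)} messages")
```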

Accomplishments that we're proud of

The idea itself is highly monetizable, and there's clear demand for it. Minecraft servers, multiplayer children's games, chatrooms, and similar platforms all face the threat of malicious messages and want it gone. At the end of 24 hours, we're proud to say that CyberSafe has become an easy-to-use service for detecting harmful users and making cyberspace a better place!

What we learned

Throughout this journey, we've deepened our understanding of how NLP models can be taken to a production level and served over sockets. That was definitely the coolest part!
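
The sketch below shows the general socket-serving pattern we experimented with, stripped down to its essentials. The `classify()` stub is a placeholder standing in for the real trained model, which isn't shown here.

```python
# Stripped-down sketch of serving a classifier over a plain TCP socket.
# classify() is a keyword stub standing in for the actual NLP model.
import socket


def classify(message: str) -> str:
    """Placeholder for the trained model; flags a couple of obvious keywords."""
    return "flagged" if any(w in message.lower() for w in ("scam", "asl")) else "ok"


def serve(host: str = "127.0.0.1", port: int = 9000) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        print(f"Listening on {host}:{port}")
        while True:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(4096).decode("utf-8", errors="ignore")
                conn.sendall(classify(data).encode("utf-8"))


if __name__ == "__main__":
    serve()
```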

What's next for CyberSafe

We really want to improve our model so that it analyzes entire conversations rather than single messages. That would reduce false positives and could push accuracy even higher than its current 95%!
