As information and communication have become ever more widespread in the 21st century, so have the misinformation and malicious content that form the dark side of social media. Countless conspiracy theories and false claims pervade public discourse, especially in the wake of horrific tragedies.

Case in point: after the Parkland school shooting, the number-one trending video on YouTube claimed the entire incident had been a hoax. Today, when we checked the comments section of a CNN video about Sandy Hook, this was the first comment:

“CNN is a joke. Nothing Truthful comes from this source. One would need to see who owns this and the agenda they push with lies they speak. Nothing you post is accurate. Sandy Hook was fake. No one died, and you can not prove that they did. This has been FULLY deconstructed, and those with an uncalcified pineal gland see this truth.”

And moderators have little power to solve this issue. For good reason, they are limited to viewing only four hours of disturbing content per day to protect their own mental health, but in the long run this means internet trolls can get away with almost anything.

That's where Paradigm comes in.

Paradigm is a service that intelligently analyzes social media comments and posts to flag and moderate harmful content. It learns from past examples of both benign and malicious behavior, then uses them to build a model that can judge new, previously unseen comments. The possibilities are vast, and to us Paradigm represents a first step toward eliminating the widespread prevalence of harmful misinformation. Paradigm not only symbolizes a shift in tech and public safety; it is also a way to ensure our digital world remains the place we want it to be.
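To make the idea concrete, here is a minimal sketch of the kind of pipeline described above: train on labeled examples of benign and harmful comments, then score previously unseen text. The library choice (scikit-learn), the tiny toy dataset, and the `flag` helper are illustrative assumptions, not Paradigm's actual stack.

```python
# Sketch of a comment classifier: learn from labeled examples,
# then flag previously unseen comments. The dataset and threshold
# below are toy placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: label 1 = harmful/misinformation, 0 = benign.
comments = [
    "great reporting, thanks for covering this",
    "interesting video, I learned a lot",
    "this tragedy was staged, nobody actually died",
    "wake up, the whole event was a hoax by the media",
]
labels = [0, 0, 1, 1]

# TF-IDF features fed into a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def flag(comment: str, threshold: float = 0.5) -> bool:
    """Return True if the comment's harmful-probability exceeds the threshold."""
    proba_harmful = float(model.predict_proba([comment])[0][1])
    return proba_harmful >= threshold
```

In a real deployment the training set would contain many thousands of moderated examples, and the threshold would be tuned to balance false positives (benign comments wrongly removed) against false negatives (harmful comments that slip through).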
