Inspiration

The Netflix original documentary The Social Dilemma speaks to the growing polarization driven by social media, and recent events only affirm that this is the case. The core issue is that social networks are designed to show you content that you like, that agrees with you, and that's on your side. On the internet, people refuse to listen to anyone who deviates even slightly from their political stance because they have become so used to always being agreed with by their Facebook feed. This is how radicalization occurs, how conspiracies spread, and how violence is incited.

We are becoming more divided, more irrational, and more polarized.

Preventing radicalization comes down to diversifying your media diet and being aware of what kind of media you are fed. Websites like AllSides already do a great job at this: they show you news articles written from the right, the left, and the center so you can get the full picture on an issue. But not all of us will actively check that site. In fact, social media outlets like Facebook are where much of the fake news spreads. My goal was to make this easier by creating a Google Chrome extension that can passively check (using ML) where on the political spectrum a social media post lies and how likely it is to be fake news, and, where relevant, direct users to articles written from other perspectives. This way, users can stay aware of their own biases and become more open-minded. As all hackathons go, this didn't go perfectly to plan, though.

What it does

PolarNize[^1] currently takes in user-provided text to be analyzed for "toxicity". A TensorFlow.js machine learning model then analyzes it and computes the level of toxicity across several categories, such as the amount of profanity, tone, and bias. Some of the test cases were quite interesting, to say the least. The extension then computes an overall toxicity score to present to the user, along with the per-category breakdown behind it. The goal was to have the extension scrape text off social media posts and news articles automatically, but time was too limited for that functionality. The idea is that by analyzing the toxicity of an article or post before the user reads it, the extension can warn them about explicit, hateful, or outright false content on the page.
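The "overall score" step could be sketched in plain JavaScript along these lines; the category names, probability values, and the simple averaging scheme here are all hypothetical stand-ins, not the extension's actual logic:

```javascript
// Hypothetical per-category results, shaped like the probabilities a
// toxicity classifier might return (one value per category, 0 to 1).
const categories = [
  { label: "profanity", probability: 0.05 },
  { label: "insult", probability: 0.62 },
  { label: "obscene", probability: 0.10 },
  { label: "threat", probability: 0.02 },
  { label: "toxicity", probability: 0.55 },
];

// One simple aggregation: average the category probabilities and
// scale to a 0-100 score for display.
function overallToxicity(cats) {
  const sum = cats.reduce((acc, c) => acc + c.probability, 0);
  return Math.round((sum / cats.length) * 100);
}

console.log(overallToxicity(categories)); // prints 27
```

A weighted average (e.g., counting threats more heavily than profanity) would be a natural refinement over this uniform scheme.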

How I built it

I used JavaScript and HTML to create the Chrome extension, and integrated a pre-trained ML model from the TensorFlow.js libraries that detects the level of "toxicity" within a body of text. Learning how to combine Chrome extensions with ML capabilities took a surprisingly long time.
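For the planned automatic scraping, a Chrome extension would typically declare a content script in its manifest so it can read post text directly off the page. A minimal Manifest V3 sketch, where the file name `content.js` and the match patterns are illustrative assumptions rather than the project's actual configuration:

```json
{
  "manifest_version": 3,
  "name": "PolarNize",
  "version": "0.1.0",
  "description": "Passively analyzes social media posts for toxicity.",
  "content_scripts": [
    {
      "matches": ["https://www.facebook.com/*", "https://twitter.com/*"],
      "js": ["content.js"]
    }
  ]
}
```

The content script listed under `js` runs in the context of matching pages, which is what would let the extension pull post text into the classifier without the user pasting anything.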

Challenges I ran into + valuable lessons

This was my first time using machine learning in a project, although I had been exposed to the theory before. I also learned more about web dev, which is definitely not my strongest suit but something I look forward to learning more about!

Accomplishments that I'm proud of

This is the first hackathon I went solo for, and I'm just proud to have gotten out a working product. Even though my final product was a step down from my original vision, I'm still walking away from HTN with new knowledge of web dev and ML, which is pretty cool.

What's next for PolarNize

This happens to be an issue I am passionate about, and I would personally use a chrome extension that had all the capabilities I dreamed of at the beginning of the hackathon. With more time, I could see this technology being essential to preserve individual thought. PolarNize definitely has room to grow and leverage ML to unite humanity and put us back on track.

[^1]: Not sure if I had to explain this, but "nize" happens to be a word with connotations of removing something (it's also Toronto slang), hence the name.
