The spread of fake or sensationalized news has long had serious consequences, from the polarization of society to the declaration of wars. Today, easy access to information means a flood of readily available but false or needlessly sensationalized stories, which has fueled the creation of echo chambers. Our extension helps people look for red flags of sensationalized news, such as overly charged language, to empower them in their information-gathering process.
What it does
Our hack is a Chrome extension: when it's activated, users can click a toggle to check for "charged vocabulary" (more categories to come), and language that may be emotionally charged or biased is highlighted on the page.
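As a rough sketch of what the highlighting step could look like (the function name, CSS class, and term list here are illustrative, not our actual extension code), a content script can wrap each flagged word in a mark tag:

```javascript
// Hypothetical sketch: wrap whole-word matches of charged terms in a <mark>
// tag so CSS can highlight them. Not the extension's real implementation.
function highlightCharged(html, terms) {
  return terms.reduce((out, term) => {
    // Escape regex metacharacters in the term, then match whole words only.
    const escaped = term.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    const pattern = new RegExp(`\\b${escaped}\\b`, "gi");
    return out.replace(pattern, (m) => `<mark class="skad-charged">${m}</mark>`);
  }, html);
}

// A content script could apply this to text nodes when the toggle is on, e.g.:
//   p.innerHTML = highlightCharged(p.innerHTML, chargedTerms);
console.log(highlightCharged("A shocking betrayal rocked the capital.", ["shocking", "betrayal"]));
```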
How we built it
We used NodeJS, AWS's Comprehend service (for features like detectSentiment and detectKeyPhrases), and a few JS libraries like Browserify.
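To give a feel for how the Comprehend piece fits in, here's a hedged sketch: the helper names and the 0.8 threshold are our illustrative choices, and the actual SDK call (which needs AWS credentials) is shown only in a comment.

```javascript
// Build the request params that Comprehend's detectSentiment expects.
function buildSentimentParams(text) {
  return { Text: text, LanguageCode: "en" };
}

// Decide whether a SentimentScore response looks emotionally charged:
// here we flag text whose Positive or Negative score strongly dominates.
// The 0.8 cutoff is illustrative, not a tuned value.
function isCharged(sentimentScore, threshold = 0.8) {
  return sentimentScore.Negative >= threshold || sentimentScore.Positive >= threshold;
}

// With the AWS SDK for JavaScript, this would be wired up roughly like:
//   const comprehend = new AWS.Comprehend({ region: "us-east-1" });
//   const res = await comprehend.detectSentiment(buildSentimentParams(text)).promise();
//   if (isCharged(res.SentimentScore)) { /* highlight this passage */ }
```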
Challenges we ran into
Accomplishments that we're proud of
So much learning! See below:
What we learned
We learned a lot about NodeJS, as none of us had worked with it before. Actually, none of us had ever developed a Chrome extension or worked with AWS, either - all of the technology we were using was brand new to us, so we're psyched we were able to get as far as we did with this hack!
What's next for SKAD
First of all, actually getting it fully functional. Right now it's not 100% connected to AWS, and there's a bunch of text processing we still need to code up - we know what we're going to do; it's just a matter of hooking it up to the AWS functionality first. We'd also like to add other categories to look for: highlighting mentions of currently trending topics, notifications about whether other news sites have reported similar stories, and possibly more advanced NLP models to better detect bias, since AWS's models can't be retrained to focus specifically on news sources.
Most importantly, we want each highlighted term or notification to link to a brief description of why the detected bias could be a problem. This extension isn't about giving a binary "it's biased"/"it's not biased" answer - it's about helping people identify possible biases and determine for themselves whether the source is trustworthy. If users get a better understanding of why charged language can be harmful, they'll be better at recognizing it in the future.