About the Project

The constant flood of sensationalism in the news wasn’t just frustrating — it was dangerous. I noticed how headlines were increasingly weaponized: not to inform, but to provoke, mislead, and manipulate. That was the starting point.

InfoShield was built as a direct response to that: a browser extension that doesn't just detect emotionally manipulative language, but flags it and backs each flag with a credibility score, so users can see the bias behind the words, not just the words themselves.

I built this using vanilla JavaScript and Chrome Extension APIs. I trained a simple keyword-based classifier backed by curated bias and sentiment datasets. The UI is minimal and purpose-driven — it's not meant to be flashy, just brutally clear.
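To give a sense of the approach, here is a minimal sketch of what a keyword-based classifier like this can look like. The word list, weights, and scoring formula below are illustrative assumptions, not InfoShield's actual datasets:

```javascript
// Illustrative weighted keyword list (placeholder entries, not the
// project's curated bias/sentiment data).
const LOADED_TERMS = {
  shocking: 0.9,
  destroys: 0.8,
  slams: 0.7,
  outrage: 0.8,
  reportedly: 0.2,
};

// Score a headline: collect flagged terms and average their weights.
// Returns a score in [0, 1]; higher means more emotionally charged.
function classifyHeadline(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  const matches = words.filter((w) => w in LOADED_TERMS);
  const score = matches.length
    ? matches.reduce((sum, w) => sum + LOADED_TERMS[w], 0) / matches.length
    : 0;
  return { matches, score };
}
```

A lookup-and-average scheme like this keeps the detector fast enough to run on every headline in a page without noticeable lag.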

What I Learned

Building this taught me how easily perception can be shaped with just a few emotionally charged words. On the technical side, I dove deep into browser extension architecture, NLP basics, and DOM parsing.

Challenges

Not all patterns of bias are easy to catch. Building a lightweight but meaningful detector meant balancing simplicity against impact. Dynamically rendered pages (like Al Jazeera's) also surfaced rendering bugs that I had to resolve with smarter DOM scanning and more careful regex handling.
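The dynamic-content problem above is commonly handled with a regex pass over the text plus a MutationObserver that rescans content rendered after page load. The sketch below shows that shape; the word list, markers, and function names are assumptions for illustration, not InfoShield's actual code:

```javascript
// Illustrative word list and flag markers (placeholders).
const CHARGED_TERMS = ["shocking", "slams", "destroys", "outrage"];
const CHARGED_RE = new RegExp(`\\b(${CHARGED_TERMS.join("|")})\\b`, "gi");

// Pure helper: wrap charged terms so the UI layer can style them.
function flagChargedText(text) {
  return text.replace(CHARGED_RE, (m) => `[FLAG]${m}[/FLAG]`);
}

// Browser-only part: rescan elements that dynamic pages add after load.
// Called from a content script with root = document.body.
function watchForDynamicContent(root, onNewElement) {
  const observer = new MutationObserver((mutations) => {
    for (const { addedNodes } of mutations) {
      for (const node of addedNodes) {
        if (node.nodeType === 1 /* ELEMENT_NODE */) onNewElement(node);
      }
    }
  });
  observer.observe(root, { childList: true, subtree: true });
  return observer;
}
```

Observing `childList` with `subtree: true` catches late-arriving article bodies without polling, which keeps the extension cheap on long-lived pages.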

In short: I wanted to give users a tool to reclaim control over how they interpret information — and keep the noise out.

Built With

JavaScript (vanilla), Chrome Extension APIs
