Inspiration

In today’s world, we’re surrounded by news articles, posts, and reports that often contain emotional, political, and exaggerated bias. These subtle influences can shape the way we think without us realizing it. We wanted to create a tool that helps readers cut through the noise, recognize persuasion tactics, and see text more clearly — hence the name Clarity.

What it does

Clarity is a browser extension that:

- Detects emotional, sensational, political, subjective, and exaggerated bias in text.
- Highlights biased phrases directly on the page.
- Displays an overall bias score, with a breakdown of how it was calculated.
- Uses a color-coded intensity scale (Low / Medium / High) so users can quickly understand the bias level.

For example, if a news article overuses emotionally loaded words, Clarity highlights them and shows a score.
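The Low / Medium / High scale can be sketched as a simple score-to-band mapping. The numeric cutoffs and colors below are illustrative assumptions, not Clarity's actual values:

```javascript
// Map a 0-100 bias score to an intensity band and a display color.
// The cutoffs (34 / 67) and colors are illustrative placeholders;
// Clarity's real thresholds may differ.
function intensityBand(score) {
  if (score < 34) return { label: "Low", color: "#4caf50" };    // green
  if (score < 67) return { label: "Medium", color: "#ff9800" }; // orange
  return { label: "High", color: "#f44336" };                   // red
}

console.log(intensityBand(72).label); // "High"
```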

How we built it

- Content script: scans the webpage text, applies a bias keyword dictionary with dynamic weighting, and computes a category-wise score.
- Popup UI: displays results with gradient backgrounds, a collapsible breakdown panel, and interactive buttons (Analyze, Clear, Breakdown).
- Workflow:
  1. The user clicks Analyze.
  2. The popup sends a message to the content script.
  3. The content script scans the text and calculates scores.
  4. The scores are sent back to the popup.
  5. The popup displays the results with highlights.
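The content-script scoring step can be sketched roughly as follows. The keyword dictionary, weights, and normalization here are made-up placeholders; Clarity's real dictionary is larger and its weighting dynamic:

```javascript
// Illustrative per-category keyword weights; Clarity's real dictionary
// is larger and its weights are adjusted dynamically.
const BIAS_KEYWORDS = {
  emotional: { outrageous: 3, heartbreaking: 2, shocking: 2 },
  exaggerated: { always: 1, never: 1, unprecedented: 2 },
};

// Sum weighted keyword hits per category and normalize by word count.
function scoreText(text) {
  const words = text.toLowerCase().match(/\b[\w']+\b/g) || [];
  const scores = {};
  for (const [category, dict] of Object.entries(BIAS_KEYWORDS)) {
    let total = 0;
    for (const word of words) {
      if (Object.prototype.hasOwnProperty.call(dict, word)) total += dict[word];
    }
    // Scale to roughly 0-100; the multiplier is an arbitrary choice here.
    scores[category] = Math.min(
      100,
      Math.round((total / Math.max(words.length, 1)) * 500)
    );
  }
  return scores;
}
```

In the extension itself, the popup would trigger this via `chrome.tabs.sendMessage` and the content script would reply from a `chrome.runtime.onMessage` listener.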

Challenges we ran into

- Designing an algorithm that balances simplicity with accuracy.
- Making the UI intuitive while still showing detailed breakdowns.
- Avoiding false positives when scanning text: common words can look biased out of context.
- Getting everything to work smoothly in a Chrome extension environment.
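One common mitigation for the false-positive problem is word-boundary phrase matching, so a keyword buried inside an unrelated word or phrase is not flagged. This is a generic sketch, not necessarily how Clarity handles it, and the phrases are illustrative:

```javascript
// Build a case-insensitive, word-boundary regex for a phrase so that
// substrings in unrelated contexts are not flagged. Phrases are illustrative.
function phraseMatcher(phrase) {
  const escaped = phrase.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp(`\\b${escaped}\\b`, "gi");
}

console.log("A politically charged debate".match(phraseMatcher("politically charged"))); // matches
console.log("The battery charged overnight".match(phraseMatcher("politically charged"))); // null
```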

Accomplishments that we're proud of

- Built a fully working Chrome extension with real-time text analysis.
- Created a bias intensity scale that’s both visual and easy to understand.
- Implemented a modular design so we can later expand with machine learning models.

What we learned

- How to integrate content scripts with extension popups.
- The importance of UI/UX in data-heavy tools: the extension had to explain complex scoring in seconds.
- Bias detection is a nuanced problem; no system is perfect, but even simple approaches can empower users.

What's next for Clarity

- Expand the keyword database and refine weighting.
- Add multi-language support.
- Introduce user-customizable thresholds so readers can define what “bias” means for them.
- Long-term: integrate machine learning models for smarter, context-aware bias detection.
