Inspiration
We live in a world where everyone accuses the media they disagree with of being biased, but nobody has a good way to prove it. We think about trust and verification constantly: you don't just trust a system, you audit it. We wanted to apply that same thinking to news. People live in echo chambers not because they're bad people, but because they don't have the right tools. Polarity was born from a simple question: what if, instead of one AI telling you whether an article is biased, you had five AI agents, each with a completely different political worldview, debate it?
What it does
Polarity analyzes any news article through five AI agents, each representing a distinct political perspective:
- Far-Left — focuses on systemic inequality and power structures
- Left — emphasizes progressive policy and social fairness
- Center — prioritizes balance, pragmatism, and neutrality
- Right — values personal responsibility and free market principles
- Far-Right — emphasizes national identity and traditional authority
Each agent independently scores the article on a scale from $-10$ (far-left) to $+10$ (far-right). The final bias score is the average:

$$\text{Bias Score} = \frac{1}{5} \sum_{i=1}^{5} s_i$$

where $s_i$ is each agent's score.
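Concretely, the averaging step is just the mean of the five agent scores. A minimal sketch (the scores below are made up for illustration; the real values come from the Gemini agents):

```python
# Illustrative agent scores on the -10 (far-left) to +10 (far-right) scale
agent_scores = {
    "Far-Left": -7,
    "Left": -3,
    "Center": 0,
    "Right": 4,
    "Far-Right": 6,
}

# Final bias score is the simple average across all five agents
bias_score = sum(agent_scores.values()) / len(agent_scores)
print(bias_score)  # 0.0 — the perspectives roughly cancel out here
```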
How we built it
Backend

```python
# Five political agents plus a red-flag/summary pass run concurrently
# (agent calls are staggered internally to respect rate limits)
tasks = [
    evaluate_agent("Far-Left", FAR_LEFT_PROMPT, article_text),
    evaluate_agent("Left", LEFT_PROMPT, article_text),
    evaluate_agent("Center", CENTER_PROMPT, article_text),
    evaluate_agent("Right", RIGHT_PROMPT, article_text),
    evaluate_agent("Far-Right", FAR_RIGHT_PROMPT, article_text),
    extract_red_flags_and_summary(article_text),
]
results = await asyncio.gather(*tasks)
```
- FastAPI — Python backend with async agent orchestration
- Google Gemini API — powers all five political agents
- Supabase — caches results so repeated URLs don't re-call the API
- Google Fact Check API — cross-references articles against known fact checks
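The caching idea is simple: key results by URL so a repeated article skips the expensive agent calls entirely. A minimal sketch using an in-memory dict as a stand-in for Supabase (the real table and column names aren't shown here):

```python
# In-memory stand-in for the Supabase results table, keyed by article URL
cache: dict[str, dict] = {}

def analyze(url: str) -> dict:
    if url in cache:
        return cache[url]          # cache hit: skip the five-agent analysis
    result = {"bias_score": 0.0}   # placeholder for the real agent pipeline
    cache[url] = result
    return result

first = analyze("https://example.com/article")
second = analyze("https://example.com/article")
assert first is second  # the second call never re-runs the agents
```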
Frontend
Single-page HTML/CSS/JS app with a clean editorial design
Challenges we ran into
The biggest challenge was API rate limiting. Firing six concurrent Gemini calls per analysis exhausted our free tier quota quickly. We solved this by staggering agent calls with delays:

```python
delays = {
    'Far-Left': 0,
    'Left': 0.8,
    'Center': 1.6,
    'Right': 2.4,
    'Far-Right': 3.2,
}
await asyncio.sleep(delays.get(agent_name, 0))
```

Getting each agent to return consistent JSON was also harder than expected. Gemini would occasionally wrap responses in markdown code fences, so we built a cleaning layer to strip formatting before parsing.
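The cleaning layer can be sketched roughly like this (the function name and regexes are illustrative, not the exact implementation):

```python
import json
import re

def clean_json_response(raw: str) -> dict:
    """Strip markdown code fences the model sometimes adds, then parse JSON."""
    text = raw.strip()
    # Remove a leading ```json (or bare ```) fence, if present
    text = re.sub(r"^```(?:json)?\s*", "", text)
    # Remove a trailing ``` fence, if present
    text = re.sub(r"\s*```$", "", text)
    return json.loads(text)

print(clean_json_response('```json\n{"score": -3}\n```'))  # {'score': -3}
```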
Accomplishments that we're proud of
We're proud of the multi-agent architecture: using five competing AI perspectives to reach a consensus is genuinely novel. Most bias tools use a single model or a static database. We built something that mirrors how real editorial bias review actually works, with multiple perspectives checking each other. We're also proud that we shipped a working web app in under 24 hours, with a real backend, caching, and API integrations.
What we learned
Prompt engineering is as important as the underlying model. The difference between an agent that returns clean JSON and one that wraps everything in markdown is entirely in how you write the prompt. We also learned a lot about designing systems that are resilient to API failures — if one agent fails, the others should still return results.
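One way to get that resilience with `asyncio.gather` is `return_exceptions=True`, so one failed agent doesn't cancel the others. A hedged sketch with stub agents (the stub names and the simulated "Center" failure are illustrative):

```python
import asyncio

async def evaluate(name: str) -> dict:
    # Stub agent; "Center" simulates a failed API call
    if name == "Center":
        raise RuntimeError("API error")
    return {"agent": name, "score": 0}

async def main() -> list[dict]:
    names = ["Far-Left", "Left", "Center", "Right", "Far-Right"]
    results = await asyncio.gather(
        *(evaluate(n) for n in names),
        return_exceptions=True,  # failures come back as values, not crashes
    )
    # Keep only the agents that succeeded
    return [r for r in results if not isinstance(r, Exception)]

ok = asyncio.run(main())
print(len(ok))  # 4 — the other four agents still returned results
```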
What's next for Polarity
Overlay bias scores directly on news feeds via the Chrome extension, build a source reliability database for known outlets, and make Polarity the standard tool for media literacy education.
Built With
- css
- fastapi
- google-fact-check-api
- google-gemini-api
- html
- javascript
- python
- supabase