Perspectivity – OpenNewsInsightFramework is an open-source, plug-and-play platform that lets any community turn scattered local headlines into a real-time, bias-aware news intelligence dashboard, no matter how low-resource the language. The framework ingests articles and social snippets from Google News Regional and grassroots feeds, normalizes mixed scripts, and pipes the text through a multilingual AI “Bias-Agent” that compares framing against a crowdsourced outlet-tag database. A built-in web studio lets journalists, researchers, and civic volunteers drag-and-drop publishers onto political or ideological sliders, instantly enriching the ground-truth labels the model learns from. The result is a live map of story clusters, sentiment, and ideological spread, with one-click citations so users can verify sources on the spot and a JSON API so NGOs and data scientists can pull the same insights into their own tools. Perspectivity closes the information gap for languages the internet forgot: it gives local citizens a way to see every side of every story and offers fact-checkers and academics the first scalable bias-analytics engine for underserved regions.
Inspiration
When Cyclone Hamoon hit Bangladesh last year, contradictory reports flooded social media faster than any official update. Friends in Nigeria and Nepal said the same thing happens during elections and health scares—their languages simply aren’t on the radar of global fact-checking tools. We realized the problem is universal: the internet’s “long-tail” languages have no bias monitors, no multi-perspective aggregators, and no way for citizens to see who’s shaping the narrative. That pain point sparked Perspectivity – OpenNewsInsightFramework: an open-source kit that any local newsroom, NGO, or civic hacker can spin up to reveal every side of every story, even in the world’s most under-served languages.
What it does
Perspectivity ingests headlines and social snippets from Google News Regional, RSS feeds, and public Telegram channels; normalizes scripts; and streams them through a multilingual Bias-Agent that scores sentiment and ideological slant. A drag-and-drop “Political-Tag Studio” lets volunteers calibrate each outlet’s leanings, which the agent then uses as ground truth. The web dashboard shows:
• real-time bias heat-maps across districts,
• side-by-side columns of left/center/right takes on the same story,
• citation links back to every source,
• a JSON API so researchers can query topic clusters and bias metrics on demand.
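To show how a researcher might consume the JSON API, here is a minimal sketch. The payload shape (`clusters`, `articles`, `bias`, `sentiment` fields) is an illustrative assumption, not the framework's documented schema:

```python
# Sketch: summarizing a hypothetical Perspectivity JSON API payload.
# Field names below are illustrative assumptions, not the real schema.
from collections import Counter

sample_payload = {
    "clusters": [
        {"topic": "cyclone relief", "articles": [
            {"outlet": "Daily A", "bias": "left",   "sentiment": -0.4},
            {"outlet": "Daily B", "bias": "center", "sentiment": -0.1},
            {"outlet": "Daily C", "bias": "right",  "sentiment": 0.2},
        ]},
    ]
}

def bias_spread(cluster):
    """Count how many articles fall into each bias bucket for one story cluster."""
    return Counter(article["bias"] for article in cluster["articles"])

def mean_sentiment(cluster):
    """Average sentiment score across a cluster's articles."""
    scores = [article["sentiment"] for article in cluster["articles"]]
    return sum(scores) / len(scores)

for cluster in sample_payload["clusters"]:
    print(cluster["topic"], dict(bias_spread(cluster)),
          round(mean_sentiment(cluster), 2))
```

A real client would fetch this payload over HTTP and then aggregate it the same way, so downstream tools only depend on the response schema, not on Perspectivity's internals.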
How we built it
We containerized a FastAPI back-end that calls Perplexity’s Sonar Reasoning Pro for up-to-the-minute Bangla (and other low-resource) search with trusted citations. Headlines are pushed into a PGVector store and passed through a lightweight multilingual mini-LM fine-tuned on 2M labeled sentences. The Political-Tag Studio is a Next.js app that writes outlet-bias labels to Postgres, triggering on-the-fly retraining via Celery workers. Finally, a React/Tailwind dashboard streams Server-Sent Events so users see partial results in under two seconds, even on 3G.
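The dashboard's streaming updates ride on the standard Server-Sent Events wire format. A minimal sketch of a frame formatter (the event name and payload here are made-up examples, not Perspectivity's actual event schema):

```python
import json

def sse_frame(data, event=None):
    """Format one Server-Sent Events frame per the WHATWG EventSource spec:
    an optional 'event:' line, one 'data:' line per payload line, and a
    blank-line terminator that tells the browser the frame is complete."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    for payload_line in json.dumps(data).splitlines():
        lines.append(f"data: {payload_line}")
    return "\n".join(lines) + "\n\n"

# Hypothetical partial result pushed to the dashboard as a story cluster updates.
frame = sse_frame({"cluster": "cyclone relief", "bias": "center"},
                  event="partial_result")
print(frame)
```

Because each frame is self-delimiting, the back-end can flush partial results as soon as they exist, which is what keeps perceived latency low on 3G connections.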
Challenges we ran into
• Script chaos: Bangla, Roman-Bangla, and Arabic script often mix in the same feed; we built regex-based transliteration fallbacks.
• Bias ground-truth scarcity: Only 40 Bangla outlets had known political tags, so we crowdsourced labels and built a semi-supervised bootstrap.
• Latency vs. cost: Deep-dive calls to Sonar are pricey; we introduced caching with a two-hour TTL and queued heavy jobs to keep within the hackathon credit limit.
• Neutral design: Visualizing bias without being biased meant endless color-palette debates until we landed on symmetric blue-gray-red scales.
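The two-hour TTL cache can be sketched as a plain in-process dict keyed by query string. This is a minimal illustration with invented names (`cached_search`, `fetch`); a production deployment would more likely use Redis or similar:

```python
import time

TTL_SECONDS = 2 * 60 * 60  # two-hour time-to-live, as described above
_cache = {}  # query -> (expiry_timestamp, cached_result)

def cached_search(query, fetch, now=time.time):
    """Return a fresh cached result for `query` if one exists; otherwise call
    `fetch(query)` (standing in for an expensive Sonar deep-dive) and cache it.
    `now` is injectable so tests can fake the clock."""
    entry = _cache.get(query)
    current = now()
    if entry is not None and entry[0] > current:
        return entry[1]
    result = fetch(query)
    _cache[query] = (current + TTL_SECONDS, result)
    return result
```

Within the TTL window, repeated queries never reach the paid API, which is how a fixed credit budget stretches across a hackathon's worth of traffic.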
Accomplishments that we’re proud of
• Streaming a fully localized, citation-backed bias dashboard in under three weeks.
• Achieving a micro-F₁ of 0.82 on headline-level bias detection for Bangla—state-of-the-art for any low-resource language we could find.
• Open-sourcing the Political-Tag Studio so other communities can replicate our pipeline without writing a line of back-end code.
• Onboarding two pilot partners: a Dhaka university media lab and a Nigerian fact-checking NGO, both already running test instances.
What we learned
• Retrieval-augmented generation beats training giant models from scratch, especially when citations are mandatory for trust.
• UI trust cues (showing source logos, hoverable citations) matter as much as ML accuracy; people won’t believe a black-box bias score.
• Community labeling is powerful: 27 volunteers tagged 300 outlets in 48 hours once the UX was friction-free.
• Building for low-bandwidth first (3G phones) forces better architectural decisions that benefit everyone.
What’s next for Perspectivity - OpenNewsInsightFramework
1. Language packs: Extend the pipeline to Hausa, Amharic, and Nepali within six months.
2. User fact-checks: Let readers flag suspect headlines; feed that signal back into bias scores.
3. Disaster-alert API: Push district-level rumor spikes to NGOs in real time.
4. SBIR Phase I proposal: Seek federal R&D funding to formalize our low-resource language models and make Perspectivity the default bias-analytics layer for every overlooked language on the web.
Perspectivity’s mission is simple: if a community can read it, they should also be able to see through it.
