Inspiration
We have all seen it happen. Someone shares an article, it spreads everywhere, and by the time anyone realizes it was false, the damage is already done. Reputations get destroyed overnight. People make real decisions about who to trust, what to believe, and who to support based on content that was completely fabricated. And the scary part is that most people sharing it genuinely think it is real.
That is what pushed us to build Verity. Not just the fact that misinformation exists, but that the long-term damage it causes is permanent and most people have zero tools to protect themselves from it. We went online looking for a proper fake news detector built for Canadians. What we found were websites telling you how to spot fake news, browser extensions that slap on a red or green label with zero explanation, and tools so vague they are honestly not much better than guessing. Nothing rigorous. Nothing that shows its work. Nothing built specifically for Canada using Canadian standards. So we built it ourselves.
What It Does
Verity (from the Latin veritas, meaning "truth") is an AI-powered misinformation detection tool that actually explains itself. You paste any article URL or raw text and Verity scores it across 6 independent criteria derived directly from the Canadian Centre for Cyber Security's official ITSAP.00.300 framework, a real Canadian government document that defines exactly what misinformation, disinformation, and malinformation look like. We took that framework and built a scoring engine on top of it.
- Website Trustworthiness (20%) — domain age, TLD, typosquatting, MBFC reputation
- Sensationalism & Clickbait (20%) — outrage language, ALL CAPS, clickbait patterns
- Fact-Checking & Accuracy (25%) — core claim extracted and verified against reputable Canadian sources
- Author Verifiability (15%) — named byline, journalist credibility, source citations
- Content Quality (10%) — factual vs emotional balance, design integrity
- Threat Classification (10%) — Valid / Misinformation / Malinformation / Disinformation per ITSAP.00.300
$$\text{Verity Score} = (C_1 \times 0.20) + (C_2 \times 0.20) + (C_3 \times 0.25) + (C_4 \times 0.15) + (C_5 \times 0.10) + (C_6 \times 0.10)$$
Verdicts:
- Highly Credible (90-100)
- Likely Credible (72-89)
- Questionable (45-71)
- Likely Misinformation (25-44)
- Misinformation & Disinformation (0-24)
- Undeterminable (opinion or subjective content, flagged as N/A)
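The weighted formula and verdict bands above can be sketched in a few lines of Python. The weights and thresholds come straight from the write-up; the function and key names are our own, not Verity's actual code.

```python
# Weights for the six criteria, as stated in the write-up.
WEIGHTS = {
    "website_trust": 0.20,   # C1: Website Trustworthiness
    "sensationalism": 0.20,  # C2: Sensationalism & Clickbait
    "fact_checking": 0.25,   # C3: Fact-Checking & Accuracy
    "author": 0.15,          # C4: Author Verifiability
    "content_quality": 0.10, # C5: Content Quality
    "threat_class": 0.10,    # C6: Threat Classification
}

def verity_score(criteria: dict) -> float:
    """Weighted sum of the six criterion scores (each 0-100)."""
    return sum(criteria[name] * w for name, w in WEIGHTS.items())

def verdict(score: float) -> str:
    """Map a 0-100 score onto the verdict bands from the write-up."""
    if score >= 90:
        return "Highly Credible"
    if score >= 72:
        return "Likely Credible"
    if score >= 45:
        return "Questionable"
    if score >= 25:
        return "Likely Misinformation"
    return "Misinformation & Disinformation"
```

A perfect 100 on every criterion yields a Verity Score of 100, since the weights sum to 1.0; the Undeterminable verdict is handled separately upstream, before scoring.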
Every verdict includes a full plain-English breakdown of exactly why each criterion passed or failed, plus an ElevenLabs audio readout of the results.
How We Built It
Python and Flask for the backend, vanilla HTML/CSS/JS for the frontend. We designed the entire scoring system ourselves based on the ITSAP.00.300 framework, spending a lot of time making sure every criterion was defensible and explainable — not just a black box number.
We use newspaper4k and BeautifulSoup4 to scrape and extract full article content from any URL, then the Google Gemini API does the deep analysis across every criterion. One of our biggest technical wins was consolidating what would have been 6 separate API calls into a single optimized prompt that covers every AI-dependent criterion simultaneously. We also cache results by content hash, so repeat articles cost zero additional API calls, and all 6 criteria run in parallel via ThreadPoolExecutor so analysis times never stack. Backboard.io handles smart memory so previously analyzed articles come back instantly.
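The caching and parallelism described above can be sketched roughly as follows. This is a minimal illustration, not Verity's actual code: the cache is a plain in-memory dict, and the per-criterion functions are placeholders for whatever each criterion actually does.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# In-memory cache keyed by content hash (illustrative; a real service
# might persist this).
_cache: dict = {}

def content_key(article_text: str) -> str:
    # Hash the article body so identical articles share one analysis.
    return hashlib.sha256(article_text.encode("utf-8")).hexdigest()

def analyze(article_text: str, criterion_fns: dict) -> dict:
    """Run every criterion function on the article, in parallel,
    returning cached results for previously seen content."""
    key = content_key(article_text)
    if key in _cache:  # repeat article: zero additional API calls
        return _cache[key]
    # Run all criteria concurrently so analysis time never stacks.
    with ThreadPoolExecutor(max_workers=len(criterion_fns)) as pool:
        futures = {name: pool.submit(fn, article_text)
                   for name, fn in criterion_fns.items()}
        results = {name: f.result() for name, f in futures.items()}
    _cache[key] = results
    return results
```

Because the criteria are independent, total latency is governed by the slowest criterion rather than the sum of all six.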
Challenges
Getting Gemini to produce consistent, well-calibrated scores across wildly different content types took serious prompt engineering. Web scraping was a constant headache: paywalled content, Cloudflare protection, and JS-rendered pages all needed graceful fallbacks. Trusted-source logic was an edge case we had to think through carefully. Government sites like canada.gc.ca do not use personal bylines, which would normally tank the author score unfairly, so we built context-aware boost logic that scores institutional sources accurately. Keeping scope tight under 36 hours of time pressure was honestly its own challenge.
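The institutional-source boost described above might look something like this. The domain suffixes, score values, and function name are all illustrative assumptions, not Verity's actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical list of institutional domains that publish without
# personal bylines.
INSTITUTIONAL_SUFFIXES = (".gc.ca", ".canada.ca", ".gov")

def author_score(url: str, byline) -> int:
    """Score author verifiability, boosting institutional sources
    that legitimately publish without a personal byline."""
    host = urlparse(url).netloc.lower()
    institutional = host.endswith(INSTITUTIONAL_SUFFIXES)
    if byline:
        return 90   # named byline we can attempt to verify
    if institutional:
        return 85   # no byline, but an institutional publisher
    return 30       # anonymous content on an ordinary site
```

The key design point is that the absence of a byline is only penalized in context: an anonymous blog post and an unsigned government page are not the same signal.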
What We Learned
How to turn a real government policy document into a working technical system. How to get reliable structured output from Gemini at scale. That the most impressive demos are always the ones that work every single time. And that misinformation is a deeply Canadian problem that genuinely deserves a Canadian solution.
Built With
- backboard.io
- beautiful-soup
- elevenlabs
- flask
- gemini
- html/css
- javascript
- newspaper4k
- python