Inspiration
In today’s digital environment, misinformation spreads faster than verification. With the rise of generative AI, it has become increasingly difficult to distinguish between authentic reporting, manipulated content, and fully synthetic media. We were inspired by the idea of creating a digital investigation tool: something that treats online content like a case to be examined rather than information to be passively consumed.
We also wanted to address a practical hackathon constraint: AI-powered verification tools can quickly become expensive due to API usage. That pushed us to think about efficient verification workflows and caching strategies.
What it does
Trust Issues is a browser extension that analyzes the content currently visible on a webpage and generates a credibility report.
When the user clicks Scan This Page, the extension:
- Extracts visible page text
- Identifies core claims
- Verifies claims against trusted news sources
- Performs AI-generated content likelihood analysis
- Detects emotionally manipulative language patterns
- Produces a summarized investigation report
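The scan steps above can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, the claim-extraction heuristic, and the `Finding` structure are assumptions for this example, not our actual implementation (which delegates claim extraction to Gemini and verification to news/search APIs).

```python
import re
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Finding:
    claim: str
    verdict: str                  # e.g. "supported", "unsupported", "unverified"
    source: Optional[str] = None  # trusted source that confirmed/denied the claim

def extract_claims(page_text: str) -> List[str]:
    # Placeholder heuristic: sentences containing numbers or attribution
    # words are treated as candidate claims. The real pipeline uses an
    # LLM for this step.
    sentences = re.split(r"(?<=[.!?])\s+", page_text)
    return [s for s in sentences if re.search(r"\d|according to|said", s)]

def scan_page(page_text: str) -> List[Finding]:
    findings = []
    for claim in extract_claims(page_text):
        # The real pipeline would verify each claim against trusted
        # news/search APIs here; this stub marks everything unverified.
        findings.append(Finding(claim=claim, verdict="unverified"))
    return findings
```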
The extension displays:
- Credibility score
- AI-generation likelihood score
- Manipulation risk score
- Key findings from the scan
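The report the popup renders boils down to three scores plus findings. A minimal sketch of that shape, with illustrative thresholds (our real cutoffs and field names may have differed):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScanReport:
    credibility: int        # 0-100, higher = more credible
    ai_likelihood: int      # 0-100, higher = more likely AI-generated
    manipulation_risk: int  # 0-100, higher = more manipulative language
    key_findings: List[str] # short human-readable bullet points

    def overall_label(self) -> str:
        # Illustrative thresholding for the headline verdict.
        if self.credibility >= 70 and self.manipulation_risk < 40:
            return "likely credible"
        if self.credibility < 40:
            return "suspect"
        return "mixed signals"
```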
How we built it
We built a Chrome extension popup interface using:
- React
- Tailwind CSS
The backend handles verification logic using:
- Python + FastAPI
- Gemini API for reasoning and summarization
- News/search APIs restricted to trusted domains
- Backboard.io for caching and models
Challenges we ran into
True AI-generated content detection is still an unsolved problem. We had to shift from “detection” to likelihood analysis using multiple signals, which was both more honest and more technically achievable.

Verification pipelines can also easily trigger multiple API calls per scan. Designing a caching strategy early using Backboard.io prevented runaway API usage.
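The content-hash caching idea can be sketched in a few lines: hash the (normalized) page text, and only run the expensive analysis on a cache miss. This is a simplified in-memory sketch; the helper names are illustrative, and the real cache lived in Backboard.io rather than a local dict.

```python
import hashlib
from typing import Callable, Dict

_cache: Dict[str, dict] = {}  # content hash -> cached report

def content_key(page_text: str) -> str:
    # Normalize whitespace so trivial reflows of the same article
    # still hit the cache.
    normalized = " ".join(page_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def cached_scan(page_text: str, analyze: Callable[[str], dict]) -> dict:
    key = content_key(page_text)
    if key not in _cache:
        _cache[key] = analyze(page_text)  # expensive API calls, miss only
    return _cache[key]
```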
Connecting the content scripts, popup UI, and backend endpoints required careful coordination between team members to ensure consistent data structures.
Accomplishments that we're proud of
We’re especially proud of:
- Building a complete browser-extension verification workflow
- Implementing content-hash caching to reduce API calls
- Designing a cohesive noir-themed investigation interface
- Creating a structured credibility-analysis pipeline
- Producing clear, explainable scan reports
What we learned
We learned that:
- Caching and system design matter as much as machine learning
- Browser extension architecture requires careful separation of responsibilities
- Misinformation detection is more about verification workflows than AI models alone
What's next for Trust Issues
Future improvements could include:
- Real-time scanning mode
- A larger trusted-source database, with the consulted sources shown to the user
- Reverse image search integration
- Collaborative misinformation reporting by users
- Local lightweight AI-likelihood detection
On today’s internet, a little Trust Issues goes a long way.