Inspiration
Misinformation spreads faster than ever — not necessarily because people want to deceive others, but because content is engineered to trigger emotion, urgency, and reaction. Headlines filled with fear, shock, or outrage push people to share before verifying.
We noticed that most existing tools focus on labeling content as “true” or “false.” While helpful, that approach often feels confrontational, political, or too late in the sharing cycle.
We wanted to shift the focus from judging content to empowering users.
TruthLens was inspired by a simple idea: Instead of telling people what to believe, help them pause and think before they share.
What it does
TruthLens is a web application that analyzes short-form content — such as tweets, headlines, or WhatsApp forwards — and generates a structured Misinformation Risk Report.
When a user pastes text and clicks “Scan,” the app provides:
- A risk score (0–100)
- A Low / Medium / High risk indicator
- Red flags detected in the language (e.g., urgency, emotional bait, vague sourcing)
- A verification checklist with actionable steps
- A neutral "share-safe" rewrite
- A concise one-line summary
Importantly, TruthLens never declares something “true” or “false.” Instead, it evaluates linguistic risk patterns and encourages responsible verification.
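The report described above can be sketched as a typed structure. This is a minimal illustration of the shape such a report might take; the field names, and the score thresholds for the Low / Medium / High indicator, are assumptions for illustration rather than the actual TruthLens schema.

```typescript
// Hypothetical shape of the Misinformation Risk Report described above.
// Field names are illustrative, not the actual TruthLens schema.
interface RiskReport {
  riskScore: number;               // 0–100
  riskLevel: "Low" | "Medium" | "High";
  redFlags: string[];              // e.g. ["urgency", "emotional bait"]
  verificationChecklist: string[]; // actionable verification steps
  shareSafeRewrite: string;        // neutral rewrite of the input text
  summary: string;                 // concise one-line summary
}

// Map a numeric score to the Low / Medium / High indicator.
// The cutoff values are assumed for this sketch.
function riskLevel(score: number): RiskReport["riskLevel"] {
  if (score < 34) return "Low";
  if (score < 67) return "Medium";
  return "High";
}
```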
How we built it
We built TruthLens as a lightweight, deployable MVP focused on clarity and speed.
Frontend:
- Next.js (App Router)
- Tailwind CSS for a clean, modern dashboard
- Dynamic UI components for the risk score, badges, and chips
Backend:
- Next.js API route (`/api/scan`)
- Integration with a large language model via API
- A strict JSON-only response structure
- Validation and error handling to ensure reliable parsing
System Flow: User submits text --> Backend constructs a structured prompt --> The language model analyzes the text and returns formatted JSON --> The frontend renders the results in a clean dashboard.
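The prompt-construction step in the flow above can be sketched as a small helper. The wording and the JSON key names here are illustrative assumptions, not the prompt TruthLens actually sends to the model.

```typescript
// Sketch of the "backend constructs a structured prompt" step.
// The instructions and key names are hypothetical, for illustration only.
function buildScanPrompt(userText: string): string {
  return [
    "You are a misinformation-risk analyst.",
    "Do NOT judge whether the text is true or false.",
    "Identify linguistic risk patterns only (urgency, emotional bait, vague sourcing).",
    "Respond with JSON only, no markdown, using exactly these keys:",
    '{"riskScore": number, "riskLevel": "Low"|"Medium"|"High", "redFlags": [], "verificationChecklist": [], "shareSafeRewrite": "", "summary": ""}',
    "Text to analyze:",
    JSON.stringify(userText), // quoting the input keeps it clearly delimited
  ].join("\n");
}
```

Keeping the no-truth-labeling instruction inside the prompt itself is one way to enforce the design constraint at the model level rather than only in post-processing.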
We intentionally avoided databases, authentication, and complex infrastructure to keep the system simple, fast, and demo-ready.
Challenges we ran into
Controlling the AI output
Large language models naturally try to classify statements as correct or incorrect. We had to carefully engineer prompts to prevent definitive truth labeling and focus only on risk indicators.
Ensuring structured JSON responses
LLMs sometimes return markdown or additional commentary. We implemented strict formatting instructions and backend safeguards to maintain consistent JSON parsing.
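A backend safeguard of this kind can be sketched as follows: strip any markdown code fences the model wraps around its output, then parse only the outermost JSON object. This is a minimal illustration under those assumptions, not the exact TruthLens implementation.

```typescript
// Defensive parsing sketch for LLM responses that should be JSON
// but may arrive wrapped in ```json fences or stray commentary.
function parseModelJson(raw: string): unknown {
  // Remove any ``` or ```json fence markers.
  const unfenced = raw.replace(/```(?:json)?/gi, "").trim();
  // Fall back to the outermost {...} span in case of extra commentary.
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model response");
  }
  // JSON.parse still throws on malformed content, which the API
  // route can surface as a clean error to the frontend.
  return JSON.parse(unfenced.slice(start, end + 1));
}
```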
Avoiding hallucinations
We constrained the output to risk patterns and verification guidance instead of factual corrections to reduce the chance of inaccurate AI-generated claims.
Maintaining neutrality
Designing a tone that felt supportive — not political or judgmental — required thoughtful iteration.
Accomplishments that we're proud of
A fully functional end-to-end misinformation risk scanner
Clean, professional dashboard UI
Reliable structured AI integration
Ethical design that avoids labeling content as true/false
Deployable live demo built within hackathon constraints
We’re especially proud that we built something that promotes critical thinking without restricting speech.
What we learned
- Prompt engineering is critical when building AI-powered products.
- The framing of information dramatically impacts user perception.
- Guardrails are essential when deploying LLM-based systems.
- Simplicity in architecture accelerates development and improves reliability.
- Ethical AI design requires intentional limitations, not just advanced capabilities.

Most importantly, we learned that sometimes the most effective intervention isn’t control — it’s awareness.
What's next for TruthLens
With more time, we would expand TruthLens into:
A browser extension that scans posts before publishing
Messaging platform integrations (WhatsApp, Telegram)
Source credibility scoring integration
Multi-language support
Pattern detection across viral misinformation trends
A research dashboard for misinformation analytics
Our long-term vision is to make reflective sharing the default behavior online.
TruthLens isn’t about deciding what’s true. It’s about helping people think before they share.
Built With
- css
- gemini
- github
- javascript
- next.js
- react
- tailwind
- vercel