Inspiration
Misinformation is no longer limited to fake news articles; it spreads through forwarded WhatsApp messages, voice notes in regional languages, edited images, and short viral videos.
In India especially, misinformation often spreads through:
- Forwarded text messages
- Voice notes in local languages
- Manipulated images
- Short-form social content
Many users don't verify content, not because they don't care, but because verification tools are complicated, slow, or inaccessible.
This inspired us to build FactChecker-AI, a modular, AI-powered platform that not only detects misinformation but also educates users to think critically.
What the Project Does
FactChecker-AI is a multimodal misinformation detection & media literacy platform that analyzes:
- Articles & URLs
- Uploaded documents
- Images
- Voice notes (regional language support)
It provides:
- Credibility scoring
- Contextual AI explanations
- Highlighted propaganda techniques
- Structured reporting to authorities
- Educational gamified verification modules
Instead of just saying "Fake" or "Real", it explains why something may be misleading.
Key Modules
1. Veritas Analyzer (Core Engine)
- Accepts text, URLs, files, and images
- Extracts content
It then uses AI to:
- Detect exaggeration
- Identify emotional manipulation
- Flag propaganda patterns
- Suggest verification steps
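The analyzer's contract can be sketched as follows. This is an illustrative sketch, not the project's actual code: the prompt wording, JSON field names, and clamping rules are assumptions.

```javascript
// Illustrative shape of the Veritas Analyzer's structured verdict.
// Field names are assumptions, not the project's real schema.
function buildAnalysisPrompt(content) {
  return [
    "You are a misinformation analyst. Do not invent sources.",
    "Return strict JSON: { credibilityScore: 0-100,",
    "  techniques: string[], explanation: string,",
    "  verificationSteps: string[] }",
    "Content to analyze:",
    content,
  ].join("\n");
}

// Normalize and validate the model's JSON so every downstream
// module receives the same shape, with the score clamped to 0-100.
function parseVerdict(rawJson) {
  const v = JSON.parse(rawJson);
  return {
    credibilityScore: Math.min(100, Math.max(0, Number(v.credibilityScore) || 0)),
    techniques: Array.isArray(v.techniques) ? v.techniques : [],
    explanation: String(v.explanation || ""),
    verificationSteps: Array.isArray(v.verificationSteps) ? v.verificationSteps : [],
  };
}
```

Validating and clamping the model's output, rather than trusting it verbatim, is also what lets the scoring stay consistent across analyzers.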
2. Voice Note Analyzer
Many misinformation campaigns in India spread via voice notes.
Workflow Structure:
Audio Input (Upload)
  ↓
Speech-to-Text Conversion (STT Engine)
  ↓
Language Detection & Normalization
  ↓
AI Misinformation Analysis
  ↓
Credibility Score + Explanation
How it works technically:
- Audio file is uploaded (mp3/wav/m4a)
- Speech-to-Text API (e.g., Deepgram / Whisper alternative) converts audio to text
- Cleaned transcript is passed to AI model
Model analyzes:
- Fear-based messaging
- Political propaganda
- Financial scam patterns
- Urgency manipulation
System outputs:
- Risk score
- Highlighted suspicious phrases
- Explanation of misinformation tactics
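The workflow above can be sketched as a chain of swappable stages. The function names and the `{ transcribe, analyze }` provider interface are hypothetical, chosen to show the shape of the pipeline rather than our exact code.

```javascript
// Pipeline sketch: each stage is injected so the STT provider and
// AI model can be swapped independently.
// transcribe() stands in for a real STT API call (e.g. Deepgram);
// analyze() stands in for the AI misinformation analysis.
async function analyzeVoiceNote(audioBuffer, { transcribe, analyze }) {
  const transcript = await transcribe(audioBuffer);        // audio -> text
  const cleaned = transcript.trim().replace(/\s+/g, " ");  // normalization
  const verdict = await analyze(cleaned);                  // risk score + flags
  return { transcript: cleaned, ...verdict };
}
```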
This makes the system usable even for users who don't consume written misinformation.
3. Authority Reporting Hub
Users can generate structured reports to:
- Cybercrime portals
- Election Commission
- RBI / SEBI (financial misinformation)
- PIB Fact Check
This transforms passive detection into actionable reporting.
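A minimal sketch of how such routing might look. The authority names come from the list above; the category keys, mapping, and report fields are illustrative assumptions.

```javascript
// Hypothetical report router: maps a misinformation category to the
// authority the structured report should be addressed to.
const AUTHORITIES = {
  cybercrime: "National Cybercrime Reporting Portal",
  election: "Election Commission of India",
  financial: "RBI / SEBI",
  general: "PIB Fact Check",
};

// Turn an analysis verdict into a structured, submittable report.
function buildReport(analysis, category) {
  return {
    authority: AUTHORITIES[category] || AUTHORITIES.general,
    submittedAt: new Date().toISOString(),
    contentSummary: analysis.explanation,
    credibilityScore: analysis.credibilityScore,
    flaggedTechniques: analysis.techniques,
  };
}
```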
4. Learn to Verify (Gamified Media Literacy)
Instead of only detecting misinformation, we also:
- Teach users how misinformation works
- Provide scenario-based quizzes
- Offer AI feedback on reasoning
The goal is long-term awareness, not dependency.
How We Built It
Tech Stack
- Frontend: React + Tailwind CSS
- Backend: Node.js / Serverless APIs
- AI Layer: LLM-based contextual analysis
- Speech-to-Text: External STT APIs (Deepgram / Whisper alternatives)
- Deployment: Vercel
Architecture Approach
We designed the system to be:
- Modular
- Scalable
- API-driven
- Model-agnostic (can switch AI providers)
Each analyzer module functions independently but shares a unified scoring logic.
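For example, the unified scoring logic could be a single weighted-average function that every analyzer feeds per-signal scores into. This is a sketch under that assumption; the real weighting scheme is tuned per module.

```javascript
// Shared scoring sketch: each analyzer emits per-signal scores
// (0 = misleading, 100 = credible) with a weight, and one function
// combines them into the final credibility score.
function combineScores(signals) {
  const totalWeight = signals.reduce((sum, s) => sum + s.weight, 0);
  if (totalWeight === 0) return 50; // no evidence either way
  const weighted = signals.reduce((sum, s) => sum + s.weight * s.score, 0);
  return Math.round(weighted / totalWeight);
}
```

Keeping this in one place is what lets the modules stay independent while still producing comparable scores.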
Challenges We Faced
1. Speech-to-Text API Limitations
- Token limits
- Quota restrictions
- Some models restricted to specific cloud environments (e.g., Vertex AI)
- File upload vs live mic recording inconsistencies
We experimented with:
- Web Speech API (live only)
- Deepgram (file issues)
- Whisper (quota problems)
This taught us how production AI systems must handle:
- Rate limiting
- Model fallback
- API abstraction layers
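The fallback pattern these lessons point to can be sketched as below. The provider objects are placeholders, not real SDK calls; in practice each would wrap a vendor client such as Deepgram.

```javascript
// Fallback chain sketch for STT providers: try each provider in
// order and move on when one fails (quota hit, rate limit,
// unsupported file format, etc.).
async function transcribeWithFallback(audio, providers) {
  let lastError;
  for (const provider of providers) {
    try {
      return await provider.transcribe(audio);
    } catch (err) {
      lastError = err; // remember why, then try the next provider
    }
  }
  throw new Error(`All STT providers failed: ${lastError && lastError.message}`);
}
```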
2. Hallucination Control
LLMs sometimes:
- Over-assume misinformation
- Fabricate verification sources
We reduced this by:
- Structured prompts
- Constrained outputs
- Scoring instead of binary labels
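One guard in this spirit: cross-check any sources the model cites against the sources actually supplied to it. The allowlist approach and function below are a sketch, not our exact implementation.

```javascript
// Hallucination guard sketch: flag URLs in the model's explanation
// that were never part of the supplied source material.
function flagUnknownSources(verdict, knownSources) {
  const cited = verdict.explanation.match(/https?:\/\/\S+/g) || [];
  const fabricated = cited.filter((url) => !knownSources.includes(url));
  return { ...verdict, fabricatedSources: fabricated };
}
```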
3. Multilingual Complexity
Voice notes in regional languages require:
- Language detection
- Translation normalization
- Cultural context awareness
This is still an area for future improvement.
What We Learned
- AI detection alone is not enough; explanation builds trust.
- Voice-based misinformation is under-addressed.
- Model abstraction is important (never depend on one provider).
- UX clarity matters as much as AI accuracy.
- Fighting misinformation is as much about education as detection.
Future Vision
We aim to expand FactChecker-AI with:
- Browser extension integration
- WhatsApp chatbot deployment
- Real-time misinformation heatmaps
- Community-based credibility voting
- Federated misinformation reporting system
Impact
FactChecker-AI does not just analyze content. It empowers users to:
Think critically. Verify responsibly. Share consciously.
In a world flooded with information, clarity is power.
Built With
- api
- css
- gemini
- html5
- javascript
- node.js
- vercel