Inspiration

Misinformation is no longer limited to fake news articles; it spreads through forwarded WhatsApp messages, voice notes in regional languages, edited images, and short viral videos.

In India especially, misinformation often spreads through:

  • 📲 Forwarded text messages
  • 🎙️ Voice notes in local languages
  • 🖼️ Manipulated images
  • 📹 Short-form social content

Many users don't verify content, not because they don't care, but because verification tools are complicated, slow, or inaccessible.

This inspired us to build FactChecker-AI: a modular, AI-powered platform that not only detects misinformation but also educates users to think critically.


💡 What the Project Does

FactChecker-AI is a multimodal misinformation detection & media literacy platform that analyzes:

  • ๐Ÿ“ Articles & URLs
  • ๐Ÿ“„ Uploaded documents
  • ๐Ÿ–ผ๏ธ Images
  • ๐ŸŽ™๏ธ Voice notes (regional language support)

It provides:

  • Credibility scoring
  • Contextual AI explanations
  • Highlighted propaganda techniques
  • Structured reporting to authorities
  • Educational gamified verification modules

Instead of just saying "Fake" or "Real", it explains why something may be misleading.


🧠 Key Modules

1๏ธโƒฃ Veritas Analyzer (Core Engine)

  • Accepts text, URLs, files, and images
  • Extracts content
  • Uses AI to:

    • Detect exaggeration
    • Identify emotional manipulation
    • Flag propaganda patterns
    • Suggest verification steps
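
The analyzer's output could be modeled roughly like this. This is an illustrative sketch, not the project's actual API: the type and function names are our own, and the rule-based checks below stand in for the real AI layer.

```typescript
// Sketch of an analyzer result shape (names are illustrative, not the real API).
interface AnalysisResult {
  credibilityScore: number;   // 0–100, higher = more credible
  techniques: string[];       // e.g. "exaggeration", "urgency manipulation"
  explanation: string;        // why the content may be misleading
  verificationSteps: string[]; // concrete steps the user can take
}

// Toy rule-based stand-in for the AI layer: flags a few propaganda patterns.
function analyzeText(text: string): AnalysisResult {
  const techniques: string[] = [];
  if (/shocking|unbelievable|100% proof/i.test(text)) techniques.push("exaggeration");
  if (/share before it'?s deleted|act now/i.test(text)) techniques.push("urgency manipulation");
  if (/they don'?t want you to know/i.test(text)) techniques.push("fear-based framing");
  const credibilityScore = Math.max(0, 100 - techniques.length * 30);
  return {
    credibilityScore,
    techniques,
    explanation: techniques.length
      ? `Flagged patterns: ${techniques.join(", ")}`
      : "No obvious manipulation patterns detected.",
    verificationSteps: ["Check the original source", "Search for independent coverage"],
  };
}
```

In the real system the pattern detection is done by an LLM; the point of the sketch is the output contract: a score plus named techniques and suggested steps, rather than a bare verdict.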

2๏ธโƒฃ ๐ŸŽ™๏ธ Voice Note Analyzer

Many misinformation campaigns in India spread via voice notes.

Workflow Structure:

Audio Input (Upload)
        ↓
Speech-to-Text Conversion (STT Engine)
        ↓
Language Detection & Normalization
        ↓
AI Misinformation Analysis
        ↓
Credibility Score + Explanation

How it works technically:

  • Audio file is uploaded (mp3/wav/m4a)
  • Speech-to-Text API (e.g., Deepgram / Whisper alternative) converts audio → text
  • Cleaned transcript is passed to AI model
  • Model analyzes:

    • Fear-based messaging
    • Political propaganda
    • Financial scam patterns
    • Urgency manipulation
  • System outputs:

    • Risk score
    • Highlighted suspicious phrases
    • Explanation of misinformation tactics
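
The steps above can be sketched as a single pipeline function. Everything here is an assumption for illustration: the STT call is an injected stand-in for an external API such as Deepgram (not its real SDK), and the language detection and pattern checks are deliberately simplistic stubs.

```typescript
// Illustrative voice-note pipeline; function names are ours, not the project's.
type SttFn = (audio: Uint8Array) => Promise<string>;

interface VoiceNoteReport {
  transcript: string;
  language: string;
  riskScore: number;        // 0–100, higher = more suspicious
  flaggedPhrases: string[];
}

async function analyzeVoiceNote(audio: Uint8Array, transcribe: SttFn): Promise<VoiceNoteReport> {
  // 1. Speech-to-text via the injected external engine
  const transcript = await transcribe(audio);
  // 2. Language detection (stubbed: Devanagari → "hi"; a real system uses a detector)
  const language = /[\u0900-\u097F]/.test(transcript) ? "hi" : "en";
  // 3. Misinformation analysis: flag urgency / scam / chain-forward patterns
  const patterns = [/act now/i, /your account will be blocked/i, /forward to 10 people/i];
  const flaggedPhrases = patterns
    .map((p) => transcript.match(p)?.[0])
    .filter((m): m is string => Boolean(m));
  // 4. Score + highlighted phrases for the explanation layer
  const riskScore = Math.min(100, flaggedPhrases.length * 40);
  return { transcript, language, riskScore, flaggedPhrases };
}
```

Injecting the STT function keeps the pipeline testable and provider-agnostic, which matters later when providers hit quota limits.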

This makes the system usable even for people who encounter misinformation as audio rather than text.


3๏ธโƒฃ ๐Ÿงพ Authority Reporting Hub

Users can generate structured reports to:

  • Cybercrime portals
  • Election Commission
  • RBI / SEBI (financial misinformation)
  • PIB Fact Check

This transforms passive detection into actionable reporting.


4๏ธโƒฃ ๐ŸŽฎ Learn to Verify (Gamified Media Literacy)

Instead of only detecting misinformation, we also:

  • Teach users how misinformation works
  • Provide scenario-based quizzes
  • Offer AI feedback on reasoning

The goal is long-term awareness, not dependency.


๐Ÿ—๏ธ How We Built It

Tech Stack

  • Frontend: React + Tailwind CSS
  • Backend: Node.js / Serverless APIs
  • AI Layer: LLM-based contextual analysis
  • Speech-to-Text: External STT APIs (Deepgram / Whisper alternatives)
  • Deployment: Vercel

Architecture Approach

We designed the system to be:

  • Modular
  • Scalable
  • API-driven
  • Model-agnostic (can switch AI providers)

Each analyzer module functions independently but shares a unified scoring logic.
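
One way to express that shared contract is a common interface and a common 0–100 scale; the interface and the simple averaging below are our illustration, not the project's actual code.

```typescript
// Sketch: every analyzer module implements the same interface and scores on
// the same 0–100 scale, so results can be combined uniformly.
interface Analyzer {
  name: string;
  analyze(input: string): { score: number; notes: string[] };
}

// Combine module scores into one credibility score (a plain average here;
// a production system could weight modules differently).
function combineScores(analyzers: Analyzer[], input: string): number {
  const scores = analyzers.map((a) => a.analyze(input).score);
  return Math.round(scores.reduce((sum, s) => sum + s, 0) / scores.length);
}
```

Because each module only has to satisfy the interface, swapping the AI provider behind any one module does not affect the others.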


โš™๏ธ Challenges We Faced

1๏ธโƒฃ Speech-to-Text API Limitations

  • Token limits
  • Quota restrictions
  • Some models restricted to specific cloud environments (e.g., Vertex AI)
  • File upload vs live mic recording inconsistencies

We experimented with:

  • Web Speech API (live only)
  • Deepgram (file issues)
  • Whisper (quota problems)

This taught us how production AI systems must handle:

  • Rate limiting
  • Model fallback
  • API abstraction layers
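
A minimal sketch of such an abstraction layer: try each provider in order and fall back on failure. The provider functions here are placeholders, not real Deepgram or Whisper SDK calls.

```typescript
// Sketch of an STT fallback chain; providers are placeholder functions.
type Transcriber = (audio: Uint8Array) => Promise<string>;

function withFallback(providers: Transcriber[]): Transcriber {
  return async (audio) => {
    let lastError: unknown;
    for (const provider of providers) {
      try {
        return await provider(audio);  // first provider that succeeds wins
      } catch (err) {
        lastError = err;               // e.g. quota exceeded, rate-limited
      }
    }
    throw new Error(`All STT providers failed: ${String(lastError)}`);
  };
}
```

A fuller version would add per-provider timeouts and retry-with-backoff for rate limits, but the shape is the same: callers see one `Transcriber`, not three vendor SDKs.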

2๏ธโƒฃ Hallucination Control

LLMs sometimes:

  • Over-assume misinformation
  • Fabricate verification sources

We reduced this by:

  • Structured prompts
  • Constrained outputs
  • Scoring instead of binary labels
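
For example, the model can be required to return a fixed JSON shape, which is then validated and clamped before anything reaches the user. This is a sketch under assumed field names, not the project's actual schema.

```typescript
// Sketch: validate and clamp raw LLM output instead of trusting it directly.
interface ConstrainedVerdict {
  score: number;        // 0–100 credibility estimate, never a bare "fake"/"real"
  techniques: string[]; // only string entries survive validation
  caveat: string;
}

function parseVerdict(raw: string): ConstrainedVerdict {
  const data = JSON.parse(raw);
  // Clamp to 0–100 so an over-confident model can't exceed the scale.
  const score = Math.min(100, Math.max(0, Number(data.score) || 0));
  const techniques = Array.isArray(data.techniques)
    ? data.techniques.filter((t: unknown): t is string => typeof t === "string")
    : [];
  // Always attach a caveat so the UI never presents the score as proof.
  return { score, techniques, caveat: "AI estimate; verify with primary sources." };
}
```

Scoring plus validation does not eliminate hallucination, but it bounds the damage: malformed or fabricated fields are dropped rather than displayed.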

3๏ธโƒฃ Multilingual Complexity

Voice notes in regional languages require:

  • Language detection
  • Translation normalization
  • Cultural context awareness

This is still an area for future improvement.


📚 What We Learned

  • AI detection alone is not enough; explanation builds trust.
  • Voice-based misinformation is under-addressed.
  • Model abstraction is important (never depend on one provider).
  • UX clarity matters as much as AI accuracy.
  • Fighting misinformation is as much about education as detection.

๐ŸŒ Future Vision

We aim to expand FactChecker-AI with:

  • Browser extension integration
  • WhatsApp chatbot deployment
  • Real-time misinformation heatmaps
  • Community-based credibility voting
  • Federated misinformation reporting system

🎯 Impact

FactChecker-AI does not just analyze content. It empowers users to:

Think critically. Verify responsibly. Share consciously.

In a world flooded with information, clarity is power.

