Inspiration

At 2am during the Iran–US conflict, I noticed something unsettling.

One outlet reported that Iran denied the attack.
At the exact same moment, another reported that Iran claimed full responsibility.

Both were credible. Both were spreading fast.
Neither acknowledged the contradiction.

Nobody was lying — but someone was wrong.

By morning, millions had already formed opinions based on whichever version they saw first.

That moment exposed a critical gap:
the time between narrative formation and verification.

Sentinel was built to close that gap.


What it does

Sentinel is a real-time conflict misinformation early warning system.

It monitors 20 global news sources, detects contradictory narratives, and delivers a structured intelligence report in under 30 seconds.

Instead of deciding what is true or false, Sentinel shows:

  • where facts conflict
  • which narrative is gaining traction
  • what needs immediate verification

Users get:

  • Misinformation risk score (0–100)
  • Detected narratives with source attribution
  • Direct contradictions between outlets
  • Narrative propagation insights
  • Verification recommendations
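As a rough illustration of the report structure listed above (field names and risk bands are hypothetical sketches, not Sentinel's actual schema), the output could be modeled as:

```python
from dataclasses import dataclass

# Hypothetical sketch of the intelligence report described above;
# names and thresholds are illustrative, not Sentinel's real schema.

@dataclass
class IntelligenceReport:
    risk_score: int        # 0-100 misinformation risk
    narratives: list       # detected narratives with source attribution
    contradictions: list   # direct outlet-vs-outlet conflicts
    recommendations: list  # what to verify first

    def risk_level(self) -> str:
        # Illustrative banding only: map the 0-100 score to a label.
        if self.risk_score >= 70:
            return "HIGH"
        if self.risk_score >= 40:
            return "MEDIUM"
        return "LOW"

report = IntelligenceReport(75, [], [], [])
```

With this toy banding, a 75/100 score maps to "HIGH".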

Live Demo
(Demo link available on the project page.)


How we built it

Sentinel is designed as a real-time AI pipeline:

  • Data ingestion: Apify collects articles from 20 global sources every 30 minutes
  • Semantic encoding: sentence-transformers convert articles into 384-dimensional vectors
  • Vector search: Qdrant retrieves relevant articles using cosine similarity
  • AI analysis: Mistral-7B detects contradictions, extracts narratives, and assigns risk scores
  • Backend: FastAPI orchestrates the system
  • Frontend: React + D3.js visualizes narrative propagation

This is not keyword search — it is meaning-based analysis across sources in real time.
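The retrieval step in the pipeline above rests on cosine similarity between embedding vectors. A minimal stdlib-only sketch of that comparison (the real deployment uses sentence-transformers' 384-dimensional vectors and Qdrant's index, not this toy loop):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|) -- the distance metric
    # Qdrant uses here to rank articles by semantic closeness.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dim "embeddings" standing in for real 384-dim vectors.
query = [0.9, 0.1, 0.0]
article_close = [0.8, 0.2, 0.1]   # semantically similar article
article_far = [0.0, 0.1, 0.9]     # unrelated article
```

Articles whose vectors point in nearly the same direction as the query score near 1.0; unrelated ones score near 0, which is what makes this meaning-based rather than keyword-based.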


Challenges we ran into

  • Traditional HTML scrapers failed on RSS feeds, which are XML, not HTML
  • A Windows-only dependency broke the Linux deployment
  • Qdrant client changes caused silent and misleading errors
  • An exposed API key required immediate security fixes

These challenges highlighted that building reliable real-time systems is harder than building models.
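The RSS lesson above is easy to reproduce: RSS items live in structured XML tags, so an HTML scraper's DOM heuristics find nothing, while a few lines of stdlib XML parsing work fine. A sketch using a fabricated sample feed:

```python
import xml.etree.ElementTree as ET

# Minimal RSS 2.0 snippet (fabricated for illustration) -- note it is XML.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Wire</title>
    <item>
      <title>Iran denies involvement in strike</title>
      <link>https://example.com/a1</link>
    </item>
    <item>
      <title>Officials claim responsibility for strike</title>
      <link>https://example.com/a2</link>
    </item>
  </channel>
</rss>"""

def parse_rss_titles(feed_xml: str) -> list:
    # Walk the <item> elements and pull each headline.
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

titles = parse_rss_titles(SAMPLE_FEED)
```

Treating the feed as XML from the start avoids the silent failures we hit with HTML-oriented scrapers.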


Accomplishments that we're proud of

  • Achieved under 30 seconds from query to full intelligence report
  • Detected real contradictions in Iran conflict coverage
  • Produced a 75/100 HIGH risk score aligned with real-world narrative uncertainty
  • Built and deployed using zero-cost infrastructure

Most importantly, Sentinel identified conflicting narratives before they became widely recognized.


What we learned

  • Real-time systems depend more on data quality than model complexity
  • Semantic search enables cross-source understanding, not just retrieval
  • Detecting disagreement is more reliable than declaring truth
  • AI should support human judgment, not replace it

The key insight:
The real opportunity is early detection of narrative divergence.


What's next for Sentinel — Conflict Misinformation Early Warning System

  • Integrate social media sources (Twitter, Reddit)
  • Real-time alerts when risk reaches critical levels
  • Browser extension showing live risk scores on articles
  • Expansion into elections, public health, and crisis monitoring
  • High-volume API for trading terminals

Goal:
Make misinformation visible before it spreads — not after it’s too late.

Built With

  • ai
  • apify
  • featherless
  • huggingface
  • lovable
  • mistral-7b
  • python
  • qdrant
  • rag
  • react
  • sentence-transformers