About TruthLens
What inspired us
We were deeply moved by the scale and sophistication of misinformation campaigns—from deepfake videos to text and image manipulation—that not only distort public opinion but erode trust in society. Reviewing past hackathon winners like Team Unicron’s “Anvesha”, which tackled text, image, and video misinformation, and Team Alchemist’s “VeriStream”, which leveraged LangChain-based NLP, dynamic knowledge graphs, GIS data, and Explainable AI, we realized that a truly impactful solution must go beyond detection—it must explain, educate, and empower.
What we set out to learn
We aimed to:
- Deploy explainable AI that doesn’t just flag misinformation but shows why something is likely false.
- Integrate community-driven validation, as Team Bug Smashers did via GPS-based, SMS-crowdsourced validation.
- Offer multi-modality detection, handling text, images, video, and audio—building on the strengths of previous winners.
How we built it
We adopted an Agile, iterative prototyping model inspired by Team Butterflies, who built their winning solution around SAS Viya, Copilot, and real-time dashboards.
- Prototype 1: A Chrome extension that detects deepfake video and manipulated images using a Vision Transformer + LLM combo.
- Prototype 2: Backend APIs for cross-modality fact validation—extracting statements, querying trusted knowledge bases, and returning credibility scores with links to the sources consulted.
- Prototype 3: A mobile/web interface that enriches AI flags with explainable visual cues (e.g., "mismatched facial landmarks," "text incongruence," "source discrepancy") and invites user votes to confirm/contest.
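The validation core behind Prototype 2 can be sketched in a few lines of Python. Everything here is illustrative: the `Verdict` fields and `verify_claim` helper are assumed names, and the toy keyword rule stands in for the real statement extraction and knowledge-base lookup that would sit behind a FastAPI endpoint.

```python
# Sketch of Prototype 2's credibility-scoring core (hypothetical names;
# the real service extracts statements and queries trusted knowledge
# bases instead of the toy rule below).
from dataclasses import dataclass, field

@dataclass
class Verdict:
    credibility: float                           # 0.0 (likely false) .. 1.0 (likely true)
    cues: list = field(default_factory=list)     # explanation cues shown to users
    sources: list = field(default_factory=list)  # links to sources consulted

def verify_claim(text: str) -> Verdict:
    """Toy stand-in for statement extraction + knowledge-base lookup."""
    if "miracle cure" in text.lower():
        return Verdict(0.1, cues=["sensationalist phrasing"])
    return Verdict(0.5)  # no evidence either way; stay neutral

verdict = verify_claim("Scientists reveal a miracle cure!")
print(verdict.credibility, verdict.cues)
```

Returning cues alongside the score is what lets the Prototype 3 interface show users why something was flagged rather than just that it was.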
Challenges we faced
- Achieving real-time detection across modalities while maintaining accuracy and low latency.
- Collecting a reliable and diverse dataset of verified misinformation vs. legitimate content.
- Crafting clear, trustworthy explanations that are technically accurate yet understandable to non-experts.
- Designing a gamified community validation layer that balances speed, trust, and spam resistance.
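One approach to the spam-resistance challenge is reputation-weighted vote aggregation, sketched below under assumed names (`Vote`, `aggregate`); the actual gamification rules (streaks, badges, rate limits) are omitted.

```python
# Sketch of reputation-weighted vote aggregation for the community
# validation layer (hypothetical structure; low-reputation accounts
# contribute little weight, which blunts spam and brigading).
from dataclasses import dataclass

@dataclass
class Vote:
    confirms: bool      # True = confirms the AI flag, False = contests it
    reputation: float   # voter's trust weight, e.g. 0.0 .. 1.0

def aggregate(votes: list) -> float:
    """Return the reputation-weighted share of confirming votes (0..1)."""
    total = sum(v.reputation for v in votes)
    if total == 0:
        return 0.5  # no trusted signal yet; stay neutral
    confirming = sum(v.reputation for v in votes if v.confirms)
    return confirming / total

votes = [Vote(True, 0.9), Vote(True, 0.8), Vote(False, 0.1)]
print(round(aggregate(votes), 2))
```

Weighting by reputation trades a little speed (new accounts count for less) for trust, which is exactly the balance this challenge describes.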
The impact
- Acts as a truth-scanning lens for social media and live content.
- Empowers users with transparent insights, enabling better-informed decisions.
- Builds a hybrid trust pipeline: AI first, community validation next, with full traceability.
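The hybrid pipeline can be illustrated as a simple blend: the AI score dominates until enough community votes arrive, and every input is recorded for traceability. The weighting scheme and parameter `k` here are assumptions for illustration, not the tuned production values.

```python
# Sketch of the hybrid trust pipeline: an AI credibility score refined by
# community validation, with a trace record for full traceability
# (hypothetical weighting; real parameters would be tuned).
def hybrid_score(ai_score: float, community_score: float,
                 n_votes: int, k: int = 10) -> tuple:
    """Blend AI and community scores; community weight grows with votes."""
    w = n_votes / (n_votes + k)   # 0 votes -> trust AI; many votes -> community
    final = (1 - w) * ai_score + w * community_score
    trace = {"ai": ai_score, "community": community_score,
             "votes": n_votes, "weight": round(w, 2)}
    return final, trace

score, trace = hybrid_score(0.3, 0.8, n_votes=10)
print(score, trace)
```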
Built With
- docker
- fastapi
- github
- javascript
- jupyter-notebook
- numpy
- pandas
- python
- pytorch
- react
- roberta
- scikit-learn
- tailwind
- typescript