TruthLens Project Story
Built in Dubai, during a period of real regional uncertainty.
Inspiration
The idea for TruthLens came from reflecting on the information environment we are living through right now.
Our team is based in Dubai, and as the conflict involving the United States, Israel, and Iran has intensified across the region, we have experienced first-hand what it feels like to live inside a fast-moving, high-uncertainty information environment. In moments like this, information spreads faster than clarity. Headlines, forwarded messages, breaking updates, rumors, and emotionally charged posts flood social media and group chats before people have time to stop and think.
Have you ever seen a headline, a viral post, or a forwarded message and immediately felt unsure whether to trust it?
That uncertainty has become a normal part of modern life, but during regional conflict it becomes even more serious. The problem is not only that misinformation exists. It is that misinformation often looks convincing. Sometimes it is not fully false. Sometimes it takes real facts and reshapes them through weak sourcing, emotional language, selective framing, or missing context.
As students, we kept noticing the same issue: people do not need another tool that simply says “true” or “false.” They need something that helps them understand why a claim should be trusted, questioned, or verified further.
TruthLens was built to bridge that gap. We wanted to create a tool that supports critical thinking in real time, especially when events are unfolding quickly and people are most vulnerable to panic, confusion, and manipulation.
What it does
TruthLens is an AI-powered credibility analysis platform that helps users evaluate:
- articles
- headlines
- social media posts
- fast-moving claims tied to current events
A user pastes in a piece of content, and TruthLens analyses it for:
- weak sourcing
- emotional manipulation
- biased framing
- missing context
- exaggerated or misleading claims
Rather than acting like a black-box fact-checker, TruthLens explains its reasoning. It extracts the main claims, highlights warning signs, assesses source quality, identifies context gaps, and gives users practical next steps on what to verify.
The goal is not just to label content. The goal is to help people think more critically about the information they consume before they believe it or share it.
How we built it
We built TruthLens as a modern web application using a full-stack JavaScript + TypeScript workflow.
Frontend
On the frontend, we created an interface where users can submit content and clearly understand the analysis results. Since the core value of TruthLens depends on trust and explainability, we focused on structuring the output in a way that feels readable, calm, and usable rather than overwhelming.
Backend
On the backend, we built a centralized analysis pipeline that takes in the submitted text, constructs a structured prompt, and passes it through our AI model stack.
Our architecture uses:
- Gemini as the primary model
- Llama 3.1 8B via Azure AI Foundry as a fallback layer
- normalization logic so that different model paths still return a consistent result structure
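As a rough sketch, the fallback layer can be expressed as a small helper that tries the primary model and only invokes the secondary path when the first call fails. The caller signatures below are simplified stand-ins of our own invention; the real calls go through the Gemini API and Azure AI Foundry.

```typescript
// A model caller takes a structured prompt and returns raw text.
type ModelCall = (prompt: string) => Promise<string>;

// Try the primary model first; if it throws (timeout, rate limit,
// bad response), fall through to the fallback model.
async function callWithFallback(
  primary: ModelCall,
  fallback: ModelCall,
  prompt: string
): Promise<{ text: string; modelUsed: "primary" | "fallback" }> {
  try {
    return { text: await primary(prompt), modelUsed: "primary" };
  } catch {
    // Primary (e.g. Gemini) failed; route to the fallback (e.g. Llama 3.1 8B).
    return { text: await fallback(prompt), modelUsed: "fallback" };
  }
}

// Stand-in callers for illustration only:
const flakyGemini: ModelCall = async () => {
  throw new Error("rate limited");
};
const llamaFallback: ModelCall = async (p) =>
  `{"summary":"analysis of: ${p}"}`;

callWithFallback(flakyGemini, llamaFallback, "Check this headline").then(
  (r) => console.log(r.modelUsed) // prints "fallback"
);
```

Because both paths return through the same function, the normalization step downstream never needs to know which model actually answered.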
We designed the system so it does not simply return a verdict. Instead, it returns:
- a credibility summary
- the main claims being made
- key warning flags
- source notes
- context notes
- suggested next checks
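The normalization step that keeps this structure consistent across model paths can be sketched as a single result type plus a coercion function. The field names here are illustrative, not our exact schema; the point is that ragged model output (missing fields, a string where a list was expected) is always folded into one predictable shape.

```typescript
// One consistent result shape, regardless of which model produced it.
interface CredibilityReport {
  summary: string;       // credibility summary
  claims: string[];      // main claims being made
  warningFlags: string[];
  sourceNotes: string[];
  contextNotes: string[];
  nextChecks: string[];  // suggested next checks
}

// Coerce loosely structured model output into the fixed report shape.
function normalize(
  raw: Partial<Record<keyof CredibilityReport, unknown>>
): CredibilityReport {
  // Accept a list, a single value, or nothing; always return a string array.
  const asList = (v: unknown): string[] =>
    Array.isArray(v) ? v.map(String) : v == null ? [] : [String(v)];
  return {
    summary: typeof raw.summary === "string" ? raw.summary : "No summary produced.",
    claims: asList(raw.claims),
    warningFlags: asList(raw.warningFlags),
    sourceNotes: asList(raw.sourceNotes),
    contextNotes: asList(raw.contextNotes),
    nextChecks: asList(raw.nextChecks),
  };
}
```

Defaulting every field means the frontend can render the report unconditionally instead of guarding each section against missing data.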
One of the most important parts of the build was recognizing that misinformation analysis is much stronger when it is grounded in the current information environment. That pushed us to think beyond static model knowledge and toward a system that can better reason about what is happening in the world right now.
Challenges we ran into
One of our biggest challenges was that misinformation is rarely simple.
It would have been much easier to build a tool that classified everything as either true or false, but that would have missed the real problem. A lot of misleading content is not completely fabricated. It may be based on something real, but framed in a distorted, exaggerated, or emotionally manipulative way. Designing a system that could capture that nuance was much harder.
Another challenge was balancing flexibility with structure. We needed the AI outputs to feel intelligent and context-aware, but we also needed them to fit a clean, repeatable format that users could understand quickly. That meant spending a lot of time refining:
- prompt design
- output formatting
- fallback logic
A major challenge was current-world awareness. Since TruthLens is designed for fast-moving events, relying on model memory alone is not enough. When the surrounding context changes quickly, a model can sound confident without actually being grounded in what is happening right now. Solving that problem became central to the project.
Finally, there was the challenge of credibility itself. If you are building a tool about trust, the product must also feel trustworthy. That meant being careful with tone, resisting overconfidence, and making sure the system explains uncertainty rather than hiding it.
Accomplishments that we're proud of
We are proud that TruthLens goes beyond surface-level fake news detection.
Instead of reducing credibility to a single label, we built a system that explains why content may be misleading. That makes the product feel more thoughtful, more educational, and more useful in the real world.
We are also proud of the structure of the analysis itself. The fact that the platform can break down content into:
- claims
- warning flags
- source notes
- context issues
- next steps
makes it feel like a real decision-support tool rather than just another chatbot.
Another accomplishment we are proud of is the relevance of the idea. TruthLens addresses a problem that affects almost everyone, but it felt especially urgent to us because we were building it while living in a region directly affected by fast-moving geopolitical events and information overload.
What we learned
This project taught us that building with AI is not just about getting an answer. It is about designing a system that people can trust.
We learned a lot about:
- prompt engineering
- structured AI outputs
- fallback architecture
- consistency across model responses
- the importance of explainability
We also learned that credibility is not binary. Good analysis often means being comfortable with uncertainty, nuance, and incomplete information. In many cases, the most responsible output is not a confident conclusion, but a careful explanation of:
- what is known
- what is unclear
- what should be checked next
Most importantly, we learned that technology can play a meaningful role in helping people navigate a more confusing digital world, but only when it is designed with care and grounded in the real contexts people are living through.
What's next for TruthLens
The next step for TruthLens is to make the platform more grounded, more transparent, and more polished.
We want to improve real-time web grounding so the system can analyse claims against current sources and evolving world events much more accurately. We also want to strengthen source transparency so users can see not just the analysis, but the evidence and citations behind it.
From a product perspective, we want to build a much more refined frontend experience, including:
- a polished landing page
- better analysis visualization
- example-driven onboarding
- clearer source and citation display
In the longer term, we see TruthLens growing into a broader media-literacy and trust platform, one that can help students, families, educators, and everyday internet users engage with online information more critically and more confidently.
TruthLens started as a hackathon project, but we believe it has the potential to become something much bigger: a tool that helps people slow down, think clearly, and make better judgments in a world flooded with information.