Inspiration
We are drowning in noise. Algorithms prioritize engagement over truth, and "fake news" has evolved from simple text lies to complex visual manipulation designed to trigger rage. We found ourselves doomscrolling, unsure if what we were seeing was real or just "rage-bait."
We realized that existing fact-checkers are too slow, too text-heavy, and often ignored. We didn't just want another summary tool; we wanted a "Cognitive Security Layer": a lens that instantly strips away the emotional hype and reveals the raw signal underneath. We wanted to build a tool that doesn't just tell you what is true, but helps you understand why you are being manipulated.
What it does
Signal Lens is a multimodal truth engine. It transforms Gemini 3.0 Flash from a chatbot into a real-time media analyst.
Visual Rhetoric Analysis: You can drag and drop a screenshot of a news article or tweet. The AI analyzes the pixels, identifying whether the image uses fear-mongering colors or misleading angles, while simultaneously cross-referencing the text.
The "Hype Factor": It assigns a score (0-100%) to the emotional intensity of the content, instantly flagging clickbait.
The "Steel Man" Protocol: Instead of just debunking a claim, it uses Gemini's reasoning to generate the strongest possible intellectual counterargument to the content, forcing you to step outside your echo chamber.
Logic Failure Detection: It highlights specific logical fallacies (like Ad Hominem or Straw Man attacks) directly in the text.
Public Sentiment Grounding: It uses Google Search Grounding to summarize what the actual public consensus is, rather than just what the article claims.
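Taken together, the features above can be thought of as one structured analysis object that the UI renders into gauges and highlights. A hypothetical sketch of what such a result might look like (all field names and values here are our illustration, not the project's actual schema):

```python
# Hypothetical shape of a single Signal Lens analysis result.
# Field names are illustrative, not the project's actual schema.
example_analysis = {
    "hype_factor": 82,   # 0-100 emotional-intensity score
    "trust_score": 34,   # 0-100 overall trust estimate
    "fallacies": [
        {"type": "Ad Hominem", "quote": "Only a fool would believe..."},
    ],
    "steel_man": "The strongest counterargument to this piece is ...",
    "consensus_summary": "Search grounding suggests the claim is disputed.",
}

def is_clickbait(analysis: dict, threshold: int = 70) -> bool:
    """Flag content whose hype factor meets or exceeds a threshold."""
    return analysis["hype_factor"] >= threshold
```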
How we built it
We built the core using Python and Streamlit to create a reactive, "Glassmorphism" dashboard that feels like a futuristic command center.
The brain of the operation is Google Gemini 3.0 Flash. We chose this specific model for its low latency and multimodal capabilities.
System Prompting: We engineered a strict "Logic Protocol" that forces Gemini to output pure JSON, allowing us to render complex bias metrics into visual gauges and charts.
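A rough sketch of that approach using the google-generativeai SDK (the prompt wording, JSON keys, and model id below are placeholders we made up, not the project's actual "Logic Protocol"):

```python
# Illustrative system prompt; the project's real protocol is more elaborate.
LOGIC_PROTOCOL = (
    "You are a neutral media analyst. Respond ONLY with a single JSON "
    "object with keys: hype_factor (int 0-100), trust_score (int 0-100), "
    "fallacies (list of {type, quote}), steel_man (string). "
    "Quote exact sentences from the input; never invent content."
)

def analyze(text: str, model_name: str = "gemini-2.0-flash") -> str:
    """Ask Gemini for a pure-JSON analysis of `text`.

    The SDK import is deferred so this module loads without the package
    installed; `model_name` is a placeholder for whichever Flash model
    you have access to.
    """
    import google.generativeai as genai  # pip install google-generativeai
    model = genai.GenerativeModel(
        model_name,
        system_instruction=LOGIC_PROTOCOL,
        generation_config={"response_mime_type": "application/json"},
    )
    return model.generate_content(text).text
```

Requesting `response_mime_type="application/json"` constrains the model at the API level, so the prompt and the generation config reinforce each other.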
Multimodal Pipeline: We used Pillow to process user screenshots and feed them directly into Gemini’s vision encoder.
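A minimal version of that preprocessing step might look like this (Pillow is from the source; the function name and the size cap are our assumptions):

```python
from io import BytesIO
from PIL import Image  # pip install Pillow

def prepare_screenshot(raw_bytes: bytes, max_side: int = 1024) -> Image.Image:
    """Normalize an uploaded screenshot before sending it to the vision model:
    decode, flatten to RGB, and downscale to keep request payloads small."""
    img = Image.open(BytesIO(raw_bytes))
    img = img.convert("RGB")             # drop alpha / palette modes
    img.thumbnail((max_side, max_side))  # in-place, preserves aspect ratio
    return img
```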
RTL Engineering: We built a dynamic layout engine that detects language (e.g., Persian/Arabic) and instantly mirrors the entire UI (Right-to-Left) for global accessibility.
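Language detection for RTL mirroring can be done with a simple Unicode-range heuristic; a sketch (the threshold, helper names, and CSS snippet are ours, not the project's actual layout engine):

```python
import re

# Unicode blocks covering Arabic and Persian script, including
# presentation forms used in shaped text.
_RTL_CHARS = re.compile(r"[\u0600-\u06FF\u0750-\u077F\uFB50-\uFDFF\uFE70-\uFEFF]")

def is_rtl(text: str, threshold: float = 0.3) -> bool:
    """Treat text as right-to-left when a meaningful share of its
    letters fall in RTL Unicode blocks. The threshold is a heuristic."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    rtl = sum(1 for c in letters if _RTL_CHARS.match(c))
    return rtl / len(letters) >= threshold

def direction_css(text: str) -> str:
    """CSS fragment to inject into the page to mirror the layout."""
    if is_rtl(text):
        return "direction: rtl; text-align: right;"
    return "direction: ltr; text-align: left;"
```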
Viral Sharing: We implemented a "Proof Card" generator that creates a downloadable report of the analysis, making the truth shareable.
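A bare-bones "Proof Card" renderer with Pillow could look like this (the layout, colors, and sizes are illustrative, not the app's actual design):

```python
from io import BytesIO
from PIL import Image, ImageDraw

def render_proof_card(trust_score: int, hype_factor: int, headline: str) -> bytes:
    """Render a minimal shareable report card as PNG bytes."""
    card = Image.new("RGB", (800, 420), (16, 18, 32))  # dark background
    draw = ImageDraw.Draw(card)
    draw.text((40, 40), "Signal Lens - Analysis Report", fill=(120, 200, 255))
    draw.text((40, 110), headline[:60], fill=(255, 255, 255))
    draw.text((40, 200), f"Trust Score: {trust_score}/100", fill=(0, 255, 150))
    draw.text((40, 260), f"Hype Factor: {hype_factor}%", fill=(255, 120, 120))
    buf = BytesIO()
    card.save(buf, format="PNG")
    return buf.getvalue()  # hand to st.download_button(data=...)
```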
Challenges we ran into
The "Hallucination" Risk: Early versions would sometimes invent bias where there was none. We solved this by implementing "Factual Anchors", forcing the model to cite specific sentences from the text before assigning a score.
JSON Instability: Getting a large language model to return valid, machine-readable JSON every single time is hard. We had to write robust error handlers and cleaning functions that sanitize stray quotes inside the JSON, so a malformed response never crashes the UI.
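The kind of sanitizing described above can be sketched as a best-effort recovery function (this is our illustration of the technique, not the project's exact code):

```python
import json
import re

def parse_model_json(raw: str) -> dict:
    """Best-effort recovery of a JSON object from an LLM reply."""
    # Drop ```json ... ``` fences the model sometimes wraps around output.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    # Keep only the outermost {...} in case of stray prose around it.
    start, end = cleaned.find("{"), cleaned.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    cleaned = cleaned[start : end + 1]
    # Remove trailing commas before a closing brace or bracket.
    cleaned = re.sub(r",\s*([}\]])", r"\1", cleaned)
    return json.loads(cleaned)
```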
Visual vs. Textual Dissonance: Sometimes an article text was neutral, but the image was biased. Teaching the model to weigh these two conflicting signals to produce a single "Trust Score" took significant prompt tuning.
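The project resolves this weighting inside the model through prompt tuning; purely as a mental model, a post-hoc fusion of the two conflicting signals might look like this (the weighted-average approach and the 0.4 weight are our assumptions):

```python
def fuse_trust(text_score: float, visual_score: float,
               visual_weight: float = 0.4) -> float:
    """Combine text and image trust signals (each 0-100) into one score.
    visual_weight = 0.4 is an illustrative choice, not a tuned value."""
    w = min(max(visual_weight, 0.0), 1.0)  # clamp weight to [0, 1]
    return round((1 - w) * text_score + w * visual_score, 1)
```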
Accomplishments that we're proud of
The "Steel Man" Engine: Seeing the AI successfully argue against a biased article so well that it actually made us pause and rethink our own assumptions.
Speed: We achieved a near-instant analysis flow. Thanks to Gemini 3.0 Flash, going from "Paste" to "Full Report" takes only a moment.
True Inclusivity: Getting the Right-to-Left (RTL) support working perfectly for Persian and Arabic users. Misinformation is a global problem, and we are proud our tool works for non-English speakers natively.
What we learned
We learned that context is everything. A headline might look fake, but when cross-referenced with public sentiment via Grounding, it might be satire or a niche truth. We also learned the power of multimodal input: users paste a screenshot roughly ten times more often than they copy and paste text. It feels magical.
What's next for Signal Lens
Browser Extension: We want to build a lightweight extension that overlays the "Trust Score" directly onto your Twitter/X feed, blurring out low-trust content until you choose to reveal it.
Video Analysis: Expanding Gemini’s multimodal capabilities to analyze short video clips (TikTok/Reels) for deepfake audio and visual manipulation.
Gamification: Adding a "Media Literacy Dojo" where users can test their own ability to spot fallacies against the AI.