Inspiration

In an era of rapid-fire information, "truth" has become harder to verify. We noticed that most fact-checkers either provide a simple "True/False" without context or are too academic for everyday use. We wanted to create Meridian: a bridge between raw news and peer-reviewed reality, giving users a high-level bias analysis and a deep dive into academic literature simultaneously.

What it does

Meridian is a dual-layered verification engine:

Live Fact-Checker: Uses the qwen2.5:3b model (run locally through Ollama) to break down articles into individual claims, assigning transparency scores and identifying logical fallacies.

Scholar Integration: For every claim, Meridian queries Google Scholar to find supporting or contradicting peer-reviewed evidence, visualizing the "Article Bias" and "Credibility" through an intuitive split-pane interface.

Media Bias Mapping: It identifies the political leaning of a source, helping users recognize the "echo chamber" they might be in.
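To make the three layers concrete, here is a sketch of what one analyzed claim might look like as it flows from the fact-checker to the UI. The field names and values are illustrative, not Meridian's exact schema:

```python
# Hypothetical shape of one analyzed claim (illustrative fields only).
claim_analysis = {
    "claim": "Global temperatures have risen about 1.1 °C since pre-industrial times.",
    "transparency_score": 0.82,   # 0-1: how verifiable/sourced the claim is
    "fallacies": [],              # e.g. ["appeal to authority"]
    "scholar_evidence": [         # Google Scholar results, per claim
        {"title": "...", "stance": "supports", "citations": 1540},
    ],
    "source_bias": "center-left", # media bias mapping for the outlet
}
```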

How we built it

Frontend: A clean, minimal UI built with HTML5, CSS3, and JavaScript, focusing on scannability and professional data visualization.

Backend: A FastAPI (Python) server that orchestrates data between multiple APIs.

Intelligence: Gemini 2.0 Flash powers the claim extraction and the batch-processing of academic snippets.

Search: SerpApi (Google Scholar engine) provides real-time access to the peer-reviewed literature indexed by Google Scholar.
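As a rough sketch of the search step, the backend can assemble a SerpApi Google Scholar request per claim along these lines (helper name and the `num` cap are our own illustration):

```python
import urllib.parse

SERPAPI_URL = "https://serpapi.com/search.json"

def build_scholar_query(claim: str, api_key: str, num: int = 5) -> str:
    """Build a SerpApi Google Scholar search URL for one extracted claim.

    "engine=google_scholar" selects Scholar results; num caps how many
    snippets we pull per claim to keep downstream prompts small.
    """
    params = {
        "engine": "google_scholar",
        "q": claim,
        "num": num,
        "api_key": api_key,
    }
    return SERPAPI_URL + "?" + urllib.parse.urlencode(params)
```

The backend then fetches this URL and hands the returned snippets to the model for stance extraction.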

Challenges we ran into

The biggest hurdle was the "429 Quota Exceeded" wall. Initially, our system verified each academic snippet with its own API call, which quickly blew past the Gemini API's rate limits. We re-architected the backend around a batch-processing system that combines multiple data points into a single data-extraction prompt. Even so, we couldn't stay within the free-tier quota, so we fell back to a simpler model, qwen2.5:3b, which we could download and run locally through Ollama. We also battled "template literalization," where the AI would repeat our instructions back instead of processing the data; we solved that with strict role-prompting and regex-based JSON parsing.
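A minimal sketch of the batching and parsing approach described above (prompt wording, function names, and the stance labels are illustrative, not our exact production code):

```python
import json
import re

def build_batch_prompt(claim: str, snippets: list[str]) -> str:
    """Combine several Scholar snippets into ONE extraction prompt,
    so each claim costs one model call instead of one per snippet."""
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return (
        "You are a strict data-extraction engine. For each numbered snippet, "
        "output JSON only: a list of objects with keys 'id' and 'stance' "
        "('supports', 'contradicts', or 'neutral') relative to the claim.\n"
        f"Claim: {claim}\nSnippets:\n{numbered}"
    )

def parse_json_block(raw: str):
    """Pull the first JSON array out of a model reply that may be
    wrapped in prose or markdown fences (the regex-parsing fix)."""
    match = re.search(r"\[.*\]", raw, re.DOTALL)
    if match is None:
        return []
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
```

Strict role-prompting ("output JSON only") plus the tolerant regex parse is what stopped the model from echoing the template back at us.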

Accomplishments that we're proud of

The Split-Pane Scholar Card: We successfully translated a hand-drawn wireframe into a functional, responsive UI component that visualizes citation scores and research bias at a glance.

API Efficiency: Optimizing our backend to handle complex research queries while staying within the constraints of free-tier quotas.

Real-Time Speed: Creating a system that can scan an entire article and cross-reference it with Google Scholar in seconds.

What we learned

We learned the importance of prompt engineering for data extraction. Moving from "chatting" with an AI to using it as a structured data processor requires a shift in mindset—treating the model as a strict logic engine rather than a conversationalist. We also gained deep experience in debugging asynchronous Python backends and handling CORS issues in local development.
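For anyone hitting the same CORS wall in local development, the fix in FastAPI is a few lines of middleware configuration; the frontend origin below is an assumption about a typical static-server setup, not our exact port:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# In local dev the static frontend is served from a different origin
# than the API, so the browser blocks fetch() calls unless the server
# opts in via CORS headers.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5500"],  # illustrative dev-server origin
    allow_methods=["*"],
    allow_headers=["*"],
)
```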

What's next for Meridian

Browser Extension: Bringing Meridian directly to the user's browser to fact-check news as they read it.

Source Reliability Index: Developing a proprietary metric that tracks a news outlet's accuracy over time.

Collaborative Verification: Allowing researchers to "vote" on the AI's interpretations to create a human-in-the-loop verification system.

Built With

css3, fastapi, gemini, html5, javascript, ollama, python, serpapi
