Inspiration
We kept hitting the same problem with AI-assisted research writing: paragraphs looked polished, but citation-grounded facts were often off. Numbers were wrong, claims were overstated, or statements were attributed to sources that did not actually support them.
Manual checking is slow, especially when a single paragraph contains multiple claims to verify. SourceCheck grew out of that trust gap: we wanted a tool that checks source-attributed claims against the user-provided source itself and returns a structured verdict instead of a vague quality score.
What it does
SourceCheck verifies a paragraph against one required source URL.
The current flow:
- Paste a paragraph or short passage.
- Provide the source URL (required) and optional citation hint.
- SourceCheck extracts only cited or clearly source-attributed factual claims.
- Each extracted claim is checked against retrieved evidence from that source.
- The app returns claim-level verdicts: confirmed, incorrect, partially_correct, hallucinated_citation, or unverifiable.
- It also returns a conservatively corrected paragraph that only changes grounded errors.
The result is a readable verification report with a summary bar, verdict cards, original vs corrected text, and source context.
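The per-claim verdict described above can be pictured as a simple record; this is an illustrative sketch, not SourceCheck's actual response schema (the field names are assumptions):

```python
from dataclasses import dataclass
from typing import Literal

# The five verdict labels returned per claim.
Verdict = Literal[
    "confirmed", "incorrect", "partially_correct",
    "hallucinated_citation", "unverifiable",
]

@dataclass
class ClaimResult:
    claim: str        # the extracted source-attributed claim
    verdict: Verdict  # one of the five labels above
    evidence: str     # passage retrieved from the provided source
    explanation: str  # short reason shown on the verdict card

# Example of what one verdict card might carry (values are made up).
result = ClaimResult(
    claim="The study reports a 40% reduction in error rate.",
    verdict="incorrect",
    evidence="...we observed a 25% reduction in error rate...",
    explanation="The cited figure does not match the source.",
)
print(result.verdict)  # → incorrect
```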
How we built it
We built SourceCheck as a full-stack retrieval-grounded system.
- Frontend: React + Vite + Tailwind with a three-stage product flow (hero, checker, results), animated loading states, summary, and verdict cards.
- Backend: FastAPI with two endpoints (`/check-paragraph` and a compatibility `/check`) and strict request/response schemas.
- Grounding: Nia handles source indexing and source-specific search.
- Reasoning + rewrite: Groq (`llama-3.3-70b-versatile`) extracts claims, synthesizes verdicts from retrieved findings, and rewrites conservatively.
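The shape of a single verification run (extract claims, retrieve evidence per claim, judge, then rewrite) can be sketched with stubbed model and retrieval calls; every function here is a hypothetical stand-in for the real Groq and Nia integrations, not SourceCheck's actual code:

```python
# Hypothetical orchestration of one verification run. The extract/
# retrieve/judge helpers stand in for the real Groq and Nia calls.
def extract_claims(paragraph: str) -> list[str]:
    # Stub: the real system asks the LLM for cited factual claims only.
    return [s.strip() for s in paragraph.split(".") if "according to" in s.lower()]

def retrieve_evidence(claim: str, source_url: str) -> str:
    # Stub: the real system searches the indexed source via Nia.
    return f"evidence for: {claim}"

def judge(claim: str, evidence: str) -> dict:
    # Stub: the real system synthesizes a verdict from retrieved findings.
    return {"claim": claim, "verdict": "unverifiable", "evidence": evidence}

def check_paragraph(paragraph: str, source_url: str) -> dict:
    verdicts = [
        judge(c, retrieve_evidence(c, source_url))
        for c in extract_claims(paragraph)
    ]
    # A conservative rewrite keeps the original wording unless a verdict
    # carries a grounded correction; these stubs never do.
    return {"verdicts": verdicts, "corrected_paragraph": paragraph}

report = check_paragraph(
    "According to the source, X rose 10%. This sentence has no citation.",
    "https://example.com/paper",
)
print(len(report["verdicts"]))  # → 1 (only the cited sentence is checked)
```

Note how the uncited sentence is skipped entirely, matching the cited-only extraction described above.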
Implementation details that shaped reliability:
- Defensive JSON cleaning/parsing for model responses.
- Retry + timeout handling for Groq and Nia calls.
- Verdict normalization and post-processing safeguards.
- A rewrite stage that keeps original wording unless a grounded correction exists.
Challenges we ran into
- Long-running indexing/search: source indexing and retrieval can take time, so we designed rotating loading messages and clear waiting feedback.
- Structured output brittleness: LLMs can return malformed JSON or noisy fields, so we added strict parsing, normalization, and fallbacks.
- Verdict precision: distinguishing hallucinated_citation from unverifiable required explicit post-processing rules tied to the language of the retrieved evidence.
- Rewrite safety: we had to prevent over-editing and ensure corrections are only applied when truly grounded.
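A post-processing rule of the kind described above can be expressed as a small normalizer over the raw model verdict and the retrieved evidence; the specific conditions here are illustrative assumptions, not SourceCheck's actual rules:

```python
def normalize_verdict(verdict: str, evidence: str) -> str:
    """Post-process a raw model verdict using the retrieved evidence.

    Illustrative rule: hallucinated_citation is only kept when retrieval
    actually returned evidence that can contradict the attribution; if
    retrieval came back empty, the claim is merely unverifiable.
    """
    allowed = {
        "confirmed", "incorrect", "partially_correct",
        "hallucinated_citation", "unverifiable",
    }
    verdict = verdict.strip().lower().replace(" ", "_")
    if verdict not in allowed:
        return "unverifiable"  # never surface an unknown label to the UI
    if verdict == "hallucinated_citation" and not evidence.strip():
        # With no retrieved evidence we cannot say the source contradicts
        # the attribution, only that we could not verify the claim.
        return "unverifiable"
    return verdict

print(normalize_verdict("Hallucinated Citation", ""))  # → unverifiable
```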
Accomplishments that we're proud of
- Built an end-to-end paragraph verifier that is grounded in the provided source URL.
- Shipped claim extraction + per-claim verification + corrected paragraph rewrite in one run.
- Delivered a clear UI that makes verification interpretable: summary counts, confidence labels, explanations, and source context.
- Added robust backend safeguards (retries, polling, normalization) so demo behavior stays stable under real API conditions.
What we learned
- Grounding quality drives trust more than generation quality.
- Reliable AI products need strict schemas, defensive parsing, and explicit fallback behavior.
- Verdict UX matters: users trust the output more when the app clearly separates claim, evidence, and correction.
- Tight frontend-backend contracts made parallel development much faster.
What's next for SourceCheck
- Expand beyond cited-only extraction to support uncited factual claims with transparent confidence controls.
- Add inline sentence-level annotations directly in the original paragraph.
- Support additional inputs: PDF upload, URL-to-text ingestion, and bibliography-assisted checks.
- Improve evidence traceability with richer quote spans and passage-level grounding links.
- Add saved runs and exportable verification reports.
- Hardening for production: stronger rate-limit recovery, monitoring, and lower-latency retrieval paths.
Built With
- fastapi
- groq
- javascript
- nia
- react
- tailwindcss