Inspiration

Evidentia was inspired by how hard it is to judge whether an academic paper is actually well supported. Papers often look polished, yet weak citations, outdated sources, overly broad claims, and missing counterarguments can still undermine the work. We wanted to build an autonomous agent that plays the skeptical reviewer before the paper ever reaches one.

What it does

Evidentia analyzes an academic draft or paper and produces a structured evidence report. It checks citation quality, flags stale or weak sources, evaluates claim coverage, surfaces counterarguments from related literature, compares methods against field norms, and generates a final verdict such as "Ready to submit" or "Needs major evidence work".

How we built it

We built Evidentia with a vanilla HTML/CSS/JavaScript frontend and a FastAPI backend. Users upload a PDF or paste text, then the backend parses the draft, extracts citations, runs agent checks, and returns a structured AnalysisResult.
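
A trimmed-down sketch of that request path is below. The endpoint, field, and helper names here are illustrative shorthand for our code, not the exact implementation, and the real AnalysisResult schema carries more detail per check:

```python
from fastapi import FastAPI, UploadFile
from pydantic import BaseModel

app = FastAPI()

# Simplified result contract; the real schema has more fields per check.
class AnalysisResult(BaseModel):
    citation_issues: list[str] = []
    unsupported_claims: list[str] = []
    counterarguments: list[str] = []
    scores: dict[str, float] = {}
    verdict: str = "Needs major evidence work"

def run_checks(text: str) -> AnalysisResult:
    # Stand-in for the agent pipeline described in the next paragraph.
    return AnalysisResult()

@app.post("/analyze", response_model=AnalysisResult)
async def analyze(file: UploadFile) -> AnalysisResult:
    raw = await file.read()
    # The real backend routes PDFs through a parser; plain text is decoded directly.
    text = raw.decode("utf-8", errors="ignore")
    return run_checks(text)
```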

The agent workflow checks source quality, finds counterarguments, scores claim coverage, and evaluates data/methods quality:

report = citations + claims + counterarguments + scores
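
In code, that merge looks roughly like the sketch below; the function and field names are our shorthand for the four checks above, not the exact implementation:

```python
# Illustrative stand-ins for the four agent checks; each returns its own
# slice of the report, and build_report just assembles the pieces.
def check_source_quality(text: str) -> list[str]:
    return []  # stale or weak citations would be flagged here

def find_counterarguments(text: str) -> list[str]:
    return []  # opposing findings pulled from related literature

def score_claim_coverage(text: str) -> float:
    return 0.0  # share of claims backed by an adequate citation

def grade_methods(text: str) -> float:
    return 0.0  # data/methods quality versus field norms

def build_report(text: str) -> dict:
    # report = citations + claims + counterarguments + scores
    return {
        "citations": check_source_quality(text),
        "counterarguments": find_counterarguments(text),
        "scores": {
            "claim_coverage": score_claim_coverage(text),
            "methods": grade_methods(text),
        },
    }
```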

The biggest challenge was merging multiple agent outputs into one stable schema the frontend could render reliably. We solved it with shared JSON contracts, mock fixtures, and a report UI that handles partial results gracefully.

Challenges we ran into

Our biggest challenge was turning several agent outputs into one reliable report. Citation checks, counterarguments, grading, and data quality all produced different kinds of information, so we needed a shared JSON schema that the backend and frontend could trust.

We also had to keep the demo reliable while live integrations were still changing. To solve that, we built mock and synthetic fixtures, graceful fallbacks, saved reports, and a frontend that can still render useful results from partial data.
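
A simplified version of that fallback pattern, assuming a registry of check functions (names illustrative):

```python
from typing import Any, Callable

# Defaults let the report render even when a check never ran.
DEFAULTS: dict[str, Any] = {
    "citations": [],
    "counterarguments": [],
    "scores": {},
    "verdict": "Analysis incomplete",
}

def run_all_checks(text: str, checks: dict[str, Callable[[str], Any]]) -> dict[str, Any]:
    report = dict(DEFAULTS)
    for name, check in checks.items():
        try:
            report[name] = check(text)
        except Exception:
            # A failing integration degrades to its default instead of
            # failing the whole report; the frontend renders what it has.
            pass
    return report
```

Because every key in the report has a default, the frontend never has to special-case a missing section.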

Accomplishments that we're proud of

We built a full end-to-end academic review workflow in a short hackathon window: upload or paste a draft, run an agent-style analysis, show progress, render a structured report, save past reports, and export results as a PDF.

We’re especially proud of the report experience. Evidentia does not just return raw scores; it explains citation quality, claim coverage, counterarguments, methods gaps, and concrete next steps in a way an author can act on.

What we learned

We learned that agent output is only useful if it is structured, explainable, and easy to act on. Scores alone are not enough; authors need to know which citations are weak, which claims lack support, what literature disagrees, and what to fix next.

We also learned how important shared contracts are in a multi-person build. Defining one final JSON schema let the backend, agents, report builder, and frontend move in parallel without breaking each other.

What's next for Evidentia

Next, we want Evidentia to handle more real papers end to end, with stronger live academic search, better citation-replacement suggestions, and deeper reviewer-style feedback.

We also want to improve agent memory so Evidentia can track revisions across drafts, compare versions, and show whether the author actually fixed weak evidence, unsupported claims, or methods gaps over time.

Built With

HTML, CSS, JavaScript, Python, FastAPI
