Inspiration
As university students taking classes in an international environment, we often struggle with unclear lectures, fast-paced explanations, and complex topics delivered in varying levels of English. Our notes would be messy, incomplete, or factually uncertain, and verifying every detail became a time-consuming task we kept putting off.
We asked ourselves:
“What if notes could correct themselves?”
That question became the foundation of TruthLens, a tool built for students like us who want to learn faster, more accurately, and with more confidence.
What it does
TruthLens is an AI-powered study workspace that analyzes your notes sentence-by-sentence, flags inaccuracies, explains why they’re incorrect, and proposes corrected versions backed with credible sources.
With one click, you can apply these corrections directly into your document, creating a polished, fact-checked version of your notes.
In short:
TruthLens turns raw notes into reliable knowledge.
How we built it
Frontend:
Built with Next.js 16, React 19, TypeScript, Tailwind, and shadcn/ui for a modern, clean interface.
Backend:
Django REST Framework powers user auth, document storage, sentence parsing, and correction workflows.
AI Engine:
We integrated Ollama to run a local LLM with a custom fact-checking prompt.
The model:
- Evaluates each sentence
- Labels it as true/false
- Proposes a correction
- Explains its reasoning
- Provides source links
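The steps above can be sketched as a single call to a local Ollama server. This is a hypothetical minimal version, not our production code: the actual TruthLens prompt, model name, and response schema differ, and `check_sentence` and `PROMPT_TEMPLATE` are illustrative names.

```python
import json
import urllib.request

# Ollama's local HTTP endpoint (default port).
OLLAMA_URL = "http://localhost:11434/api/generate"

# Illustrative fact-checking prompt; the real one is more detailed.
PROMPT_TEMPLATE = (
    "You are a careful fact checker. Evaluate the sentence below.\n"
    'Respond with JSON only: {{"verdict": "true" or "false", '
    '"correction": "...", "reasoning": "...", "sources": ["..."]}}\n\n'
    "Sentence: {sentence}"
)

def check_sentence(sentence: str, model: str = "llama3") -> dict:
    """Send one sentence to a local Ollama model and return its verdict."""
    payload = json.dumps({
        "model": model,
        "prompt": PROMPT_TEMPLATE.format(sentence=sentence),
        "format": "json",   # ask Ollama to constrain output to valid JSON
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    # Ollama wraps the model's text in the "response" field.
    return json.loads(body["response"])
```

Requesting `"format": "json"` and parsing the reply as structured data is what makes the per-sentence verdicts machine-usable downstream.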
Data Storage:
PostgreSQL stores users, documents, sentences, and corrections.
Smart Correction System:
When a user clicks “Apply Correction,” the backend rewrites only the relevant sentence and regenerates updated sentence boundaries automatically.
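A minimal sketch of what "Apply Correction" might do, assuming sentences are tracked as `(start, end)` character offsets. Both helpers are hypothetical stand-ins: the real backend's parser is more sophisticated than this naive regex splitter.

```python
import re

def split_sentences(text: str) -> list[tuple[int, int]]:
    """Naive segmentation returning (start, end) character offsets.
    A placeholder for whatever parser the real backend uses."""
    spans, start = [], 0
    for match in re.finditer(r"[.!?](?:\s+|$)", text):
        spans.append((start, match.end()))
        start = match.end()
    if start < len(text):
        spans.append((start, len(text)))
    return spans

def apply_correction(body: str, span: tuple[int, int], corrected: str):
    """Replace one sentence span and regenerate all sentence boundaries."""
    start, end = span
    # Preserve trailing whitespace captured in the span (e.g. "cold. ").
    original = body[start:end]
    trailing = original[len(original.rstrip()):]
    new_body = body[:start] + corrected + trailing + body[end:]
    return new_body, split_sentences(new_body)
```

Because the corrected sentence usually has a different length, every boundary after it shifts, which is why the offsets are regenerated rather than patched in place.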
Dev Environment:
Fully containerized with Docker for easy setup and cross-platform reliability.
Challenges we ran into
- Sentence Matching: Ensuring the AI’s returned sentence exactly matched the extracted database version required multiple iterations.
- Applying Partial Corrections: Replacing only one sentence inside a larger document while keeping indexes accurate was unexpectedly complex.
- LLM Output Parsing: LLMs often return malformed JSON, so we built sanitization and validation layers.
- Real-Time UX: Coordinating loading states, highlights, and correction previews required careful UI design.
- Local LLM Performance: Running fact-checking models locally with Ollama needed optimization to keep analysis smooth. It also demanded a lot of RAM, so only one of us had a machine that could test it.
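The sanitization layer mentioned above can be sketched roughly as follows. This is an assumed implementation, not our actual code: it strips common failure modes (code fences, chatter before and after the JSON) and checks for required keys. The brace matching deliberately ignores braces inside strings, which is acceptable for a sketch but not bulletproof.

```python
import json

def extract_json(raw: str) -> dict:
    """Pull the first JSON object out of a noisy LLM reply."""
    # Strip markdown code fences the model sometimes adds.
    cleaned = raw.replace("```json", "").replace("```", "")
    start = cleaned.index("{")
    depth = 0
    # Walk forward to the matching closing brace.
    for i, ch in enumerate(cleaned[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(cleaned[start:i + 1])
    raise ValueError("no complete JSON object found")

# Hypothetical schema keys for the fact-check verdict.
REQUIRED_KEYS = {"verdict", "correction", "reasoning", "sources"}

def validate(verdict: dict) -> dict:
    """Reject replies that parse as JSON but miss required fields."""
    missing = REQUIRED_KEYS - verdict.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return verdict
```

Validating after parsing matters as much as parsing itself: a reply can be syntactically valid JSON and still be unusable if the model dropped a field.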
Accomplishments we’re proud of
- Built a fully working AI fact-checking system, not just a prototype.
- Integrated a local LLM with structured, predictable outputs.
- Designed a polished UI showing flagged sentences, explanations, sources, and instant corrections.
- Created a robust correction workflow with proper sentence rewriting and regeneration.
- Wrote strong GitBook documentation.
- Overcame heavy backend/AI parsing bugs and shipped something reliable.
What we learned
- How to build AI-powered features that feel useful, not gimmicky.
- How to integrate LLMs safely and predictably, and entirely for free: we didn't spend a cent on AI.
- The importance of clean prompt engineering and output validation.
- How to handle document segmentation, dynamic rewriting, and sentence index recalculation.
- How to coordinate frontend–backend–AI workflows effectively.
- That small UX touches dramatically improve the user experience of AI features.
What’s next for TruthLens
- Browser extension to fact-check text anywhere
- Collaboration mode for study groups
- Export to PDF/Markdown with citations
- AI-generated summaries & quizzes
- Learning analytics showing accuracy trends over time
- Mobile apps for study-on-the-go
- Plug-ins for Notion, Obsidian, Google Docs
Our goal is to turn TruthLens into a full AI-powered learning ecosystem.
Built With
- django
- docker
- huggingface
- nextjs
- ollama
- postgresql
- python
- react
- tailwindcss
- typescript