Inspiration
I used to think AI hallucinations stemmed from poor data or tuning issues. But a transformative insight shifted everything: hallucinations emerge from topological failures - specifically, the loss of torsion and the erasure of discrete logical boundaries.
Reading "Torsion in Persistent Homology and Neural Networks" (Walch, 2025), I realized how modern models, through operations like dimensionality reduction and activation shifts, flatten out the very mathematical structure that stabilizes meaning. Inspired by ideas from topology, cohomology, and gauge theory, I set out to build a hallucination detector that sees language as structure, not just text.
What it does
Veritas is a real-time hallucination sentinel that analyzes each AI-generated sentence through multiple rigorous lenses:
- Topological Drift using persistent homology
- Semantic Torsion in concept space
- Temporal Logic Validation of facts and events
- Web-Factual Cross-Verification with source attribution
The result: a probabilistic, explainable hallucination score that flags factual, logical, and structural violations with mathematical clarity.
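For intuition, here is a minimal sketch of how the component scores could be blended into one probability. The component names, weights, prior, and logistic blend below are illustrative stand-ins, not the production formula:

```python
# Illustrative sketch only: blend four per-sentence component scores
# (each in [0, 1], higher = more suspicious) into a single probability.
# Weights and prior are assumed values, not the real Veritas parameters.
import math

def hallucination_score(drift: float, torsion: float,
                        temporal: float, factual: float) -> float:
    """Return a probability that the sentence is hallucinated."""
    weights = {"drift": 1.5, "torsion": 1.0, "temporal": 2.0, "factual": 2.5}
    bias = -3.0  # prior log-odds: most sentences assumed non-hallucinated
    logit = (bias
             + weights["drift"] * drift
             + weights["torsion"] * torsion
             + weights["temporal"] * temporal
             + weights["factual"] * factual)
    return 1.0 / (1.0 + math.exp(-logit))  # logistic squash to [0, 1]

# Example: strong factual mismatch plus moderate topological drift
print(round(hallucination_score(0.6, 0.2, 0.1, 0.9), 3))
```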
How we built it
I combined:
- Semantic graph construction + simplicial complex generation
- Persistent homology algorithms to detect topological anomalies (sketched below)
- Geodesic distance for entity coherence checks
- Bayesian validation pipelines for web-verifiable facts
- A lightweight React/Next.js frontend with a GPT-core backend for real-time sentence parsing

Each sentence is evaluated both geometrically and factually. No shortcuts.
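Here is a stripped-down sketch of the homology step: build a Vietoris-Rips filtration over sentence embeddings and use total H1 (loop) persistence as a drift proxy. The random vectors stand in for real embeddings, and the drift statistic is illustrative, not the exact one in the pipeline:

```python
# Sketch of the topological-drift check, assuming sentence embeddings are
# already available (random vectors stand in for them here).
import numpy as np
from ripser import ripser  # pip install ripser

def total_h1_persistence(embeddings: np.ndarray) -> float:
    """Build a Vietoris-Rips filtration on the embedding point cloud and
    sum the lifetimes of all 1-dimensional (loop) features."""
    dgms = ripser(embeddings, maxdim=1)["dgms"]
    h1 = dgms[1]  # (birth, death) pairs for H1
    finite = h1[np.isfinite(h1[:, 1])] if len(h1) else h1
    return float(np.sum(finite[:, 1] - finite[:, 0])) if len(finite) else 0.0

# Compare loop structure of a reference context vs. the context plus a
# candidate sentence's neighborhood: a large jump suggests the candidate
# warped the concept space.
rng = np.random.default_rng(0)
context = rng.normal(size=(40, 16))          # stand-in context embeddings
candidate = np.vstack([context, rng.normal(loc=3.0, size=(5, 16))])
drift = abs(total_h1_persistence(candidate) - total_h1_persistence(context))
print(f"topological drift proxy: {drift:.3f}")
```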
Challenges we ran into
Most NLP models assume smooth vector spaces, but Veritas needed to simulate discrete topological boundaries. Translating concepts like torsion and sheaf cohomology into working code demanded not just math, but new ways of thinking about language.
Integrating live fact-checking with deep geometry wasn’t trivial either - every architecture choice had to balance precision and performance.
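As a taste of the Bayesian validation layer, here is a minimal sketch: each retrieved source either supports or contradicts a claim and updates a prior belief that the claim is true. The source reliabilities are assumed values for illustration, not measured quantities:

```python
# Hedged sketch of the Bayesian web-verification step.
def posterior_truth(prior: float, verdicts: list[tuple[bool, float]]) -> float:
    """verdicts: (supports_claim, source_reliability) pairs, where
    reliability r = P(source agrees with the ground truth)."""
    odds = prior / (1.0 - prior)
    for supports, r in verdicts:
        # Likelihood ratio P(verdict | claim true) / P(verdict | claim false)
        lr = r / (1.0 - r) if supports else (1.0 - r) / r
        odds *= lr
    return odds / (1.0 + odds)

# Two reliable sources support the claim; one weak source contradicts it.
print(posterior_truth(0.5, [(True, 0.9), (True, 0.85), (False, 0.6)]))
```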
Accomplishments that we're proud of
- Built what we believe is the first hallucination tracker grounded in algebraic topology and semantic cohomology
- Created explainable outputs with specific violation breakdowns
- Made abstract mathematics actionable in real-time analysis
- Developed a novel scoring system that blends entropy, topology, and truth
What we learned
- Torsion matters - it's the missing stabilizer in current transformer models (see the note after this list).
- Current AI interpolates too much - without topological barriers, any concept can bleed into any other.
- Attention (which behaves like closed 1-forms) and MLPs (which play the role of sheaf cohomology) must be fused with torsion-aware design to preserve logical integrity.
- Most of all, I learned that real intelligence isn’t just about prediction: it's about structure.
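A note on the torsion point above: by the universal coefficient theorem, integral homology can carry finite cyclic torsion summands, but over real coefficients, the setting continuous embeddings implicitly work in, that torsion vanishes. This is exactly the structure loss described above:

```latex
% Integral homology can contain torsion summands...
H_n(X;\mathbb{Z}) \;\cong\; \mathbb{Z}^{b_n} \oplus
  \underbrace{\mathbb{Z}/k_1 \oplus \cdots \oplus \mathbb{Z}/k_m}_{\text{torsion}}
% ...but over a field of characteristic zero the torsion disappears:
H_n(X;\mathbb{R}) \;\cong\; \mathbb{R}^{b_n}
```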
What's next for Veritas
- Building a plug-in for AI developers to visualize hallucination risk in production
- Extending our framework to video and multimodal hallucinations
Veritas is just the beginning. The future of trustworthy AI must be topologically aware - only then can it reason, not just predict.
Built With
- bolt
- gpt-4
- openai