Atlas AI: An Organizational Research & Reasoning Agent

Inspiration

Organizations rely on large internal knowledge bases, yet critical decisions are often made on contradictory or weak information.
Most AI tools retrieve answers — they don’t question whether those answers are actually reliable.
Atlas AI was inspired by the need for an AI that can audit what an organization believes before it acts on it.


What I Built

Atlas AI is a research and reasoning agent that works inside a constrained organizational knowledge base.
It detects:

  • Contradictions between documents
  • Claims with weak or missing evidence
  • Areas of high uncertainty

Instead of just summarizing content, Atlas AI evaluates how trustworthy each claim really is.

Each claim (c) is scored using retrieved supporting and conflicting evidence:

\[ \text{Confidence}(c)=\frac{|E_{\text{support}}|}{|E_{\text{support}}|+|E_{\text{conflict}}|+\lambda} \]

where \( E_{\text{support}} \) and \( E_{\text{conflict}} \) are the retrieved pieces of supporting and conflicting evidence for \( c \), and \( \lambda \) is a small smoothing constant that keeps claims with very little evidence from scoring as fully confident.

This makes hidden risks inside organizational knowledge visible.
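As a concrete illustration, here is that scoring rule as a small Python function (the function name and the default λ = 1.0 are illustrative, not the exact values Atlas AI uses):

```python
from typing import Sequence

def confidence(support: Sequence[str], conflict: Sequence[str], lam: float = 1.0) -> float:
    """Confidence(c) = |E_support| / (|E_support| + |E_conflict| + lam).

    lam > 0 acts as a smoothing term: a claim backed by a single passage
    never reaches 1.0, and a claim with no evidence at all scores 0.0.
    """
    return len(support) / (len(support) + len(conflict) + lam)

# 3 supporting passages, 1 conflicting passage, lam = 1.0  ->  3 / 5 = 0.6
print(confidence(["doc_a", "doc_b", "doc_c"], ["doc_d"]))
```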


How I Built It

Atlas AI uses a RAG + reasoning pipeline:

  1. Documents are embedded into a vector store.
  2. The system extracts claims from the documents.
  3. For each claim, it retrieves supporting and conflicting evidence.
  4. A reasoning layer compares them and assigns a confidence score.

The result is a system that behaves like a research analyst, not a chatbot.
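In rough Python, the per-claim loop looks something like the sketch below. The vector store is assumed to expose a LangChain-style `similarity_search`, and `classify_stance` is a hypothetical stand-in for the reasoning layer's support/conflict check:

```python
def audit_claims(claims, vector_store, classify_stance, lam=1.0):
    """Retrieve evidence for each claim, split it into supporting vs.
    conflicting passages, and attach a confidence score."""
    report = []
    for claim in claims:
        # Pull the most relevant passages from the constrained knowledge base.
        passages = vector_store.similarity_search(claim, k=8)
        support, conflict = [], []
        for passage in passages:
            stance = classify_stance(claim, passage)  # "support" | "conflict" | "neutral"
            if stance == "support":
                support.append(passage)
            elif stance == "conflict":
                conflict.append(passage)
        score = len(support) / (len(support) + len(conflict) + lam)
        report.append({"claim": claim, "support": support,
                       "conflict": conflict, "confidence": score})
    return report
```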


What I Learned

LLMs are powerful, but without explicit evidence checking and uncertainty modeling, they produce overconfident answers.
Atlas AI showed me that structure and verification matter more than raw model size.


Challenges

  • Getting the model to admit uncertainty instead of hallucinating confidence
  • Detecting semantic contradictions across differently worded documents
  • Ensuring all reasoning stayed within the constrained knowledge base

Atlas AI turns organizational knowledge into something that is not just searchable, but self-auditing and trustworthy.

Built With

  • LangChain
  • LangGraph
  • Firecrawl
  • Pinecone


Updates


AXIOM — Dev Update

Hi guys, just pushed a major internal upgrade.

I set up a structured agent workflow using LangChain + LangGraph, where AXIOM moves through clear agent states at each stage (ingest → retrieve → reason → verify → score).
This makes the system deterministic and debuggable, instead of running as a single black-box prompt.
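A rough sketch of how a graph like that can be wired up with LangGraph (the state fields and node bodies below are placeholders, not AXIOM's actual implementations):

```python
from typing import TypedDict, List
from langgraph.graph import StateGraph, END

class AxiomState(TypedDict):
    sources: List[str]   # raw documents / URLs to ingest
    claims: List[str]    # claims extracted from the knowledge base
    evidence: dict       # claim -> retrieved supporting/conflicting passages
    scores: dict         # claim -> confidence score

# Placeholder nodes; each receives the state and returns a dict of updates.
def ingest(state: AxiomState) -> dict:
    return {}  # load documents into the vector store

def retrieve(state: AxiomState) -> dict:
    return {}  # fetch relevant passages for each claim

def reason(state: AxiomState) -> dict:
    return {}  # compare supporting vs. conflicting evidence

def verify(state: AxiomState) -> dict:
    return {}  # check conclusions stay inside the knowledge base

def score(state: AxiomState) -> dict:
    return {}  # assign the confidence score

graph = StateGraph(AxiomState)
for name, fn in [("ingest", ingest), ("retrieve", retrieve),
                 ("reason", reason), ("verify", verify), ("score", score)]:
    graph.add_node(name, fn)

graph.set_entry_point("ingest")
graph.add_edge("ingest", "retrieve")
graph.add_edge("retrieve", "reason")
graph.add_edge("reason", "verify")
graph.add_edge("verify", "score")
graph.add_edge("score", END)

app = graph.compile()
```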

For knowledge ingestion:

  • Firecrawl is used to pull content directly from website URLs
  • PDFs and documents are embedded and stored in Pinecone
  • A RAG pipeline retrieves only relevant evidence during reasoning
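A simplified version of that ingestion path, assuming LangChain's FireCrawlLoader and Pinecone integrations; the URL, embedding model, chunk sizes, and index name here are placeholders, not the exact configuration AXIOM uses:

```python
import os
from langchain_community.document_loaders import FireCrawlLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# Pull a page's content via Firecrawl.
docs = FireCrawlLoader(
    url="https://example.com/handbook",
    api_key=os.environ["FIRECRAWL_API_KEY"],
    mode="scrape",
).load()

# Chunk, embed, and upsert into an existing Pinecone index
# (OPENAI_API_KEY and PINECONE_API_KEY are read from the environment).
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
store = PineconeVectorStore.from_documents(
    chunks,
    embedding=OpenAIEmbeddings(),
    index_name="axiom-knowledge-base",  # illustrative index name
)

# During reasoning, only the top-k relevant passages are retrieved per claim.
retriever = store.as_retriever(search_kwargs={"k": 8})
```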

AXIOM can now reason strictly inside a controlled knowledge base while checking for contradictions and weak evidence.

Next up: improving evidence tracing and visualizing how conclusions are formed.
