Decision Autopsy
Inspiration
In real teams, decisions are judged almost entirely by outcomes. When something fails, the original context—constraints, assumptions, missing information—gets reconstructed imperfectly, often with hindsight bias.
I wanted to build a system that answers a different question:
Did this decision make sense given what was known at the time it was made?
That required treating decision context itself as a durable, auditable artifact—not as a narrative written after the fact. Decision Autopsy was designed to preserve that context and analyze it safely, without pretending that AI has authority over human judgment.
What it does
Decision Autopsy is an internal decision forensics system.
It:
- Captures raw human decision context (Slack threads, emails, notes)
- Freezes that context immutably at time-T
- Converts it into structured, human-approved data
- Runs a deterministic, two-step AI analysis pipeline
- Produces explainable hypotheses, not verdicts
- Explicitly models uncertainty and confidence
The system is designed for post-mortems and pre-mortems, not recommendations or autonomous decisions.
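To make the "structured, human-approved data" idea concrete, here is a minimal sketch of what a frozen decision-context record might look like. The field names are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical shape of a human-approved decision context record.
// Field names are illustrative only; the real schema may differ.
interface DecisionContext {
  decisionId: string;
  capturedAt: string;    // ISO timestamp of the immutable time-T freeze
  assumptions: string[]; // what was believed to be true at the time
  constraints: string[]; // hard limits known when deciding
  risks: string[];       // risks acknowledged before the decision
  metrics: string[];     // signals the team planned to watch
  approvedBy: string;    // human reviewer who approved the extraction
}

// Example record as it might enter the analysis pipeline.
const example: DecisionContext = {
  decisionId: "dec-2024-017",
  capturedAt: "2024-03-01T10:00:00Z",
  assumptions: ["Traffic will stay under 10k req/s"],
  constraints: ["Must ship before end of quarter"],
  risks: ["Vendor API may be deprecated"],
  metrics: ["p95 latency", "error rate"],
  approvedBy: "reviewer@example.com",
};
```

Because only records of this deterministic shape enter analysis, every downstream hypothesis can be traced back to an explicit, human-approved field rather than to free-form narrative.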
How I built it
The system is implemented as a bounded, deterministic pipeline:
- Raw Context Capture — User-pasted content is stored immutably and never overwritten.
- Paste-to-Parse (AI-assisted extraction) — A constrained AI step extracts assumptions, constraints, risks, and metrics. This output must be reviewed and approved by a human before it is saved.
- Structured Context Layer — Only human-approved, deterministic JSON enters the analysis pipeline.
- AI Analysis Pipeline (2 calls total)
  - Core Analysis: assumption checks, bias hypotheses, missing signals
  - Synthesis: counterfactuals, summaries, uncertainty handling
- Schema Validation & Failure Handling — All AI outputs are validated. If parsing or validation fails, the analysis is marked `LOW_CONFIDENCE` and the system degrades safely without crashing.
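The validate-or-degrade step can be sketched as follows. The `AnalysisResult` shape and field names are assumptions for illustration; the real schema is richer:

```typescript
// Sketch of the schema-validation and failure-handling step.
// The AnalysisResult shape here is a hypothetical simplification.
type Confidence = "HIGH" | "MEDIUM" | "LOW_CONFIDENCE";

interface AnalysisResult {
  hypotheses: string[];
  confidence: Confidence;
}

// Minimal structural check on raw model output (a type guard).
function isValidAnalysis(value: unknown): value is AnalysisResult {
  const v = value as AnalysisResult;
  return (
    typeof value === "object" &&
    value !== null &&
    Array.isArray(v.hypotheses) &&
    v.hypotheses.every((h) => typeof h === "string") &&
    ["HIGH", "MEDIUM", "LOW_CONFIDENCE"].includes(v.confidence)
  );
}

// Parse the model's JSON output; on any parse or validation
// failure, return a clearly labeled LOW_CONFIDENCE result
// instead of crashing or silently retrying.
function parseAnalysis(raw: string): AnalysisResult {
  try {
    const parsed: unknown = JSON.parse(raw);
    if (isValidAnalysis(parsed)) return parsed;
  } catch {
    // fall through to the degraded result
  }
  return { hypotheses: [], confidence: "LOW_CONFIDENCE" };
}
```

The key property is that the failure path produces a valid, labeled result, so the rest of the system never has to handle malformed AI output.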
The architecture deliberately prioritizes reliability, auditability, and failure visibility over model sophistication.
Challenges I ran into
- Reconciling probabilistic LLMs with deterministic system behavior
- Handling invalid or partial AI outputs without breaking the demo
- Resisting the urge to turn the system into a chatbot
- Designing confidence scores that signal uncertainty without false authority
- Making AI failures visible rather than silently corrected
Most of the complexity came from building guardrails around the AI, not from the AI itself.
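One guardrail pattern from the list above — making failures visible rather than silently corrected — can be sketched as a small mapping from internal confidence to what the API client sees. The status values and note text are assumptions, not the actual response format:

```typescript
// Sketch: surface degraded analyses to the client explicitly.
// Status values and wording are illustrative assumptions.
interface AnalysisView {
  status: "COMPLETE" | "DEGRADED";
  confidence: "HIGH" | "MEDIUM" | "LOW_CONFIDENCE";
  note?: string;
}

// A degraded analysis is labeled in the response, never hidden
// behind a silent retry or a fabricated high-confidence answer.
function toView(confidence: AnalysisView["confidence"]): AnalysisView {
  if (confidence === "LOW_CONFIDENCE") {
    return {
      status: "DEGRADED",
      confidence,
      note: "AI output failed validation; results are partial.",
    };
  }
  return { status: "COMPLETE", confidence };
}
```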
Accomplishments that I'm proud of
- Treating decisions as first-class system artifacts
- Clean separation of raw vs structured context
- A failure-tolerant, demo-safe AI pipeline
- Explicit confidence modeling and uncertainty handling
- A system that could plausibly exist inside a real company
- End-to-end deployment using only free-tier services
What I learned
- System design matters more than model choice
- Without structure, AI output is not auditable
- Human-in-the-loop is not a weakness—it's a requirement
- Most AI failures happen at integration boundaries, not inference
- Reliability and clarity beat "smart" behavior in real tools
What's next for Decision Autopsy
Potential future work includes:
- Authentication and role-based access
- Versioned decision histories
- Background analysis jobs
- External integrations (Slack, email)
- More granular confidence diagnostics
- Team-level decision pattern analysis
These are incremental extensions; the core system design is already complete.
Built With
- api
- express.js
- javascript
- neondb
- node.js
- postgresql
- prisma
- react
- render
- rest
- tailwindcss
- typescript
- vercel
- vite