Inspiration

Working across finance and consulting, I kept seeing the same problem: the smartest people in the room were spending most of their time on the least valuable work. A single M&A deal requires hundreds of hours of due diligence: analysts buried in PDFs, manually pulling figures, rebuilding models from scratch. It's expensive, slow, and scales poorly.

The insight wasn't that AI could help with this work. It was that the entire workflow, from document ingestion to final report, followed a repeatable, structured logic. If a process is repeatable, it can be automated. I wanted to build the system that eliminates that bottleneck entirely.


What it does

TAM automates the full financial due diligence workflow. Feed it a set of financial documents (10-Ks, CIMs, pitch decks) and it returns a structured, client-ready report in minutes. It extracts key metrics, runs standard analyses, flags anomalies, benchmarks against comps, and generates narrative sections — end to end, without an analyst in the loop.

Core metrics computed automatically include:

$$\text{EBITDA Margin} = \frac{\text{EBITDA}}{\text{Revenue}} \times 100$$

$$\text{Debt-to-Equity} = \frac{\text{Total Debt}}{\text{Shareholders' Equity}}$$

$$\text{Current Ratio} = \frac{\text{Current Assets}}{\text{Current Liabilities}}$$

$$\text{Revenue CAGR} = \left(\frac{\text{Revenue}_n}{\text{Revenue}_0}\right)^{\frac{1}{n}} - 1$$
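The four formulas above translate directly into code. This is a minimal illustrative sketch, not TAM's actual implementation; function names and the example figures are ours:

```python
def ebitda_margin(ebitda: float, revenue: float) -> float:
    """EBITDA as a percentage of revenue."""
    return ebitda / revenue * 100

def debt_to_equity(total_debt: float, shareholders_equity: float) -> float:
    """Total debt divided by shareholders' equity."""
    return total_debt / shareholders_equity

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Current assets divided by current liabilities."""
    return current_assets / current_liabilities

def revenue_cagr(revenue_start: float, revenue_end: float, years: int) -> float:
    """Compound annual growth rate of revenue over `years` periods."""
    return (revenue_end / revenue_start) ** (1 / years) - 1

# Example: $30M EBITDA on $120M revenue -> 25.0% margin
print(ebitda_margin(30.0, 120.0))  # 25.0
```

Each function is pure and stateless, which keeps the analysis layer easy to test against known financial statements.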


How we built it

TAM is built as a modular agentic pipeline — discrete agents handling ingestion, extraction, analysis, and report generation independently. This lets each layer be upgraded without breaking the rest.

The pipeline follows four stages:

  1. Document Ingestion — parsing 10-Ks, CIMs, pitch decks, and financial statements across formats
  2. Entity & Metric Extraction — pulling key financials, KPIs, and risk factors with source citations
  3. Analysis Layer — running comps, flagging anomalies, and computing standardized ratios
  4. Report Generation — assembling findings into structures firms already recognize, with no learning curve on the receiving end
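The four stages above can be sketched as independently swappable functions composed into one pipeline. This is a hedged, heavily stubbed illustration of the modular structure, not TAM's real code; every name and signature here is an assumption:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    metric: str
    value: float
    source: str  # citation back to the originating document

def ingest(paths: list[str]) -> list[str]:
    # Stage 1: parse 10-Ks, CIMs, decks into documents (stubbed: names pass through).
    return list(paths)

def extract(docs: list[str]) -> list[Finding]:
    # Stage 2: pull key financials with source citations (stubbed values).
    return [Finding(metric="revenue", value=0.0, source=d) for d in docs]

def analyze(findings: list[Finding]) -> list[Finding]:
    # Stage 3: comps, anomaly flags, standardized ratios (pass-through here).
    return findings

def generate_report(findings: list[Finding]) -> str:
    # Stage 4: assemble findings into a familiar report layout.
    return "\n".join(f"{f.metric}: {f.value} (source: {f.source})" for f in findings)

def run_pipeline(paths: list[str]) -> str:
    return generate_report(analyze(extract(ingest(paths))))
```

Because each stage only consumes the previous stage's output, any one layer can be upgraded, say, a better table parser in `extract`, without touching the rest.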

Challenges we ran into

Document heterogeneity was the hardest technical problem. Financial documents come in every format: scanned PDFs, inconsistent table structures, non-standard accounting treatments. Building extraction that generalizes reliably required significant iteration.

Beyond the tech, trust was the real challenge. Firms don't let automated systems touch live deals without auditability. Every figure in a TAM report traces back to its source document. That wasn't a feature added late; it was a core design constraint from day one.
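One way to make that traceability concrete is to never pass around a bare number: every extracted value carries its provenance. A minimal sketch of the idea, with field names that are our assumptions rather than TAM's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SourcedFigure:
    value: float
    document: str
    page: int
    snippet: str  # verbatim text the value was extracted from

    def citation(self) -> str:
        """Human-readable citation for the report appendix."""
        return f'{self.document}, p.{self.page}: "{self.snippet}"'

# Hypothetical example figure
rev = SourcedFigure(412.5, "acme_10k.pdf", 47, "Total revenue of $412.5M")
print(rev.citation())
```

Making the record frozen means a figure can't be silently altered downstream of extraction, which is the property an auditor actually cares about.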


Accomplishments that we're proud of

We compressed a process that typically takes weeks into minutes, without sacrificing the structure or credibility that deal teams expect. And because every output is fully auditable, the system is viable for real enterprise use, not just a demo.


What we learned

Reliability matters more than capability in financial contexts. A hallucinated number isn't a minor bug; it's a liability. Hard constraints matter: source-tied citations, confidence thresholds, and human checkpoints at critical stages.
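A confidence threshold with a human checkpoint behind it can be as simple as a routing gate. A minimal sketch; the 0.90 cutoff and the function name are illustrative assumptions, not TAM's actual settings:

```python
REVIEW_THRESHOLD = 0.90  # illustrative cutoff, not TAM's real value

def route(figures: list[tuple[str, float]]) -> tuple[list[str], list[str]]:
    """Split extracted figures into auto-approved vs. human-review queues
    based on extraction confidence."""
    approved, review = [], []
    for name, confidence in figures:
        (approved if confidence >= REVIEW_THRESHOLD else review).append(name)
    return approved, review

# A high-confidence figure flows through; a shaky one waits for a human.
auto_ok, needs_human = route([("revenue", 0.97), ("ebitda_adj", 0.62)])
```

The point of the gate is asymmetry: a missed automation opportunity costs minutes, while a hallucinated number in a live deal costs trust.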

We also learned that enterprise adoption isn't won on features; it's won on workflow fit. Firms want their existing process, faster. TAM meets them there.


What's next for TAM

Deeper integration with the tools that deal teams already use, expanded coverage across deal types beyond M&A, and building toward a model where TAM doesn't just produce the report but flags the risks your team would have missed.
