Inspiration

Supply chains break slowly and then all at once. The Clorox cyberattack cost $356 million. The Hershey Halloween shortage was visible in cocoa futures and Reddit complaints weeks before the CEO said anything on an earnings call. The signals were there; they just weren't in one place. We kept asking the same question: if the data is already public, why does the market only react after the press release? The answer isn't that the signals are hidden. It's that nobody was combining them fast enough to matter. That's the gap we wanted to close.

What it does

The Pre-Mortem Machine monitors 20 major CPG companies across six live data streams: FDA recall filings, Wikipedia edit frequency, FRED macro data, Adzuna job posting velocity, SEC 8-K keyword density, and Google Trends search behaviour. Every signal is normalised per company against its own historical baseline and weighted into a composite fragility score from 0 to 10. When a score crosses the critical threshold, the system generates a Preliminary Post-Mortem Report: cause of failure, confidence percentage, and an estimated time-of-death range. The framing is deliberate. A post-mortem is what you run after someone dies. We run ours before.

The system does two other things we think are genuinely novel. It ranks companies by Canary Score: how many days before a sector-wide stress event their signal has historically fired first. And it traces a Blame Chain: when one company fractures, it identifies which companies share upstream suppliers and flags the next victim as NOT YET PRICED IN.
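The per-baseline normalisation and weighted composite can be sketched roughly like this. The weights, the z-score normalisation, and every name below are our illustrative assumptions, not the app's actual code:

```python
from statistics import mean, stdev

# Hypothetical per-signal weights (summing to 1.0); the real weighting is tuned in the app.
WEIGHTS = {
    "fda_recalls": 0.25,
    "wiki_edits": 0.10,
    "fred_macro": 0.15,
    "job_velocity": 0.15,
    "sec_8k_keywords": 0.20,
    "search_trends": 0.15,
}

def normalise(value: float, baseline: list[float]) -> float:
    """Scale a raw reading against the company's own history into [0, 1]."""
    if len(baseline) < 2:
        return 0.0
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    z = (value - mu) / sigma
    # Readings at or below the historical mean map to 0; three or more
    # standard deviations above it saturate at 1.
    return max(0.0, min(1.0, z / 3))

def fragility_score(readings: dict[str, float],
                    baselines: dict[str, list[float]]) -> float:
    """Weighted composite on a 0-10 scale."""
    total = sum(
        WEIGHTS[name] * normalise(readings[name], baselines[name])
        for name in WEIGHTS
    )
    return round(10 * total, 2)
```

Normalising each company against its own baseline is what lets a quiet company's small spike score as loudly as a noisy company's large one.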

How we built it

Python FastAPI backend with six independent collectors running in parallel. Each collector hits a free public API (openFDA, MediaWiki, FRED, Adzuna, SEC EDGAR, pytrends), normalises its output to a float between 0 and 1, and writes to a shared JSON store. The scoring engine reads from that store on every request and recomputes the composite score in real time.

The frontend is React and Vite, styled in the Nothing OS design language: pure black, Roboto Mono, dot-matrix status indicators. It polls the backend every 60 seconds on a delta endpoint that only returns companies whose score changed by 0.3 or more, which keeps UI updates meaningful rather than noisy.

The backtester runs against 20 real historical events sourced from SEC filings, FDA enforcement reports, and earnings call transcripts spanning 2021 to 2024. We hand-sourced and verified every event. We started at the beginning of the day and had everything running by the time we submitted.
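The delta endpoint's filtering reduces to a small diff against the last payload each poller received. A minimal sketch with hypothetical names (in the app this logic sits behind a FastAPI route and the real state handling may differ):

```python
DELTA_THRESHOLD = 0.3  # minimum score movement worth pushing to the UI

def delta_update(scores: dict[str, float],
                 last_sent: dict[str, float]) -> dict[str, float]:
    """Return companies whose composite score moved by at least
    DELTA_THRESHOLD since the last poll (companies never sent before
    always count), then record what was sent so the next poll diffs
    against it."""
    changed = {
        ticker: score
        for ticker, score in scores.items()
        if ticker not in last_sent
        or abs(score - last_sent[ticker]) >= DELTA_THRESHOLD
    }
    last_sent.update(changed)
    return changed
```

Only updating `last_sent` for the tickers actually returned means small drifts accumulate until they cross 0.3 and finally surface, rather than being silently absorbed poll by poll.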

Challenges we ran into

Reddit was supposed to be our consumer signal. We had the collector built and tested. Then we discovered that Reddit's self-service API access was restricted in November 2025, two months before the hackathon. We pulled the signal entirely and replaced it with Google Trends on the day, which turned out to be a cleaner signal anyway, but losing three hours to that discovery was painful.

The backtest cache was recomputing on every API request because a single constant was set to zero instead of 24. The card in the UI was showing 0, 0%, 0 for most of the session. Debugging that while the rest of the system was still being built was not fun.

The supplier graph powering the Blame Chain is static. We wanted it to be dynamic, built from 10-K disclosures, but we did not have time. It works; it's just not something we'd ship as-is.

Accomplishments that we're proud of

The historical backtester. We didn't want to show synthetic validation and call it done, so we went back through four years of public records (SEC filings, FDA enforcement notices, USDA recall databases, earnings transcripts) and built a dataset of 20 real CPG supply chain events with verified dates and documented outcomes. The composite score fired an average of 19 days before public disclosure across those events, with 71% detection accuracy. Those numbers are real and sourced.

The forensic framing also landed better than we expected. Calling the output a Preliminary Post-Mortem Report instead of a dashboard score changed how people read it. It forces specificity: every claim has to trace back to a named data source.

What we learned

Framing matters as much as the underlying system. The same score displayed as a number on a dashboard versus as a forensic dossier, with a case number, a filing timestamp, and a cause-of-failure evidence list, feels completely different even though the data is identical.

We also learned that removing Reddit from the signal stack actually made the system more defensible. Every remaining signal is a free, publicly documented government or open-access API with no rate-limiting concerns and no authentication fragility. That's a better story to tell than one that depends on a platform that can change its API policy overnight.

And honestly, hand-sourcing the historical validation dataset the hard way, going through actual SEC filings and FDA notices rather than generating plausible-sounding events, was worth the time. The numbers feel different when you know where they came from.

What's next for Pre-Mortem Machine

The static supplier graph needs to become dynamic. 10-K filings disclose supplier dependencies in machine-readable form, and third-party providers like Resilinc maintain structured supplier relationship databases. That's the next engineering priority.

Beyond that, the Canary ranking gets more accurate with more historical data. Right now it's seeded from 20 validated events; at 200 events it starts to be genuinely predictive rather than merely indicative.

The commercial path is two tracks running in parallel. CPG procurement teams pay for early warning so they can reorder from alternative suppliers before a shortage materialises. Long-short equity funds pay for the fragility score as an alternative data feed. The Corvera integration is a specific conversation we want to have: their stockout detection works at the shelf level, ours works at the company level three weeks earlier. Those are complementary, not competing.

Built With

  • adzuna-api
  • fastapi
  • fred-api
  • mediawiki-api
  • openfda-api
  • python
  • pytrends
  • react
  • recharts
  • roboto-mono
  • sec-edgar
  • vite