Inspiration

I was reading a Reddit thread where someone posted their Amazon termination letter. It said their "time-off-task percentage exceeded the acceptable threshold" and their account had been "flagged by the system." They were asking if they had any options. The replies were mostly "no, nothing you can do."

That bothered me. I started digging.

88% of large companies now use AI to make or influence firing decisions. Amazon fires warehouse workers automatically when productivity drops even 0.1% below a set threshold. IBM laid off thousands using an AI skills-gap model nobody could audit. KPMG tracks keystrokes. Uber deactivates gig workers without a single human ever reading the case.

And here's the thing: the legal rights to fight this already exist. GDPR Article 22 has been enforceable since 2018. The EU AI Act, in force since 2024, classifies algorithmic HR tools as high-risk AI requiring transparency. NYC Local Law 144 mandates annual bias audits of hiring AI. California CRC 2025. Colorado AI Act, effective June 2026. The laws exist. The rights exist. But zero tools help a regular person use them.

I searched GitHub. Zero repos for "algorithmic termination appeal." Zero for "AI firing detection worker." Zero for "GDPR Article 22 generator." Completely empty space.

That's what I built.

What it does

You upload your termination letter (PDF, DOCX, image, or pasted text). Six AI agents run on it, and in under 45 seconds you get five things:

  1. Automated decision detection (F-01 + F-02): scans for 14+ tell-tale phrases used specifically by Workday, SAP SuccessFactors, HireVue, Eightfold AI, Amazon UFM, and gig platform systems: things like "performance score below threshold," "system flagged your account," "time off task exceeded." Outputs a confidence score: "87% probability this was an automated decision." This is forensic AI applied to employment law; nobody has built it for workers before. (A minimal sketch of the phrase-matching idea follows this list.)

  2. Bias pattern analysis (F-03): cross-references the letter against 7 illegal bias patterns: age proxies ("resistant to change," "low digital fluency"), disability proxies ("excessive absences," "inconsistent performance"), and race and language proxies ("communication clarity score," "accent clarity rating"). Every match is flagged with the specific law it violates, cross-referenced against 2024–2025 EEOC case filings.

  3. AI vendor identification (F-10): identifies which HR software fired you from phrase patterns alone (Workday, HireVue, Eightfold, SAP, Amazon UFM) and pulls its known bias lawsuits and audit status. Example output: "Workday ATS detected (93% confidence), 4 known bias lawsuits, no public audit available." Workers never know which system fired them. Lawyers don't know either. GhostWorker figures it out from the language.

  4. A ready-to-send legal appeal letter (F-04): a 4-step LangGraph agent pipeline identifies the two strongest legal grounds → selects applicable laws by jurisdiction → drafts a formal letter with specific statute citations → adds what the company was legally required to do and failed to do. The full letter downloads as a PDF. For the EU: GDPR Article 22 + EU AI Act Article 6. For the US: ADEA, Title VII, ADA, NYC Local Law 144, California CRC 2025. For the UK: UK GDPR + the Equality Act 2010. A lawyer charges $300–500 to write this. GhostWorker does it in 45 seconds for free.

  5. Evidence preservation checklist (F-07): runs the instant a document is uploaded, before anything else, and generates a 12-item timestamped checklist: which emails to save, which dashboards to screenshot, which messages to download. Shows a live 24-hour countdown timer, because account access is genuinely revoked within 24 hours in most enterprise systems. This feature exists to solve that one brutal real-world problem; no other tool has addressed it.
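
To make the detection concrete, here is a minimal sketch of the phrase-matching idea in Python. The phrases and weights below are hypothetical stand-ins, not the real fingerprint database, and the production agents run on an LLM rather than bare regex:

```python
import re

# Hypothetical phrase -> weight table. The real F-01/F-02 database covers
# 14+ phrases per platform; these four are illustrative only.
AUTOMATION_PHRASES = {
    r"flagged by the system": 0.35,
    r"performance score below (the )?threshold": 0.30,
    r"time[- ]?off[- ]?task exceeded": 0.30,
    r"no (human|further) review": 0.20,
}

def automation_probability(letter: str) -> float:
    """Treat each phrase hit as an independent signal and combine them."""
    p_none = 1.0  # probability the letter is NOT automated, so far
    for pattern, weight in AUTOMATION_PHRASES.items():
        if re.search(pattern, letter, re.IGNORECASE):
            p_none *= 1.0 - weight
    return 1.0 - p_none

letter = "Your account was flagged by the system: time off task exceeded limits."
print(f"{automation_probability(letter):.0%} probability of an automated decision")
```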

Beyond the core five:

  • Surveillance Metric Invalidator (F-05): if the letter mentions keystrokes, idle time, or Slack response speed as metrics, generates a scientific rebuttal with peer-reviewed citations showing these have near-zero correlation with actual job performance. The worker gets a document that says "the metrics used to evaluate me have been shown in peer-reviewed research to not predict job performance," with citations. They'd never find this themselves.

  • EU AI Act + US Compliance Checker (F-06): standalone check for EU AI Act Article 6 violations (HR AI is classified as high-risk; fines run up to €15M or 3% of global turnover), NYC Local Law 144, California CRC 2025, and the Colorado AI Act. Auto-generates a pre-filled regulatory complaint the worker can submit directly to their DPA, the EEOC, or the NYC DCWP.

  • GDPR Article 22 Auto-Request (F-11): for EU/UK workers, generates a formal Subject Access Request demanding that the company disclose the algorithm's logic and the data it used. The company is legally required to respond within one month. If they don't: regulatory violation. If they do: they've disclosed their AI system in writing. Either outcome is a win. Most workers, and most lawyers, have never heard of this right.

  • PIP Survival Agent (F-09): for workers currently on a Performance Improvement Plan, not yet fired. Drafts weekly counter-documentation, flags when managers set impossible targets, flags inconsistent PIP application, and builds a legal record week by week. Every other tool is reactive: it helps after you're fired. This is the only proactive one.

  • Case Strength Score + Lawyer Match (F-12): a 0–100 score calculated from automation likelihood, bias signals, whether a vendor was identified, and which laws apply (a scoring sketch follows this list). If the score is above 65, surfaces pro-bono employment law organisations specialising in algorithmic discrimination, by country. Tells the worker, and the lawyer, whether this case is worth fighting.

  • Collective Pattern Aggregator (F-08): with consent, anonymized case data is stored. When 15+ workers at the same company show the same bias pattern, the system flags a potential class action and connects affected workers via opt-in. Example: "23 workers at [Major Tech Corp], all over 47, all scored low on collaboration metrics, all terminated Q1 2026. Same Workday pattern." This is the world's first crowdsourced algorithmic discrimination detector: an individual worker can't see a company-wide pattern. GhostWorker can.

  • Risk Radar Visualization: a pentagon chart showing Automation Likelihood, Bias Risk, Legal Violation Severity, Evidence Urgency, and Case Strength at a glance, so judges watching from across the room see something, not just a wall of text.

  • Multi-jurisdiction + Gig Worker Mode: one dropdown switches the entire legal analysis between EU, US Federal, NYC, California, UK, and India. A separate dedicated flow covers Uber, Deliveroo, and Amazon delivery workers deactivated by algorithm: different phrase patterns, different legal rights. No existing tool covers gig workers at all.
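
Here is a sketch of how an F-12-style score could combine those four signals into 0–100. The weights and formula are illustrative; only the inputs and the 65-point threshold come from the description above:

```python
# Illustrative F-12 scoring. Weight values are assumptions, not the
# production formula; inputs and the 65-point threshold match the write-up.
def case_strength(automation_prob: float, bias_signals: int,
                  vendor_identified: bool, applicable_laws: int) -> int:
    score = (
        40 * automation_prob                # 0.0-1.0 from the detection step
        + 10 * min(bias_signals, 3)         # cap the bias contribution
        + (15 if vendor_identified else 0)  # a named vendor strengthens the case
        + 5 * min(applicable_laws, 3)       # cap the statute contribution
    )
    return min(int(round(score)), 100)

score = case_strength(0.87, 2, True, 3)
print(score, "-> surface pro-bono matches" if score > 65 else "-> below threshold")
```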

How I built it

The first two days were just reading: the EU AI Act full text, GDPR Article 22 case law, EEOC guidelines, all the state-level regulations. I wasn't going to ship something that generates fake statute citations; a wrong citation is worse than no citation. Every law reference is grounded against the actual public legal text, stored in a LanceDB vector database for semantic retrieval.
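
The grounding loop, sketched: embed the public legal text into LanceDB once, retrieve it semantically at draft time, and only cite what retrieval returns. The schema, table name, and toy `embed()` below are illustrative:

```python
import lancedb

def embed(text: str) -> list[float]:
    # Toy stand-in for the real embedding model, just so the sketch runs.
    return [float(ord(c)) for c in text.lower()[:16].ljust(16)]

db = lancedb.connect("./legal_kb")  # path is illustrative
table = db.create_table("statutes", mode="overwrite", data=[
    {"vector": embed("right not to be subject to automated decision"),
     "cite": "GDPR Art. 22",
     "text": "The data subject shall have the right not to be subject to a "
             "decision based solely on automated processing..."},
    # one row per grounded legal provision, embedded from the public text
])

# The drafter may only cite what retrieval actually returns.
for hit in table.search(embed("automated decision no human review")).limit(1).to_list():
    print(hit["cite"], "->", hit["text"][:60])
```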

The hardest part was the vendor fingerprint database. HR software companies don't publish what language their systems use, so I had to reverse-engineer it: I collected real termination letters shared anonymously on public forums, cross-referenced them with known platforms, and built a phrase-pattern database from scratch. Some platforms are very distinctive; some leave almost no trace. For those, I built honest confidence scoring rather than always claiming a match.
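
The shape of the fingerprint lookup, with the honest fallback. The example phrases are invented; the real database came from the collected letters:

```python
# Hypothetical fingerprints -- illustrative phrases only.
VENDOR_FINGERPRINTS = {
    "Amazon UFM": ["time off task", "rate below takt", "adapt policy"],
    "Workday":    ["talent review cycle", "calibrated rating"],
    "HireVue":    ["assessment score", "competency model output"],
}

def identify_vendor(letter: str, min_hits: int = 2):
    letter = letter.lower()
    best, best_hits = None, 0
    for vendor, phrases in VENDOR_FINGERPRINTS.items():
        hits = sum(p in letter for p in phrases)
        if hits > best_hits:
            best, best_hits = vendor, hits
    if best_hits < min_hits:
        return None, "unable to identify with confidence"  # honest uncertainty
    return best, f"{best} ({best_hits} distinctive phrases matched)"
```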

The agent pipeline uses LangGraph: six agents in a supervised chain. The supervisor coordinates everything and skips steps that aren't needed. No vendor found = no vendor section in the letter. No bias detected = discrimination statutes don't get cited. The output stays clean.
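
A stripped-down sketch of that skip logic in LangGraph. Agent names and state fields are illustrative; the real graph has six agents:

```python
from typing import TypedDict, Optional
from langgraph.graph import StateGraph, END

class CaseState(TypedDict):
    letter: str
    bias_hits: list
    vendor: Optional[str]
    appeal: str

def detect(state: CaseState) -> dict:
    # Detection agents fill these in; hardcoded here to show the skip path.
    return {"bias_hits": [], "vendor": None}

def bias_analysis(state: CaseState) -> dict:
    return {}  # only reached when detection flagged something

def draft(state: CaseState) -> dict:
    parts = ["legal grounds", "statute citations"]
    if state["vendor"]:
        parts.append("vendor section")  # no vendor found = no vendor section
    return {"appeal": ", ".join(parts)}

def route(state: CaseState) -> str:
    # Supervisor-style skip: the bias agent runs only if something was flagged.
    return "bias_analysis" if state["bias_hits"] else "draft"

g = StateGraph(CaseState)
g.add_node("detect", detect)
g.add_node("bias_analysis", bias_analysis)
g.add_node("draft", draft)
g.set_entry_point("detect")
g.add_conditional_edges("detect", route,
                        {"bias_analysis": "bias_analysis", "draft": "draft"})
g.add_edge("bias_analysis", "draft")
g.add_edge("draft", END)

print(g.compile().invoke({"letter": "...", "bias_hits": [],
                          "vendor": None, "appeal": ""}))
```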

Stack:

  • LLM: Groq API (Llama 3.3 70B) for the detection agents, where speed matters; Claude Sonnet for the AppealDrafter specifically, where letter quality matters more than latency (see the routing sketch after this list).
  • Agents: LangGraph + LangChain
  • Parsing: PyMuPDF, pdfplumber, python-docx, SpaCy NER
  • Backend: FastAPI + Uvicorn → Render.com free tier
  • Database: Supabase (vendor registry + collective case store) + LanceDB (legal knowledge RAG)
  • Frontend: Next.js + TailwindCSS → Vercel
  • PDF output: ReportLab — real downloadable PDFs, not browser print
  • Visualization: Chart.js for Risk Radar and case timeline
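
The routing mentioned in the LLM bullet, roughly. Model identifiers are placeholders; check the providers' current model names:

```python
from langchain_groq import ChatGroq
from langchain_anthropic import ChatAnthropic

# Fast model for latency-sensitive detection agents, careful model for drafting.
fast_llm = ChatGroq(model="llama-3.3-70b-versatile", temperature=0)
drafting_llm = ChatAnthropic(model="claude-3-5-sonnet-latest", temperature=0.2)

def llm_for(agent: str):
    # Letter quality beats latency for the appeal; everything else stays fast.
    return drafting_llm if agent == "AppealDrafter" else fast_llm
```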

Total infrastructure cost: $0. Everything free tier.

Challenges I ran into

Legal accuracy was the thing that kept me up. A letter citing the wrong statute, or applying EU law to a US case, isn't just wrong; it's harmful. I embedded the actual legal text as grounding material and built jurisdiction detection that has to be right before any statute gets cited.
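
The gate, in miniature: a statute is citable only if it appears in the detected jurisdiction's allow-list, and an unknown jurisdiction cites nothing. The function shape is illustrative:

```python
# Allow-lists mirror the statutes named in this write-up.
STATUTES_BY_JURISDICTION = {
    "EU": ["GDPR Art. 22", "EU AI Act Art. 6"],
    "US": ["ADEA", "Title VII", "ADA", "NYC Local Law 144"],
    "UK": ["UK GDPR Art. 22", "Equality Act 2010"],
}

def citable(statute: str, jurisdiction: str) -> bool:
    # Unknown jurisdiction -> cite nothing, never guess.
    return statute in STATUTES_BY_JURISDICTION.get(jurisdiction, [])

assert not citable("GDPR Art. 22", "US")  # EU law never cited in a US letter
```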

Vendor fingerprinting gaps. Some HR platforms have almost no distinctive language in their output. I had to decide: guess and risk being wrong, or return "unable to identify with confidence." Went with honest uncertainty. Probably not the flashiest demo moment but it was the right call.

The evidence timing problem. I originally had the checklist as the last output. Then I realised that if someone spends 10 minutes reading the analysis first, they might have already lost account access. I restructured so the EvidenceAgent fires first and its output renders immediately while the rest of the pipeline runs in parallel.
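
The restructured ordering, sketched with asyncio. Function names and timings are illustrative:

```python
import asyncio

async def evidence_agent(doc):   # fast: checklist + 24h countdown
    return "12-item evidence checklist"

async def full_analysis(doc):    # slow: detection, bias, vendor, appeal
    await asyncio.sleep(30)
    return "full case analysis"

async def analyze(doc, render):
    analysis_task = asyncio.create_task(full_analysis(doc))  # runs in background
    render(await evidence_agent(doc))  # user sees the checklist immediately
    render(await analysis_task)        # the rest lands when ready

asyncio.run(analyze("letter.pdf", print))
```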

Making the Collective Aggregator demoable. F-08 is the most innovative feature but it's invisible without real user data over time. I seeded Supabase with 50 anonymized cases from the same fictional company showing the same Workday age bias pattern. The "47 other workers at this company show the same pattern — potential class action" moment in the demo actually fires from real data, not a mock.
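
The trigger itself is simple. In production it's a Supabase query; this sketch does the same grouping in plain Python over invented rows:

```python
from collections import Counter

CLASS_ACTION_THRESHOLD = 15

def flag_patterns(cases: list[dict]) -> list[tuple]:
    # Group anonymized cases by (company, vendor, bias pattern) and flag
    # any group that crosses the threshold.
    groups = Counter((c["company"], c["vendor"], c["bias_pattern"]) for c in cases)
    return [(key, n) for key, n in groups.items() if n >= CLASS_ACTION_THRESHOLD]

cases = [{"company": "MegaCorp", "vendor": "Workday",
          "bias_pattern": "age_proxy"}] * 23  # invented seed data
for (company, vendor, pattern), n in flag_patterns(cases):
    print(f"{n} workers at {company}: same {pattern} via {vendor} -> potential class action")
```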

Getting the pipeline under 45 seconds. Early builds took 3–4 minutes. Parallel agent execution where possible, model switching for different agents, caching the legal reference material. Took a lot of iteration.

Accomplishments I'm proud of

The vendor fingerprinting working at all. The idea that you can read a termination letter and identify which software generated it from the language alone felt like it might not be possible when I started.

The appeal letter quality. Had someone with a legal background review three output letters without telling them they were AI-generated. Said they were "more structured and legally coherent than most letters from non-specialist employment lawyers." That was a good moment.

The Surveillance Metric Invalidator. The peer-reviewed research on how useless keystroke tracking is as a performance predictor is genuinely damning. Building something that turns that research into an automatic legal rebuttal felt like giving workers a weapon that exists but nobody told them about.

The Collective Aggregator firing correctly in the demo. That class action moment working live not as a mock was a relief.

What I learned

Rights that exist but are inaccessible might as well not exist. GDPR Article 22 has been law since 2018. Most workers and most HR departments have never heard of it. The gap isn't the law. It's the tool.

Honest confidence scoring matters more than confident-sounding output. The system saying "low confidence, possible match" is more valuable than manufacturing three strong matches. Almost shipped the confident version. Glad I didn't.

The UX of urgency is a real design problem. The 24-hour countdown isn't drama; it's accurate. Getting the tone right for someone who just lost their job and is scared took more iteration than any other single element.

Building something grounded in real law forces precision in a way that building a generic AI tool doesn't. Every vague output decision has a consequence when the output is a legal document someone might actually send.

What's next for GhostWorker

The "What They Knew" Timeline — reconstructing how long the algorithm was tracking the worker silently before termination, from dates and score references in the letter. Showing someone that the AI had been building a case against them for six months before they knew anything is wrong is both a powerful legal exhibit and just important information to have.

Real Company AI Policy Checker: fetch the company's own public statements about how it uses AI in HR decisions, then compare them against what the analysis found. "We use AI to assist, not replace, human judgment" versus "this letter shows zero evidence of human review" is a legal contradiction that can be exposed automatically.

Voice input: a lot of gig workers can't type or upload files. With the Whisper API, in any language, you describe your situation verbally and GhostWorker extracts the relevant facts and builds the case. The rights belong to every worker, not just the ones who can navigate a file upload.

Pro-bono lawyer network: connecting cases scoring above 75/100 directly to employment lawyers who take algorithmic discrimination cases on contingency.

The long game is simple: the Collective Pattern Aggregator gets stronger with every case submitted. Enough workers, same company, same bias pattern, same vendor, and it stops being a tool and starts being a class action filing. That's where this ends up.

Built With

  • chart.js
  • claude-sonnet-api-(anthropic)
  • eeoc-guidelines
  • eu-ai-act
  • fastapi
  • gdpr-article-22
  • groq
  • javascript
  • lancedb
  • langchain
  • langgraph
  • llama-3.3-70b
  • next.js
  • nyclocallaw144
  • openrouterapi
  • pdfplumber
  • postgresql
  • pymupdf
  • python
  • python-docx
  • render.com
  • reportlab
  • spacy
  • supabase
  • tailwindcss
  • typescript
  • uvicorn
  • vercel