Inspiration

Career centers in Fort Worth see the same pattern every day. Someone walks in needing a job, but it's never just about the job. No car means no commute. No childcare means no interview. A criminal record means half the listings don't apply. Taking the wrong job triggers a benefits cliff that leaves a worker poorer than they were before the raise.

Career-center staff cross-reference benefits thresholds, transit timetables, fair-chance hiring policies, criminal-record statutes, and credit thresholds by hand, per resident, every day. Then the next person walks in and they start over.

We built GoWork because that work is too important to keep doing by hand — and because the missing piece isn't a chatbot. It's the math.


What it does

A resident answers a guided assessment across seven barrier types — credit, transportation, childcare, housing, health, training, criminal record — and walks out with a same-day, personalized case file:

  • Practical Value Score ranking jobs by what actually matters
  • Benefits-cliff detection showing where a raise nets a loss
  • Criminal-record routing through Texas Article 55 expunction and Government Code Chapter 411 nondisclosure — the dual-pathway system most tools miss
  • Real resume → job semantic matching against 64 Fort Worth jobs and 36 local resources, fair-chance employers first
  • Multi-provider AI narrative with Claude, OpenAI, Gemini, and a local Qwen 2.5 14B fallback via Ollama
  • Career Center Ready Package: a printable two-part PDF — staff briefing on top, resident action plan below

The case file is what staff would build by hand, generated in under three seconds.


How we built it

Frontend: Next.js 15 (App Router) · React · TypeScript · Tailwind · shadcn/ui · Mapbox GL.

Backend: FastAPI on Python 3.13 · async SQLAlchemy · SQLite · FAISS vector store · 33-node barrier graph (DAG).

AI: Multi-provider router (Anthropic Claude, OpenAI, Google Gemini, local Qwen 2.5 14B via Ollama) with deterministic fallback.

Data sources: Texas Workforce Commission · USAJobs · BrightData · Honest Jobs · Trinity Metro · HHSC · Tarrant County records.

Tests: 1,391 backend (pytest) · 417 frontend (Vitest) · 100% deterministic — every test passes without any LLM call.

The math that earns its place in the product

The Practical Value Score is a weighted combination of four normalized signals:

$$ \mathrm{PVS}(\text{job}, \text{user}) = 0.35 \cdot \mathrm{NetIncome} + 0.25 \cdot \mathrm{Proximity} + 0.20 \cdot \mathrm{Schedule} + 0.20 \cdot \mathrm{BarrierFit} $$
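In code, the weighting is a few lines. The sketch below is illustrative — the function name, signal inputs, and the clamping helper are assumptions, not the shipped implementation:

```python
# Hypothetical sketch of the PVS weighting; each signal is assumed to be
# pre-normalized, and clamping guards against out-of-range inputs.
def practical_value_score(net_income: float, proximity: float,
                          schedule: float, barrier_fit: float) -> float:
    """Weighted combination of four signals, each normalized to [0, 1]."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return (0.35 * clamp(net_income)
            + 0.25 * clamp(proximity)
            + 0.20 * clamp(schedule)
            + 0.20 * clamp(barrier_fit))
```

Because the weights sum to 1, a job that maxes every signal scores 1.0 and the score stays directly interpretable as "fraction of best possible fit."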

The benefits cliff is where the product earns its keep. Most tools model post-tax income as a smooth function of wage:

$$ \text{naive\_net}(w) = w \cdot h - \tau(w) $$

That hides the cliff. The real model has stair-step phase-outs across SNAP, Medicaid, CHIP, child-care subsidy, and TANF:

$$ \text{net}(w) = w \cdot h - \tau(w) + \sum_{i \in \mathcal{B}} b_i(w) $$

where each $b_i(w)$ is a piecewise-linear benefit with discontinuous drops at program-specific thresholds. A cliff wage $w^{*}$ is any wage where the derivative of net income is negative even though the wage is rising:

$$ \exists\, w^{*} : \quad \frac{d\,\text{net}}{dw}\bigg|_{w^{*}} < 0 \quad \text{while} \quad \frac{dw}{dw} = 1 $$

For Carlos, our Fort Worth reference persona, the model finds a cliff at $w^{*} \approx \$18.50$/hr — a two-dollar raise from $\$16.50$ that nets a $\$400$/month loss once SNAP, child-care subsidy, and Medicaid phase out. We surface that cliff before the offer is signed.
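A minimal version of the cliff detector can be sketched as follows. The flat 10% tax and the single $500/month stair-step benefit are placeholders for illustration — the real model uses actual SNAP, Medicaid, CHIP, child-care subsidy, and TANF thresholds:

```python
# Minimal cliff-detection sketch. Tax rate and benefit schedule are
# illustrative placeholders, not real program thresholds.
HOURS_PER_MONTH = 160

def benefit(w: float) -> float:
    """Stair-step benefit: full amount below the threshold, zero above."""
    return 500.0 if w < 18.50 else 0.0   # hypothetical $500/mo program

def net(w: float) -> float:
    gross = w * HOURS_PER_MONTH
    tax = 0.10 * gross                    # flat placeholder tax
    return gross - tax + benefit(w)

def find_cliffs(lo: float, hi: float, step: float = 0.25) -> list[float]:
    """Wages where a raise of `step` nets a loss: net(w + step) < net(w)."""
    cliffs, w = [], lo
    while w + step <= hi:
        if net(w + step) < net(w):
            cliffs.append(round(w + step, 2))
        w += step
    return cliffs
```

Scanning a wage grid and comparing adjacent points finds the discontinuities directly; no smoothing, so no hidden cliffs.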

Methodology

GoWork was built using PairCoder, our enforcement-driven AI pair-programming framework. Every feature started as a failing test. The barrier graph, PVS scoring, benefits-cliff calculations, and expunction/nondisclosure routing all have deterministic test coverage because the enforcement layer required it before any code shipped.


Challenges we ran into

Migrating the reference deployment from Montgomery, AL to Fort Worth, TX without breaking the city-agnostic architecture. Tests had to assert behavior, not literal Montgomery values. We pulled hard-coded city defaults into a per-request city_context layer and let each test inject its own city.
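The shape of that layer can be sketched like this — the field names, city slugs, and values are illustrative stand-ins, not the shipped schema:

```python
# Hypothetical sketch of the per-request city_context layer. Tests inject
# their own CityContext rather than asserting literal city values.
from dataclasses import dataclass

@dataclass(frozen=True)
class CityContext:
    name: str
    state: str
    transit_agency: str

# Illustrative registry; real deployments load this from YAML + seed data.
CITIES = {
    "fort-worth": CityContext("Fort Worth", "TX", "Trinity Metro"),
    "montgomery": CityContext("Montgomery", "AL", "The M"),
}

def get_city_context(slug: str = "fort-worth") -> CityContext:
    """Resolved once per request; everything downstream takes it as input."""
    return CITIES[slug]
```

Because every downstream function receives a `CityContext` instead of reading a module-level constant, a test can pass a synthetic city and assert behavior, not Montgomery-specific strings.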

Modeling cliffs as discontinuous functions instead of smoothing them away. Real-world benefit programs phase out in steps, not curves. Smoothing hides the cliff — exactly the thing we're trying to surface. We model each program with its actual stair-step thresholds and detect the local minima of $\text{net}(w)$ explicitly.

Routing criminal records across two Texas pathways. Article 55 expunction and Chapter 411 nondisclosure have overlapping but distinct eligibility rules — offense class, time elapsed, sentence completion, prior record, and specific statutory carve-outs. Conflating them is the most common mistake in workforce tooling. We model them as two independent evaluations and present the dual result.
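The dual-evaluation shape can be sketched as follows — the eligibility predicates here are deliberate oversimplifications for illustration; the real Article 55 and Chapter 411 rules involve offense class, waiting periods, priors, and statutory carve-outs:

```python
# Illustrative dual-pathway screen. The predicates are placeholders,
# NOT legal advice or the production rule set.
from dataclasses import dataclass

@dataclass
class Record:
    offense_class: str          # e.g. "misdemeanor-b"
    years_since_sentence: int
    sentence_completed: bool
    was_acquitted: bool

def article_55_expunction(r: Record) -> bool:
    # Placeholder: expunction broadly targets acquittals and dismissals.
    return r.was_acquitted

def chapter_411_nondisclosure(r: Record) -> bool:
    # Placeholder: nondisclosure targets completed sentences after a wait.
    return r.sentence_completed and r.years_since_sentence >= 2

def screen(r: Record) -> dict[str, bool]:
    """Two independent evaluations, presented side by side as a dual result."""
    return {
        "expunction": article_55_expunction(r),
        "nondisclosure": chapter_411_nondisclosure(r),
    }
```

Keeping the two evaluations in separate functions means a record can qualify for one, both, or neither — the conflated "you have a record" branch never appears.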

Real resume → job semantic matching without leaking PII. The matcher runs in-process against a per-request embedding cache; resume text never leaves the request scope, never touches the LLM provider, never persists to disk.
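One way to keep embeddings request-scoped, sketched with a deterministic stand-in for the real FAISS-backed embedding model:

```python
# Sketch of a request-scoped embedding cache. embed() is a stand-in for
# the real model call; nothing here outlives the request or touches disk.
import hashlib

def embed(text: str) -> list[float]:
    # Placeholder embedding: deterministic pseudo-vector from a hash.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:8]]

class RequestScope:
    """Holds resume embeddings only for the lifetime of one request."""
    def __init__(self) -> None:
        self._cache: dict[str, list[float]] = {}

    def embedding(self, text: str) -> list[float]:
        if text not in self._cache:
            self._cache[text] = embed(text)
        return self._cache[text]
    # The scope object is garbage-collected with the request, so resume
    # text and its vectors never persist or reach an external provider.
```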

Bundle size on a chapter-driven scrollytelling home page with Mapbox + 3D barrier graph. We lazy-load the heavy components, gate non-critical chapters behind IntersectionObserver, and keep first-load JS under 175 KB.

Multi-provider LLM routing without a vendor lock-in. The provider adapter normalizes streaming chunks across Claude, OpenAI, Gemini, and an OpenAI-compatible local Qwen via Ollama — same downstream interface, four different SDK shapes upstream.
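The normalization idea can be sketched with simplified dict-shaped chunks standing in for the four SDK shapes (the real adapters handle richer streaming objects):

```python
# Sketch of a chunk-normalizing adapter. Each extractor mimics one
# provider's chunk shape, simplified to plain dicts for illustration.
from typing import Callable, Iterable, Iterator

EXTRACTORS: dict[str, Callable[[dict], str]] = {
    "claude": lambda c: c.get("delta", {}).get("text", ""),
    "openai": lambda c: c["choices"][0]["delta"].get("content", ""),
    "gemini": lambda c: c.get("text", ""),
    "ollama": lambda c: c.get("message", {}).get("content", ""),
}

def normalize_stream(provider: str, chunks: Iterable[dict]) -> Iterator[str]:
    """Yield plain text deltas regardless of upstream chunk shape."""
    extract = EXTRACTORS[provider]
    for chunk in chunks:
        if text := extract(chunk):
            yield text
```

Downstream code consumes one iterator of strings; swapping providers is a dictionary lookup, not a rewrite.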


Accomplishments that we're proud of

  • Same-day case file. The deliverable a resident walks out with is the same one staff would build by hand — generated in under three seconds.
  • Zero-PII assessment flow. No accounts, no SSN, no hard credit pull. Self-reported credit. Session-scoped storage with a 30-day expiry.
  • City-agnostic architecture. Plugging in a new city is a YAML + seed-data exercise, not a code change. Two cities deployed today.
  • Multi-provider LLM with offline fallback. The product works fully offline. The mock provider generates city-correct prose without any API call, and the local Qwen path gives us a free, private inference layer.
  • Dual-pathway expungement screening for Texas Article 55 and Chapter 411 — the first workforce tool we know of that models them as two independent evaluations instead of one fuzzy "you have a record" branch.
  • 1,800 deterministic tests. Every behavior — barrier graph traversal, PVS scoring, cliff detection, expungement routing — is covered by tests that don't need an LLM. The AI is frosting on a deterministic core.
  • Open source, MIT licensed. Built to be forked by the next workforce program in the next city.

What we learned

  • Most "AI for workforce" tools are job boards with a chatbot. The actual work is the math and the routing. The LLM is the last 5%, not the first.
  • Local-model inference (Qwen 2.5 14B via Ollama) is good enough for narrative generation when paired with FAISS-backed RAG and a tight system prompt — and it's free.
  • The barrier graph (resolve one → three others move within reach) is the most underused mental model in this space. Most tools treat barriers as a flat list. They aren't. They're a directed graph with conditional edges.
  • A printable PDF that fits in a folder still beats any web app for the case-worker handoff. Career-center staff don't want a login. They want a briefing.
  • Cliffs aren't curves. The single most important change to the income model was refusing to smooth the discontinuities away.
  • Enforcement-driven development (failing test → minimal code → refactor) produced a deterministic core that the AI layer sits cleanly on top of.
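The "resolve one → others move within reach" model reduces to a small graph traversal. The edges below are illustrative, not the real 33-node graph:

```python
# Toy sketch of the barrier graph idea: resolving one barrier can bring
# others within reach. Edge set is illustrative only.
BARRIER_EDGES: dict[str, list[str]] = {
    # resolving key -> these dependents become reachable
    "transportation": ["childcare", "training"],
    "childcare": ["training"],
    "credit": ["housing"],
}

def newly_reachable(resolved: set[str]) -> set[str]:
    """Barriers unlocked by the currently resolved set."""
    reachable: set[str] = set()
    for barrier in resolved:
        reachable.update(BARRIER_EDGES.get(barrier, []))
    return reachable - resolved
```

A flat checklist can't express that resolving transportation unlocks two other barriers at once; a directed graph with conditional edges can.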

What's next for GoWork

  • Deploy four more Texas cities in the next two months — Dallas, Houston, Austin, San Antonio. Each is a YAML + seed-data drop.
  • Pilot with one Fort Worth career center to measure throughput delta vs. manual workflow.
  • Real-time job-feed deduplication across Texas Workforce Commission, USAJobs, and Honest Jobs.
  • Spanish parity sweep — the live URL already supports ?locale=es on every page; the next pass is voiceover and localized resource metadata.
  • Outcome instrumentation so the next iteration of the cliff model trains on real wage-trajectory data instead of static thresholds.
  • Case-worker portal: same engine, surfaced as a queue + assignment dashboard for staff. The PDF stays — it's the handoff — but the staff view becomes a workspace.

Built With

  • anthropic-claude
  • apscheduler
  • brightdata
  • docker
  • eslint
  • faiss
  • fastapi
  • github-actions
  • google-gemini
  • honest-jobs
  • html2pdf.js
  • javascript
  • lighthouse-ci
  • mapbox-gl-js
  • next.js
  • ollama
  • openai
  • playwright
  • pydantic
  • pytest
  • python
  • qrcode.react
  • qwen-2.5
  • rag
  • react
  • ruff
  • shadcn/ui
  • sqlalchemy
  • sqlite
  • tailwind-css
  • texas-workforce-commission-api
  • typescript
  • usajobs-api
  • uvicorn
  • vercel
  • vitest