Inspiration

Recruiters increasingly rely on automated screeners, yet most tools are opaque, cloud-locked, and can amplify bias. I wanted a resume screener that anyone can run locally, that explains its decision, and that checks itself for bias—so humans stay in the loop.

What it does

GPT-OSS Resume Scorer compares a PDF resume with a job description and returns:

• A relevance score
• A short summary
• Evidence of matching skills/experience
• Risks / missing skills
• A Bias Compare view that re-scores an anonymised version (name, email, phone, etc. removed) and shows the delta

Everything runs offline using open models.

How we built it

• Frontend: React + Vite. Simple upload form with buttons for Upload & Score, Bias Compare, and Warm Up Model.
• Backend: FastAPI with routes for /resume/upload, /resume/compare, /resume/warmup, /health.
• Inference: Ollama running llama3.2:3b (local LLM). Strict JSON prompts, small contexts, sane timeouts.
• Parsing: PyMuPDF (fitz) to extract clean text from PDFs; prioritises sections like Summary, Skills, and Experience.
• Bias check: Lightweight anonymiser (emails, phones, obvious names) + single-call compare prompt for speed.
• DX: Warmup endpoint to avoid first-token latency during demos.
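The section-prioritised truncation step can be sketched roughly as below. In the real pipeline PyMuPDF (fitz) supplies the raw text; the heading list, splitting heuristic, and character budget here are illustrative assumptions, not the project's actual code:

```python
# Keep high-signal resume sections first, so truncation drops boilerplate
# rather than skills. Heading names and the 4000-char budget are assumptions.
PRIORITY_SECTIONS = ["summary", "skills", "experience"]

def prioritise_sections(text: str, max_chars: int = 4000) -> str:
    """Reorder sections so that priority ones survive truncation."""
    # Naive split: a new section starts at a line matching a known heading;
    # everything before the first heading lands in "_other".
    sections: dict[str, list[str]] = {"_other": []}
    current = "_other"
    for line in text.splitlines():
        key = line.strip().lower().rstrip(":")
        if key in PRIORITY_SECTIONS:
            current = key
            sections[current] = []
        sections.setdefault(current, []).append(line)
    # Emit priority sections first, then the rest, then truncate.
    ordered = [s for s in PRIORITY_SECTIONS if s in sections] + ["_other"]
    merged = "\n".join("\n".join(sections[s]) for s in ordered if sections.get(s))
    return merged[:max_chars]
```

Putting the prioritised text ahead of the cut-off means a hard character limit (needed for a small `num_ctx`) costs far less signal than truncating the raw PDF order.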
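The lightweight anonymiser can be approximated with a couple of regexes plus a first-line heuristic for the candidate's name. The patterns, placeholders, and function name below are illustrative assumptions, not the project's actual code:

```python
# Minimal anonymiser sketch: strip emails and phone numbers, and blank the
# first line, which on most resumes holds the candidate's name.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymise(text: str) -> str:
    """Replace obvious PII with neutral placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    lines = text.splitlines()
    if lines:
        # Heuristic: treat the first line as the name.
        lines[0] = "[NAME]"
    return "\n".join(lines)
```

Because the anonymised text is produced locally and deterministically, the delta between the two scores can be attributed to the removed identifiers rather than to sampling noise.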

Challenges we ran into

• Deterministic JSON from small models: solved with minimal templates, temperature≈0.1, and output keys validated in Python.
• PDF variability: resumes come in every format; added fallbacks and section prioritisation before truncation.
• Latency on first call: added warmup and conservative num_predict/num_ctx to keep responses snappy on an M1 Pro.
• Bias compare speed: merged into one inference call to halve round-trip time.
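The strict-JSON guardrail from the first bullet can be sketched as follows. The required key names, the fallback shape, and the exact Ollama option values are assumptions for illustration; only `temperature`, `num_predict`, and `num_ctx` come from the writeup itself:

```python
# Parse the model's reply, require the expected keys, and fall back to a
# safe default when a small model drifts from the template.
import json

# Illustrative Ollama options: near-zero temperature for determinism,
# bounded generation length and context window for latency.
OLLAMA_OPTIONS = {"temperature": 0.1, "num_predict": 512, "num_ctx": 4096}

REQUIRED_KEYS = {"score", "summary", "evidence", "risks"}

def parse_model_reply(raw: str) -> dict:
    """Validate the LLM's JSON output; return a safe fallback on failure."""
    fallback = {"score": 0, "summary": "", "evidence": [], "risks": [],
                "error": "model returned malformed JSON"}
    try:
        # Small models sometimes wrap JSON in prose; grab the outermost braces.
        start, end = raw.index("{"), raw.rindex("}") + 1
        data = json.loads(raw[start:end])
    except ValueError:  # covers both .index() misses and JSONDecodeError
        return fallback
    if not REQUIRED_KEYS.issubset(data):
        return fallback
    return data
```

Validating keys in Python rather than trusting the prompt means a malformed reply degrades to an explicit error object instead of crashing the frontend.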

Accomplishments that we're proud of

• Fully local and open-source stack—no PII leaves the laptop.
• Clear, explainable outputs (evidence + risks), not just a number.
• Bias-aware workflow that is practical enough for real reviews.

What we learned

• With careful prompting, small OSS models can be surprisingly effective for structured, explainable tasks.
• “Fairness features” are most useful when they’re one click and fast; otherwise they won’t be used.
• Developer ergonomics (warmup, robust error messages) matter a lot in demos and real use.

What's next for FairHire: GPT-OSS Resume Scorer

• Stronger anonymisation (locations, organisations, gendered hints).
• Batch/multi-JD scoring and CSV export.
• Pluggable templates per role (DevOps, Data, Mobile, etc.).
• Optional cloud acceleration while keeping local-first as the default.

Ethics & privacy

• Runs fully offline by default.
• Designed to augment, not replace, human judgment.
• Bias compare is a signal, not a veto; reviewers should interpret deltas thoughtfully.

This project shows how open models can deliver fairer, faster, and more transparent hiring—without sending private resumes to the cloud.
