Inspiration
Hiring should be fair — but in reality, hidden biases still influence who gets shortlisted or rejected. While building AI systems in other domains, we realized recruitment AI itself inherits human and historical bias. This inspired us to build a system that protects opportunity, ensures fairness, and keeps hiring truly skill-based. FairHire 360 was born from a simple belief: talent should never lose because of bias.
What it does
FairHire 360 is a bias-detection and fairness-intelligence system for recruitment. It:
- Detects gender, age, college, region, and experience bias in resumes, job descriptions (JDs), and interviews
- Creates counterfactual candidates (removing gender/college/location) to test fairness; a sketch of the idea follows this list
- Generates a FairScore that ranks candidates purely on skills
- Profiles recruiter decision patterns to identify unconscious bias
- Produces real-time fairness heatmaps and compliance-ready audit reports
- Suggests a de-biased, explainable shortlist with transparent reasoning
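To make the counterfactual-candidate idea concrete, here is a minimal Python sketch: mask demographic cues in a resume and compare the model's score before and after. The regex patterns and the `score_resume` callable are illustrative assumptions, not FairHire 360's actual pipeline; the real system builds semantics-preserving twins rather than crudely redacting tokens.

```python
# Minimal sketch of counterfactual "twin" testing (illustrative only).
import re

# Hypothetical demographic cues; the real system uses NER, not regexes.
DEMOGRAPHIC_PATTERNS = {
    "gender": r"\b(he|she|him|her|his|hers|male|female)\b",
    "college": r"\b(IIT|NIT|MIT|Stanford)\b",
    "location": r"\b(Mumbai|Delhi|Bangalore|New York)\b",
}

def make_counterfactual(resume_text: str) -> str:
    """Return a 'twin' resume with demographic cues masked out."""
    twin = resume_text
    for pattern in DEMOGRAPHIC_PATTERNS.values():
        twin = re.sub(pattern, "[REDACTED]", twin, flags=re.IGNORECASE)
    return twin

def fairness_gap(resume_text: str, score_resume) -> float:
    """Score the original and its twin; a large gap flags demographic leakage.

    `score_resume` is any callable mapping resume text to a numeric score.
    """
    return abs(score_resume(resume_text) - score_resume(make_counterfactual(resume_text)))
```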
How we built it
We designed a multi-agent AI architecture, with each agent handling a specific fairness function:
- Parsing Agent: Extracts structured information from resumes, JDs, and interviews using spaCy, LayoutLM, and custom NER models
- Bias Detection Agent: Uses AIF360, Fairlearn, SHAP/LIME, statistical methods, and embedding-based similarity to detect multi-dimensional bias (a minimal example follows this list)
- Fair Scoring Agent: Applies our Skill-Based Normalized Ranking (SBNR) to remove demographic influence and produce an unbiased candidate ranking
- Governance Agent: Builds fairness heatmaps, bias trend reports, and legal-compliance PDFs
- Frontend + Infra: React, Tailwind, D3.js, FastAPI, Pinecone/FAISS, PyTorch, Docker
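As a minimal example of one check the Bias Detection Agent could run, the snippet below uses Fairlearn (from the stack above) to compute the demographic parity difference in shortlisting rates and per-group recall. The toy DataFrame is invented for illustration; the production agent runs many such metrics across bias dimensions.

```python
# Minimal Fairlearn-based bias check on toy shortlisting data.
import pandas as pd
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import recall_score

df = pd.DataFrame({
    "qualified":   [1, 0, 1, 1, 1, 0, 1, 1],   # ground-truth suitability
    "shortlisted": [1, 0, 1, 1, 0, 0, 1, 0],   # model decision
    "gender":      ["F", "F", "M", "M", "F", "M", "M", "F"],
})

# Gap in shortlisting rates between groups (0.0 = perfect parity).
dpd = demographic_parity_difference(
    df["qualified"], df["shortlisted"], sensitive_features=df["gender"]
)
print(f"demographic parity difference: {dpd:.2f}")

# Per-group recall: are qualified candidates found equally often in each group?
mf = MetricFrame(metrics=recall_score, y_true=df["qualified"],
                 y_pred=df["shortlisted"], sensitive_features=df["gender"])
print(mf.by_group)
```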
Challenges we ran into
- Designing bias-detection models that work across structured and unstructured data
- Ensuring SHAP-based explainability for complex pipelines (see the sketch after this list)
- Building counterfactual candidate twins that preserve semantics
- Handling multilingual resumes and diverse job roles
- Creating a recruiter bias profiler without exposing personal identity
- Balancing fairness constraints against real-world hiring accuracy
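For the SHAP challenge above, the pattern boils down to: fit the scoring model, then inspect per-candidate feature attributions for demographic proxies. A stripped-down sketch with hypothetical features (the real schema comes from the Parsing Agent):

```python
# Stripped-down SHAP attribution check (toy model and features).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))  # hypothetical columns: skills, experience, college_tier
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # toy label driven by skills + experience

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer gives exact, fast attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# If college_tier (column 2) showed large attributions despite not driving
# the label, that's the proxy bias the pipeline is meant to surface.
print(shap_values)
```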
Accomplishments that we're proud of
- Achieved up to a 90% reduction in bias signals during shortlisting tests
- Built a recruiter behavior bias analyzer, a feature rarely seen in HR tech
- Created an explainable AI pipeline with transparent, skill-first scoring
- Designed a fairness audit engine that generates professional, compliance-ready reports
- Built an intuitive dashboard for HR teams with real-time bias visualization
- Demonstrated strong technical depth using counterfactual fairness and adversarial debiasing (a minimal sketch of the adversarial idea follows this list)
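The adversarial debiasing in the last bullet follows the standard predictor-versus-adversary pattern. Below is a minimal PyTorch sketch of that idea, with illustrative shapes and names rather than our production training loop: an adversary tries to recover the protected attribute from the scorer's internal representation, and the scorer is penalized whenever it succeeds.

```python
# Minimal adversarial-debiasing sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

N_FEATURES = 64  # hypothetical skill-embedding size

encoder = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU())
score_head = nn.Linear(32, 1)   # predicts shortlist probability (logit)
adversary = nn.Linear(32, 1)    # tries to predict the protected attribute (logit)

opt_main = torch.optim.Adam([*encoder.parameters(), *score_head.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
LAMBDA = 1.0  # fairness vs. accuracy trade-off

def train_step(x, y, a):
    """x: (B, N_FEATURES) features; y, a: (B, 1) float 0/1 labels."""
    # 1) Train the adversary to recover `a` from the frozen representation.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(encoder(x).detach()), a)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the scorer to predict `y` while *fooling* the adversary:
    #    subtracting the adversary's loss pushes the representation to
    #    carry no recoverable demographic signal.
    opt_main.zero_grad()
    h = encoder(x)
    loss = bce(score_head(h), y) - LAMBDA * bce(adversary(h), a)
    loss.backward()
    opt_main.step()
```

Raising `LAMBDA` trades task accuracy for representations carrying less demographic signal, which is exactly the fairness/accuracy balance noted under challenges.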
What we learned
- Bias is layered: statistical, semantic, behavioral, and historical
- Explainability matters more than accuracy in ethical AI
- Multi-agent architectures reduce failure points and increase modularity
- Fairness constraints require balancing between models and human workflows
- Ethical AI requires domain knowledge, tech depth, and policy awareness together
What's next for FairHire 360 — Bias-Free AI Recruitment System
- AI-generated interview analysis to detect micro-bias and tone-based scoring distortion
- Integration with ATS platforms (Greenhouse, Lever, Naukri, LinkedIn Talent)
- A fairness certification API for HR tech companies and enterprises (one possible shape is sketched after this list)
- Deployment for university placement cells to ensure unbiased student hiring
- Creating an open-source fairness benchmark dataset for the research community
- Scaling the system into a full “Ethical Hiring OS” for organizations globally
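The certification API does not exist yet; as one possible shape, the FastAPI sketch below (FastAPI is already in the stack) accepts per-candidate decisions plus a protected attribute and returns group shortlist rates and a parity gap. The endpoint path, payload, and 0.1 threshold are all assumptions for illustration.

```python
# Hypothetical shape for the planned fairness-certification endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AuditRequest(BaseModel):
    shortlisted: list[int]   # 1 = shortlisted, 0 = rejected, per candidate
    group: list[str]         # protected-attribute value per candidate

@app.post("/v1/fairness-audit")
def fairness_audit(req: AuditRequest) -> dict:
    """Return per-group shortlist rates and a simple parity gap."""
    rates: dict[str, float] = {}
    for g in set(req.group):
        picks = [s for s, grp in zip(req.shortlisted, req.group) if grp == g]
        rates[g] = sum(picks) / len(picks)
    gap = max(rates.values()) - min(rates.values())
    return {"group_rates": rates, "parity_gap": gap, "certified": gap < 0.1}
```

A client would POST its shortlist decisions and receive the parity report back, making the check embeddable in any existing ATS workflow.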
Built With
- aif360
- api
- architecture
- css
- d3.js
- docker
- fairlearn
- faiss
- fastapi
- groq
- langchain
- lime
- multi-agent
- node.js
- pinecone
- postgresql
- python
- pytorch
- react.js
- scikit-learn
- shap
- tailwind
- transformers