Inspiration

Bias in algorithmic loan approvals can silently harm underrepresented groups, especially in financial services. We wanted to build a responsible AI system that not only detects bias but actively reduces it — with full transparency and real-time explainability.

What it does

FairLoans is an AI-powered auditing and mitigation pipeline that:

  • Detects bias in historical loan approval data using Fairlearn
  • Applies Exponentiated Gradient mitigation with Demographic Parity constraints
  • Visualizes fairness metrics like demographic parity difference, equalized odds, and selection rate
  • Provides SHAP explainability to demystify model behavior
  • Deploys a Streamlit Dashboard for interactive simulation, fairness auditing, and test predictions

How we built it

  • Exploratory analysis using Pandas, Seaborn, and Matplotlib
  • Fairness detection & mitigation with Fairlearn
  • Model training using XGBoost and fairness-constrained algorithms
  • Interpretability via SHAP values and summary plots
  • Frontend dashboard in Streamlit with upload, visualize, and predict modules
  • Submission pipeline for real-world test set predictions

Challenges we ran into

  • Handling fairness–accuracy trade-offs in real-world imbalanced datasets
  • Encoding categorical variables consistently across test/train for reliable results
  • Ensuring transparency without sacrificing performance
  • Explaining decisions clearly for non-technical users

Accomplishments that we're proud of

  • Successfully detected and quantified bias in a real-world loan approval dataset
  • Implemented fairness mitigation using Exponentiated Gradient with Demographic Parity constraints
  • Reduced the equalized odds difference from 0.17 to 0.09 without a severe loss in model accuracy
  • Developed a fully functional Streamlit Dashboard that supports:
    • Interactive fairness audits
    • SHAP-based explainability
    • Real-time loan approval simulation
    • Final test-set predictions
  • Delivered a clean and reproducible codebase with modular scripts for training, explanation, and prediction
  • Generated a submission-ready CSV of test-set predictions from the fairness-mitigated model for Devpost
  • Deployed the entire solution live using Streamlit Cloud, making it publicly accessible
  • Learned to balance performance, fairness, and interpretability — the trifecta of responsible AI

What we learned

  • Fairness is not just a metric — it’s a design choice that affects real lives
  • SHAP can be powerful in making AI models explainable and auditable
  • Tools like Fairlearn and AIF360 can help bridge ethical AI and engineering

What's next for FairLoans – Debiasing Loan Approval Models for Responsible AI

  • Extend FairLoans to other sensitive domains (e.g., hiring, healthcare)
  • Build an enterprise-grade SaaS dashboard for fairness audits
  • Enable continuous monitoring of model drift and fairness over time

Built With

  • Python
  • Fairlearn
  • XGBoost
  • SHAP
  • Streamlit
  • Pandas, Seaborn, and Matplotlib
