Inspiration

As AI increasingly powers high-stakes decisions like loan approvals, I became concerned about how hidden biases in training data can lead to unfair outcomes. I created LoanWatch to explore how machine learning models can be made not just accurate, but also accountable and transparent, especially in domains where fairness is critical.

What it does

LoanWatch is a fairness-aware AI pipeline that:

  • Predicts loan approval outcomes using applicant data
  • Detects bias across sensitive attributes such as race, gender, age group, and disability status
  • Explains predictions using SHAP visualizations and Groq-powered natural language summaries
  • Audits model fairness using tools like Fairlearn and AIF360
  • Visualizes disparities in approval rates, false negatives, and feature importance
  • Offers a comparison between the original and fairness-mitigated models
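The bias-detection step above boils down to comparing approval rates across groups. Below is a minimal sketch of that check using pandas on a tiny synthetic table; the column names (`gender`, `approved`) and data are illustrative, not the actual LoanWatch dataset.

```python
import pandas as pd

def approval_disparity(df, group_col, outcome_col="approved"):
    """Per-group approval rates plus the disparate-impact ratio
    (lowest group rate divided by highest group rate)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates, rates.min() / rates.max()

# Toy synthetic data, purely for illustration
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1,   1,   0],
})
rates, di = approval_disparity(df, "gender")
# A disparate-impact ratio below 0.8 is the common "four-fifths rule" red flag
```

Here `di` comes out to roughly 0.67 (0.5 approval for F vs. 0.75 for M), which would fail the four-fifths rule.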

How I built it

  • Used the provided loan_access_dataset.csv and test.csv for training and evaluation
  • Built the baseline model using XGBoost, then applied Fairlearn's ExponentiatedGradient reduction for in-processing fairness mitigation
  • Conducted bias audits using SHAP, Fairlearn, AIF360, and statistical metrics like Disparate Impact
  • Developed a React-based frontend dashboard to run predictions and display fairness insights
  • Integrated the Groq API to generate plain-language explanations of model decisions
  • Created a comprehensive AI Risk Report and supporting visuals as part of the final submission

Challenges I ran into

  • Identifying intersectional bias that only becomes visible when combining protected attributes
  • Balancing predictive accuracy with fairness constraints, which required iterative tuning
  • Ensuring that SHAP plots remained interpretable across diverse subgroups
  • Addressing data imbalance in underrepresented demographics
  • Designing a UI that clearly communicates fairness insights without overwhelming the user
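The intersectional-bias challenge is easiest to see with a concrete case where every single-attribute audit looks clean but the intersections do not. The toy data below is constructed for illustration (columns and values are hypothetical):

```python
import pandas as pd

# Synthetic example: each single attribute shows a 0.5 approval rate,
# yet the (race, gender) intersections are maximally skewed
df = pd.DataFrame({
    "race":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "gender":   ["F", "F", "M", "M", "F", "F", "M", "M"],
    "approved": [0,   0,   1,   1,   1,   1,   0,   0],
})

single_gender = df.groupby("gender")["approved"].mean()        # F: 0.5, M: 0.5
single_race = df.groupby("race")["approved"].mean()            # A: 0.5, B: 0.5
intersect = df.groupby(["race", "gender"])["approved"].mean()  # 0.0 to 1.0
```

Auditing `race` or `gender` alone shows identical rates, while the intersectional groupby exposes subgroups with 0% and 100% approval, which is exactly the kind of pattern a single-attribute audit misses.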

Accomplishments I'm proud of

  • Built an end-to-end pipeline that covers training, auditing, bias mitigation, and reporting
  • Detected and visualized multiple types of bias, including approval disparities and false-rejection patterns
  • Developed a clean, user-friendly UI for visual bias inspection and real-time predictions
  • Successfully improved fairness metrics while maintaining strong model performance
  • Delivered a polished submission package that includes documentation, visuals, and explainability

What I learned

  • Bias detection must be a core part of the ML lifecycle, not an afterthought
  • Intersectionality reveals patterns that single-attribute analysis often misses
  • Tools like SHAP and Groq can bridge the gap between technical insights and human understanding
  • Ethical AI design requires a balance of data science, UX, and regulatory thinking
  • Continuous fairness evaluation is essential, even after deployment

What's next for LoanWatch

  • Implement adversarial debiasing or causal modeling to improve bias mitigation
  • Expand to real-world datasets and add support for financial APIs
  • Introduce natural-language explanations for loan denials to improve applicant transparency
  • Build a feedback loop to allow continuous model retraining based on fairness signals
  • Package LoanWatch as a compliance-friendly toolkit for fintech platforms and financial institutions
