Inspiration

I’ve always loved AI, development, and the creative challenge that comes with hackathons. I’m passionate about building smart solutions with real-world impact, and this project was a great way to push my skills further, especially in fairness and model explainability.

What I built

I built a loan approval classifier and audited it for gender bias. Using SHAP, I visualized which features drove each decision. Then I applied Fairlearn’s Exponentiated Gradient algorithm to reduce bias. The final model improved on the fairness metrics while staying accurate.
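The audit step can be sketched roughly like this. The data here is a synthetic stand-in, not the real loan dataset, and the feature names and the induced income gap are illustrative assumptions; the point is measuring per-group approval rates after training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)               # 0 = female, 1 = male (sensitive attribute)
income = rng.normal(50 + 5 * gender, 10, n)  # built-in disparity so bias is visible
debt = rng.normal(20, 5, n)
# Ground-truth approval depends only on income and debt
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)

X = np.column_stack([income, debt])
clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)

acc = (pred == y).mean()
# Selection rate per group: P(approved | gender)
rate_f = pred[gender == 0].mean()
rate_m = pred[gender == 1].mean()
dp_diff = abs(rate_m - rate_f)               # demographic parity difference
print(f"accuracy={acc:.2f}  rates: F={rate_f:.2f} M={rate_m:.2f}  DP diff={dp_diff:.2f}")
```

Even though gender is not a model input here, the correlated income feature produces a gap in selection rates, which is exactly the kind of invisible bias the audit is meant to surface.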

How I built it

I used Python in a Google Colab notebook, trained a logistic regression model, and applied SHAP for explainability. Then I applied bias mitigation with Fairlearn. Every step was measured and visualized to ensure the changes truly improved fairness without harming accuracy.
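For a linear model like logistic regression, the per-feature attributions that SHAP plots display can be reproduced by hand: assuming independent features, each feature's contribution in log-odds space is its coefficient times its deviation from the dataset mean. This minimal sketch uses synthetic data (not the real loan set) to show that relationship.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -1.0, 0.5]) + rng.normal(0, 0.5, 500) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
base = X.mean(axis=0)  # baseline: the "average" applicant

def linear_attributions(x):
    """Per-feature contribution to the log-odds, relative to the dataset mean."""
    return clf.coef_[0] * (x - base)

x = X[0]
attr = linear_attributions(x)
# Sanity check: attributions sum to this sample's log-odds minus the baseline's
logit = clf.decision_function([x])[0]
base_logit = clf.decision_function([base])[0]
print(attr, attr.sum(), logit - base_logit)
```

That additivity (attributions summing exactly to the prediction's distance from the baseline) is the property that makes SHAP-style plots trustworthy for explaining individual loan decisions.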

Challenges

Getting SHAP and Fairlearn to work together took careful testing. Fairness metrics like Equalized Odds and Demographic Parity also took time to fully understand.
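The two metrics are easier to internalize as a few lines of arithmetic. This toy illustration (made-up binary predictions and a binary sensitive attribute) mirrors what Fairlearn computes: Demographic Parity compares overall approval rates across groups, while Equalized Odds compares error rates (TPR and FPR).

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # binary sensitive attribute

def demographic_parity_diff(y_pred, group):
    """|P(pred=1 | g=0) - P(pred=1 | g=1)|: are approval rates equal overall?"""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in TPR or FPR across groups: are error rates equal too?"""
    gaps = []
    for label in (0, 1):  # label 0 gives the FPR gap, label 1 the TPR gap
        m = y_true == label
        r0 = y_pred[m & (group == 0)].mean()
        r1 = y_pred[m & (group == 1)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)

print(demographic_parity_diff(y_pred, group))     # 0.25
print(equalized_odds_diff(y_true, y_pred, group)) # 0.5
```

Here the model satisfies neither criterion, and the two disagree on how bad things are: the groups differ modestly in approval rate but sharply in false-positive rate, which is why Equalized Odds can flag a model that Demographic Parity nearly passes.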

What I learned

I learned how real-world AI can carry invisible bias, and how to use tools to fix it. Most importantly, I saw how data science can actually build trust and transparency when done responsibly.
