💡 Inspiration

Healthcare AI is increasingly used to decide who receives critical care—but most systems are optimized only for accuracy. This creates a dangerous blind spot: models can perform well overall while systematically excluding vulnerable populations.

We were inspired by the “accuracy paradox”—where a model can be highly accurate yet still fail entire minority groups. In our analysis, hundreds of marginalized patients were incorrectly deprioritized despite needing care.

FairCare was built to answer one critical question: “Who is being left behind by AI?”

⚙️ What it does

FairCare is a Clinical Bias Audit & Fairness Optimization platform for healthcare AI systems.

It:

- Detects bias using fairness metrics such as demographic parity and equalized odds (sketched below)
- Identifies hidden and proxy discrimination
- Compares accuracy vs. fairness trade-offs in real time
- Suggests optimization strategies to reduce bias
- Generates a Clinical Bias Passport: a clear, structured compliance report
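To make the two headline metrics concrete, here is a minimal sketch of how they can be computed, assuming binary predictions, NumPy arrays, and a single sensitive attribute; the function names are illustrative, not FairCare's actual API:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive rate or false-positive rate across groups."""
    gaps = []
    for outcome in (1, 0):  # 1 -> compare TPRs, 0 -> compare FPRs
        mask = y_true == outcome
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

Both gaps are 0 for a perfectly fair model and grow toward 1 as disparities widen (the sketch assumes every group contains patients with both outcomes).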

👉 Result: More equitable patient selection, reduced bias, and safer AI deployment.

🛠️ How we built it

FairCare is built using a modular, three-part architecture:

  1. Audit Engine: built with Python, FastAPI, and Scikit-learn, it analyzes datasets and model predictions to compute fairness metrics across demographic groups (an illustrative endpoint sketch follows).
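As an illustration of how this stack fits together, here is a hedged sketch of an audit endpoint that returns per-group selection and error rates; the route, request schema, and field names are assumptions for the example, not FairCare's actual API:

```python
from fastapi import FastAPI
from pydantic import BaseModel
import numpy as np

app = FastAPI(title="FairCare Audit Engine (illustrative sketch)")

class AuditRequest(BaseModel):
    y_true: list[int]   # ground-truth outcomes (1 = needed care)
    y_pred: list[int]   # model decisions (1 = selected for care)
    group: list[str]    # demographic group label per patient

@app.post("/audit")
def audit(req: AuditRequest) -> dict:
    y_true = np.array(req.y_true)
    y_pred = np.array(req.y_pred)
    group = np.array(req.group)

    per_group = {}
    for g in np.unique(group):
        m = group == g
        per_group[str(g)] = {
            "selection_rate": float(y_pred[m].mean()),
            "tpr": float(y_pred[m & (y_true == 1)].mean()),
            "fpr": float(y_pred[m & (y_true == 0)].mean()),
        }
    return {"per_group": per_group}
```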

  2. Remediation Engine: applies fairness-aware techniques such as re-weighting and threshold tuning, allows dynamic balancing between accuracy and fairness, and visualizes how many patients are recovered into care (a simple threshold-tuning sketch follows).
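One of the simplest remediation techniques named above is per-group threshold tuning. The sketch below picks a separate decision threshold for each group so selection rates line up, with the target rate acting as the knob between accuracy and fairness; it is a generic post-processing illustration under those assumptions, not FairCare's exact remediation code (re-weighting would instead adjust sample weights before retraining):

```python
import numpy as np

def per_group_thresholds(scores, group, target_rate=0.2):
    """One threshold per group so each group's selection rate is ~target_rate."""
    return {g: float(np.quantile(scores[group == g], 1 - target_rate))
            for g in np.unique(group)}

def select_with_thresholds(scores, group, thresholds):
    """Apply each patient's group-specific threshold to the model's risk score."""
    cutoffs = np.array([thresholds[g] for g in group])
    return (scores >= cutoffs).astype(int)
```

Sweeping `target_rate` (or tuning the thresholds on a validation set) is what lets a dashboard show, in real time, how many previously excluded patients are recovered at each point of the accuracy/fairness trade-off.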

  3. Compliance Generator: transforms technical outputs into simple, structured reports designed to align with emerging AI regulations (a minimal report sketch is shown below).
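To make the Clinical Bias Passport idea concrete, here is a minimal sketch of turning before/after fairness gaps into a structured summary; the field names and the 0.1 pass threshold are illustrative assumptions, not a regulatory standard:

```python
import json
from datetime import date

def bias_passport(model_name, gaps_before, gaps_after, max_gap=0.1):
    """Summarize fairness gaps before and after remediation as a simple report."""
    return json.dumps({
        "model": model_name,
        "audit_date": date.today().isoformat(),
        "metrics": {
            name: {
                "before": gaps_before[name],
                "after": gaps_after[name],
                "status": "PASS" if gaps_after[name] <= max_gap else "NEEDS REVIEW",
            }
            for name in gaps_before
        },
    }, indent=2)

# Example usage with hypothetical numbers:
print(bias_passport("triage-model-v2",
                    {"demographic_parity_gap": 0.24},
                    {"demographic_parity_gap": 0.06}))
```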

The frontend dashboard provides an interactive view of bias metrics, disparities, and improvements.

⚠️ Challenges we ran into

- Balancing fairness and accuracy without degrading model performance
- Detecting proxy bias when sensitive attributes are not explicitly available
- Making complex fairness metrics easy to understand for non-technical users
- Designing a system that feels practical, not just theoretical
- Presenting ethical AI concepts in a clear and actionable way

🏆 Accomplishments that we're proud of

- Reduced the bias gap significantly while maintaining strong model performance
- Demonstrated recovery of previously excluded patients into the decision pipeline
- Built a working prototype with real-time analysis and visualization
- Created a system that bridges technical AI and real-world usability
- Designed a solution with potential for real-world healthcare deployment

📚 What we learned

- Accuracy alone is not enough in high-stakes systems like healthcare
- Fairness must be designed into AI systems, not added later
- Small improvements in fairness can create large real-world impact
- Transparency and explainability are essential for trust in AI
- Building responsible AI requires both technical and ethical thinking

🔮 What's next for FairCare

- Integrate with real-world healthcare AI pipelines as a pre-deployment audit layer
- Expand support for global AI compliance frameworks
- Add explainability features to show why bias occurs
- Develop a scalable SaaS platform for hospitals and organizations
- Extend the solution to other domains such as finance, hiring, and public systems

Built With

Python · FastAPI · Scikit-learn
