Inspiration

Healthcare AI is increasingly used to decide who gets critical care first—but most systems are optimized only for accuracy. This creates a dangerous blind spot: models can perform well overall while systematically excluding vulnerable groups.

We were inspired by the “accuracy paradox”—where a model can be 99% accurate yet fail entire minority populations. In our dataset alone, 847 marginalized patients were wrongly denied care.

FairCare AI was built to answer one critical question: “Who is being left behind by AI?”

⚙️ What it does

FairCare AI is a Clinical Bias Audit & Compliance Engine that ensures healthcare AI systems are fair before deployment.

It:

- 🔍 Audits models using fairness metrics like Demographic Parity and Equalized Odds
- ⚖️ Detects hidden bias and proxy discrimination
- 🔧 Optimizes models using fairness-constrained tuning
- 📊 Shows real-time trade-offs between accuracy and fairness
- 📄 Generates a Clinical Bias Passport (automated compliance report)
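As an illustrative sketch (not the exact FairCare implementation), the two headline metrics can be computed from binary predictions and group labels; all function names here are hypothetical:

```python
def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest gap in true-positive or false-positive rate across groups."""
    def rate(idx, label):
        sel = [i for i in idx if y_true[i] == label]
        return sum(y_pred[i] for i in sel) / len(sel) if sel else 0.0
    tpr, fpr = {}, {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        tpr[g] = rate(idx, 1)  # P(pred=1 | true=1, group=g)
        fpr[g] = rate(idx, 0)  # P(pred=1 | true=0, group=g)
    return max(max(tpr.values()) - min(tpr.values()),
               max(fpr.values()) - min(fpr.values()))
```

For example, with `y_pred = [1, 0, 1, 1]` and `groups = ["a", "a", "b", "b"]`, group "a" is selected at 50% and group "b" at 100%, so the demographic-parity gap is 0.5.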

👉 Result: More patients included, less bias, safer AI deployment.

🛠️ How we built it

We designed a three-pillar architecture:

  1. Deep Audit Engine

- Built with FastAPI + Scikit-learn
- Processes 200K+ healthcare records
- Calculates fairness gaps across demographic groups

  2. Remediation Engine

- Implements fairness-constrained optimization
- Allows dynamic tuning of accuracy vs. fairness
- Visualizes how many patients are “recovered” into care
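One common remediation technique is per-group decision thresholds chosen so every group reaches a target selection rate; this is a minimal sketch of that idea, not necessarily the exact optimization FairCare performs (names are hypothetical):

```python
def tune_group_thresholds(scores, groups, target_rate):
    """Pick a score threshold per group so each group admits roughly
    target_rate of its members (a simple post-hoc fairness fix)."""
    thresholds = {}
    for g in set(groups):
        gs = sorted((s for s, grp in zip(scores, groups) if grp == g),
                    reverse=True)
        k = max(1, round(target_rate * len(gs)))  # how many to admit
        thresholds[g] = gs[k - 1]                 # admit the top-k scores
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    """Re-decide each case against its own group's threshold."""
    return [1 if s >= thresholds[g] else 0
            for s, g in zip(scores, groups)]
```

Raising `target_rate` for under-served groups is what “recovers” patients into care, at the cost of a small accuracy trade-off that a dashboard can surface in real time.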

  3. Compliance Generator

- Uses Gemini 1.5 Flash
- Converts metrics into legal-grade reports
- Aligns with frameworks like the EU AI Act and India DPDP
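The LLM step reduces to turning numeric audit results into a prompt the model can expand into a report. A hedged sketch of that prompt construction (the wording, function name, and framework list are illustrative assumptions, not FairCare's actual template):

```python
def build_compliance_prompt(metrics, frameworks=("EU AI Act", "India DPDP")):
    """Format raw fairness metrics into a prompt for an LLM
    (e.g. Gemini 1.5 Flash) to draft a Clinical Bias Passport section."""
    lines = [f"- {name}: {value:.3f}" for name, value in metrics.items()]
    return (
        "You are a compliance analyst. Using the fairness audit metrics "
        "below, draft a plain-language report section that explains each "
        "gap and maps the findings to " + " and ".join(frameworks)
        + ".\n\nMetrics:\n" + "\n".join(lines)
    )
```

Keeping the prompt deterministic and metric-driven means the generated report can always be traced back to the underlying audit numbers.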

Frontend provides an interactive dashboard for real-time insights.

⚠️ Challenges we ran into

- Balancing fairness vs. accuracy without overfitting
- Detecting proxy discrimination when sensitive attributes are removed
- Translating complex fairness metrics into simple, understandable insights
- Designing a system that is both technical and regulator-friendly
- Ensuring the solution feels like a product, not just a research model

🏆 Accomplishments that we're proud of

- Reduced the bias gap from ~25% to under 10%
- Recovered hundreds of excluded patients into the care pipeline
- Built a working prototype, not just a concept
- Created a regulatory-ready AI audit system
- Designed a solution that is scalable across hospitals and regions

📚 What we learned

- Accuracy alone is not a reliable metric in high-stakes AI
- Fairness requires intentional design, not post-processing fixes
- Small trade-offs in accuracy can lead to massive ethical gains
- Regulatory compliance will become mandatory in AI deployment
- The real challenge is making AI understandable to humans

🔮 What's next for FairCare

- Integrate directly with hospital AI pipelines (pre-deployment audit layer)
- Expand support for more global regulations
- Add explainability (why decisions are biased)
- Build a SaaS platform for healthcare institutions
- Extend beyond healthcare into finance, hiring, and public systems
