Inspiration

Healthcare AI is increasingly used to prioritize critical care—but most systems are optimized only for accuracy.

This creates a dangerous blind spot: A model can perform well overall while systematically excluding vulnerable groups.

In our dataset alone, 847 marginalized patients were wrongly denied care.

FairCare AI was built to answer one critical question: 👉 “Who is being left behind by AI?”

⚙️ What it does

FairCare AI is a Clinical Bias Audit & Compliance Engine that ensures healthcare AI systems are fair before deployment.

It:

- 🔍 Audits models using fairness metrics like Demographic Parity and Equalized Odds
- ⚖️ Detects hidden and proxy bias
- 🔧 Optimizes models using fairness-constrained tuning
- 📊 Visualizes trade-offs between accuracy and fairness in real time
- 📄 Generates a Clinical Bias Passport for regulatory compliance
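The two named audit metrics are straightforward to compute from model outputs. A minimal sketch (the function names and toy data are ours, not FairCare's API): demographic parity compares selection rates across groups, and equalized odds compares true- and false-positive rates.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in selection rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    tprs, fprs = [], []
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        tprs.append(yp[yt == 1].mean())
        fprs.append(yp[yt == 0].mean())
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy example: group B is selected far less often than group A.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_gap(y_pred, group))           # 0.75 - 0.25 = 0.5
print(equalized_odds_gap(y_true, y_pred, group))       # 0.5
```

A gap of 0 on both metrics would mean the model treats the groups identically in rate terms; the audit flags anything above a chosen tolerance.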

👉 Result: More inclusive decisions, reduced bias, and safer AI deployment.

🛠️ How we built it

We designed a three-pillar architecture:

  1. Deep Audit Engine

- Built with FastAPI + Scikit-learn
- Processes 200K+ healthcare records
- Computes fairness gaps across demographic groups

  2. Remediation Engine

- Implements fairness-constrained optimization
- Dynamically balances accuracy vs fairness
- Shows how many patients are "recovered" into care
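One common way to implement this kind of remediation is post-processing with group-specific decision thresholds: search for thresholds that shrink the selection-rate gap while keeping accuracy above a floor, then count the patients newly admitted. This is an illustrative sketch of the idea, not FairCare's exact optimizer:

```python
import numpy as np

def remediate(scores, y_true, group, acc_floor=0.7):
    """Brute-force per-group thresholds: minimise the selection-rate gap
    subject to overall accuracy staying above acc_floor."""
    g0, g1 = np.unique(group)
    best = None
    for t0 in np.linspace(0.1, 0.9, 17):
        for t1 in np.linspace(0.1, 0.9, 17):
            y_hat = np.where(group == g0, scores >= t0, scores >= t1)
            if (y_hat == y_true).mean() < acc_floor:
                continue  # too much accuracy lost
            rates = [y_hat[group == g].mean() for g in (g0, g1)]
            gap = max(rates) - min(rates)
            if best is None or gap < best[0]:
                best = (gap, {g0: t0, g1: t1}, y_hat)
    return best

# Toy scores where the model systematically under-scores group B.
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.45, 0.4, 0.3, 0.2])
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array(["A"] * 4 + ["B"] * 4)

y_base = scores >= 0.5                    # single-threshold baseline
gap, thresholds, y_hat = remediate(scores, y_true, group)
recovered = int((y_hat & ~y_base).sum())  # patients newly admitted to care
print(f"gap after remediation: {gap:.2f}, patients recovered: {recovered}")
```

With the single 0.5 threshold, no group-B patient is admitted; after per-group thresholding the selection-rate gap drops to zero and three previously excluded patients enter the care pipeline, at a small accuracy cost. That is exactly the trade-off the dashboard visualizes.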

  3. Compliance Generator

- Powered by Gemini 1.5 Flash
- Converts metrics into legal-grade reports
- Aligns with the EU AI Act & India DPDP
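The metrics-to-report step can be sketched as a prompt builder whose output is handed to the model. The prompt wording and report framing below are our illustration, not FairCare's actual templates:

```python
def build_passport_prompt(metrics: dict) -> str:
    """Fold raw fairness metrics into a report-drafting prompt."""
    lines = [f"- {name}: {value:.3f}" for name, value in metrics.items()]
    return (
        "Draft a clinical bias audit report ('Clinical Bias Passport') "
        "summarising these fairness metrics for regulators, referencing "
        "EU AI Act and India DPDP obligations where relevant:\n"
        + "\n".join(lines)
    )

prompt = build_passport_prompt(
    {"demographic_parity_gap": 0.09, "equalized_odds_gap": 0.08}
)

# With the google-generativeai package (requires an API key), the draft
# would then be requested roughly as:
#   import google.generativeai as genai
#   genai.configure(api_key=...)
#   model = genai.GenerativeModel("gemini-1.5-flash")
#   report = model.generate_content(prompt).text
```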

Frontend:

Interactive dashboard using Chart.js + Plotly

⚠️ Challenges we ran into

- Balancing fairness and accuracy without overfitting
- Detecting proxy discrimination when sensitive attributes are removed
- Translating complex fairness metrics into simple insights
- Designing a system usable by both engineers and regulators
- Making it feel like a real product, not just a research model

🏆 Accomplishments that we're proud of

- Reduced the bias gap from ~25% to under 10%
- Recovered hundreds of excluded patients into the care pipeline
- Built a working, end-to-end prototype
- Created a regulatory-ready audit system
- Designed a scalable solution for hospitals and beyond

📚 What we learned

- Accuracy alone is not enough in high-stakes AI
- Fairness must be built into the system, not added later
- Small accuracy trade-offs can create major ethical benefits
- AI regulation is becoming essential, not optional
- The biggest challenge is making AI understandable and trustworthy

🔮 What's next for FairCare

- Integrate directly with hospital AI systems as a pre-deployment audit layer
- Add explainability (why decisions are biased)
- Expand support for global regulatory frameworks
- Launch as a SaaS platform for healthcare institutions
- Extend into finance, hiring, and public decision systems
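One of the challenges above, detecting proxy discrimination once sensitive attributes are removed, is often checked by trying to predict the sensitive attribute from the remaining features: if a simple model can, those features leak group membership. The feature names and synthetic data here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
group = rng.integers(0, 2, n)                 # sensitive attribute (0/1)
pincode_zone = group + rng.normal(0, 0.3, n)  # strong proxy for group
lab_result = rng.normal(0, 1, n)              # unrelated clinical feature
X = np.column_stack([pincode_zone, lab_result])

# Try to recover the sensitive attribute from the "neutral" features.
clf = LogisticRegression().fit(X, group)
auc = roc_auc_score(group, clf.predict_proba(X)[:, 1])
print(f"group recoverable from 'neutral' features: AUC = {auc:.2f}")
# An AUC near 0.5 would mean no leakage; here it is close to 1.0,
# flagging pincode_zone as a proxy even though 'group' was dropped.
```

Any feature with a large coefficient in such a probe is a candidate proxy and gets surfaced in the audit.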
