## Inspiration

Clinical AI systems are increasingly being used in hospitals for diagnosis, triage, and treatment recommendations. However, many of these models are trained on datasets that underrepresent women, elderly patients, darker-skinned individuals, and low-income populations. This creates hidden algorithmic bias that can lead to unequal healthcare outcomes.

We were inspired by the growing need for responsible and transparent AI in healthcare. Today, hospitals lack accessible tools to audit AI systems for fairness before deployment. Most compliance reviews are manual, time-consuming, and difficult for non-technical stakeholders to interpret. FairCare AI was built to bridge that gap by giving hospitals and ML teams a practical internal platform to detect, explain, and remediate bias in clinical machine learning systems before real patients are affected.
## What it does

FairCare AI is an enterprise-grade internal governance platform that audits clinical AI models for demographic bias and compliance risks. The platform allows hospitals and AI teams to:
- Run automated fairness audits on clinical machine learning models
- Detect demographic disparities using metrics like Demographic Parity and Equalized Odds
- Visualize which patient groups are negatively impacted
- Use SHAP explainability to identify proxy bias in sensitive features
- Apply fairness remediation algorithms in real time
- Generate compliance-ready PDF audit reports
- Receive AI-generated deployment recommendations aligned with regulations such as the EU AI Act and India’s DPDP Act
- Interact with the system through a voice-powered AI audit assistant
FairCare AI transforms AI governance from a manual review process into an interactive, explainable, and scalable workflow.
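To make the two disparity metrics above concrete, here is a minimal hand-rolled sketch of what an audit computes; Fairlearn's `demographic_parity_difference` and `equalized_odds_difference` wrap equivalent logic with more options (the toy data below is invented for illustration):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive rate or false-positive rate between groups."""
    gaps = []
    for positive in (1, 0):  # TPR over y_true == 1, FPR over y_true == 0
        rates = [y_pred[(group == g) & (y_true == positive)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy audit: a model that flags patients in group "B" far less often
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))           # 0.75 - 0.25 = 0.5
print(equalized_odds_difference(y_true, y_pred, group))       # 0.5
```

A value of 0 would mean parity; the platform surfaces these gaps per group so reviewers can see exactly who is disadvantaged.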
## How we built it

We built FairCare AI using a modern full-stack AI architecture.

**Frontend**

- React 18
- Vite
- Tailwind CSS
- Recharts for fairness visualizations
- jsPDF for compliance report generation
- Lucide React for UI icons
**Backend**

- FastAPI for high-performance APIs
- Scikit-learn for baseline clinical models
- Fairlearn for fairness metrics and remediation
- SHAP for explainable AI analysis
- Pandas and NumPy for data processing
**AI Features**

- Google Gemini API for compliance reasoning and AI-generated insights
- Gemini TTS for the voice-to-audit assistant
**Deployment**

- Firebase Hosting for frontend deployment
- Google Cloud Run for scalable backend hosting
- Docker for containerization
We designed the system as a four-tab dashboard focused on usability for compliance officers, hospital administrators, and ML engineers.
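The proxy-bias detection mentioned above relies on SHAP in the platform itself; as a simplified stand-in (with invented feature names like `zip_code_risk`), the idea can be sketched with scikit-learn by flagging features that both carry model weight and track a held-out sensitive attribute:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic clinical data (all names hypothetical):
# "zip_code_risk" secretly encodes the sensitive attribute
group = rng.integers(0, 2, n)                    # sensitive attribute, held out of training
zip_code_risk = group + rng.normal(0, 0.3, n)    # strong proxy for group
blood_pressure = rng.normal(120, 15, n)          # legitimate clinical feature
# Historically biased labels: group 1 was systematically over-flagged
y = ((blood_pressure > 125) | (group == 1)).astype(int)

X = np.column_stack([zip_code_risk, blood_pressure])
feature_names = ["zip_code_risk", "blood_pressure"]
model = LogisticRegression(max_iter=1000).fit(X, y)

def find_proxies(X, names, sensitive, model, corr_thresh=0.5, coef_thresh=0.5):
    """Flag features with both high model weight and high correlation
    to the sensitive attribute -- candidates for proxy bias."""
    flags = []
    for name, col, coef in zip(names, X.T, model.coef_[0]):
        corr = abs(np.corrcoef(col, sensitive)[0, 1])
        if corr > corr_thresh and abs(coef) > coef_thresh:
            flags.append(name)
    return flags

print(find_proxies(X, feature_names, group, model))
```

SHAP gives a much richer per-prediction attribution, but the principle is the same: a feature the model leans on heavily while it mirrors a demographic attribute deserves review.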
## Challenges we ran into

One of the biggest challenges was translating complex fairness mathematics into a user-friendly experience that non-technical healthcare stakeholders could understand. Another major challenge was integrating multiple AI systems into one workflow:

- fairness auditing
- explainability
- remediation
- voice interaction
- compliance generation
Balancing model accuracy with fairness remediation was also difficult. Improving fairness metrics sometimes affected prediction performance, so we had to carefully visualize those trade-offs in real time. Handling large healthcare datasets efficiently while keeping the UI responsive was another technical challenge during development.
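The accuracy/fairness trade-off we visualized can be illustrated with a hand-rolled sketch of threshold adjustment, the idea behind Fairlearn's `ThresholdOptimizer` (the scores below are synthetic, with group 1 systematically under-scored):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)                 # 0/1 sensitive attribute
y = rng.integers(0, 2, n)                     # true outcome
# Hypothetical model score: informative, but shifted down for group 1
score = 0.3 * y + rng.normal(0.15, 0.15, n) - 0.10 * group

def audit(pred):
    acc = (pred == y).mean()
    dpd = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, dpd

# Single global threshold: group 1 gets flagged far less often
acc0, dpd0 = audit((score > 0.30).astype(int))

# Remediation: per-group thresholds chosen to equalize selection rates
thresholds = {g: np.quantile(score[group == g], 0.5) for g in (0, 1)}
thr = np.where(group == 0, thresholds[0], thresholds[1])
acc1, dpd1 = audit((score > thr).astype(int))

print(f"before: accuracy={acc0:.3f}  demographic parity diff={dpd0:.3f}")
print(f"after:  accuracy={acc1:.3f}  demographic parity diff={dpd1:.3f}")
```

The disparity collapses after remediation while accuracy shifts only slightly; plotting both numbers side by side as the remediation runs is exactly the trade-off view the dashboard provides.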
## Accomplishments that we're proud of

We are proud of building a fully integrated AI governance workflow instead of just a standalone fairness dashboard. Some accomplishments include:

- Real-time fairness remediation visualization
- Interactive “patients saved” tracking
- AI-generated compliance passport reports
- Voice-powered audit explanations
- Production-style enterprise dashboard design
- End-to-end explainability pipeline using SHAP
We are especially proud that FairCare AI combines technical depth with real-world usability and addresses a meaningful healthcare problem.
## What we learned

Through this project, we learned that responsible AI is not just a technical problem; it is also a communication and governance problem. We gained hands-on experience with:

- algorithmic fairness engineering
- explainable AI
- multimodal AI systems
- enterprise dashboard design
- AI compliance workflows
We also learned how important transparency is when deploying AI systems in high-stakes environments like healthcare. Most importantly, we learned how difficult — and essential — it is to build AI systems that are both accurate and equitable.
## What's next for FairCare

Our next goal is to evolve FairCare AI into a full enterprise AI governance platform for healthcare organizations. Future plans include:

- Integration with hospital EHR systems
- Continuous real-time model monitoring after deployment
- Support for additional fairness frameworks and regulations
- Team-based approval workflows for compliance officers
- Audit history and model version tracking
- Federated privacy-preserving bias analysis
- Expanded multilingual voice support
- Support for image-based medical AI systems such as radiology models
Long term, we envision FairCare AI becoming a standard safety layer for trustworthy clinical AI deployment worldwide.