Inspiration

The rise of AI in healthcare brings immense potential — but also serious risks around privacy, data integrity, and compliance. AI Curator was inspired by the challenge of ensuring that medical AI models remain secure, unbiased, and trustworthy.

By leveraging the power of Google Cloud’s AI and security stack, AI Curator builds confidence among healthcare providers, regulators, and patients, ensuring that AI-driven insights are safe, fair, and compliant.

What it does

AI Curator is a GCP Cloud Run solution designed to govern and secure medical AI models throughout their lifecycle.

It provides end-to-end protection by:

  • 🔒 Safeguarding patient data and preventing privacy breaches
  • 🧩 Maintaining model integrity against tampering or manipulation
  • ⚖️ Ensuring fairness and reducing bias in AI outcomes
  • 📜 Enforcing compliance with regulations like HIPAA
  • 🤝 Building trust with patients, clinicians, and stakeholders

Powered by Google Cloud Run, Firestore, and the Google Agent Development Kit (ADK), AI Curator delivers a scalable, secure, and auditable platform for managing and monitoring medical AI models.
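The "auditable" part of the platform can be illustrated with a tamper-evident, hash-chained log. This is a minimal standard-library sketch, not AI Curator's actual schema; the field names (`record`, `prev`, `hash`) are illustrative assumptions:

```python
import hashlib
import json


def append_entry(chain, record):
    """Append a record whose hash also covers the previous entry's hash,
    so any later modification breaks verification of the whole chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return chain


def verify(chain):
    """Recompute every hash in order; return False if any entry was altered."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Because each hash depends on the one before it, an attacker cannot silently rewrite an earlier audit entry without invalidating every later one.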

How we built it

We built AI Curator using a modern, serverless architecture powered by Google Cloud Platform (GCP) to ensure scalability, security, and compliance.

🔧 Tech Stack

  • Frontend: Flask-powered UI for intuitive monitoring and governance dashboards
  • Backend: Serverless microservices built with Flask and deployed on Google Cloud Run, with real-time updates from the ADK delivered over Socket.IO
  • Database: Cloud Firestore for secure, real-time storage and audit logging
  • AI Integration: Google Gemini API and Google Agent Development Kit (ADK) to automate model monitoring, policy enforcement, and anomaly detection

🔐 Architecture Highlights

  • Centralized governance for models, data, access, and compliance
  • Role-based security controls for data scientists, auditors, and compliance teams
  • Continuous monitoring to detect data drift, bias, and unauthorized access
  • Event-driven workflows for automated policy actions and alerts

Together, these components create a secure, scalable, and intelligent platform that protects medical AI models from data breaches, tampering, and compliance risks.

Challenges we ran into

Building AI Curator for the healthcare domain came with several technical and regulatory challenges:

  • 🧩 Data Privacy & Compliance:
    Handling sensitive patient data required strict adherence to HIPAA and GDPR standards while still enabling AI model access for validation.

  • 🔒 Security Enforcement:
    Implementing end-to-end encryption, secure APIs, and controlled access across multiple GCP services without affecting performance.

  • 🤖 Model Governance Complexity:
    Designing a unified framework to monitor diverse AI models, each with different data formats, risk levels, and validation criteria.

  • ⚖️ Bias and Fairness Checks:
    Ensuring medical AI models produced unbiased predictions while maintaining clinical accuracy.

  • 🚀 Integration with Existing Systems:
    Seamlessly connecting with existing healthcare infrastructure, data pipelines, and model registries.

  • ⏱️ Performance Optimization:
    Maintaining real-time responsiveness for model monitoring and audit logging in a serverless environment.

Despite these challenges, AI Curator evolved into a robust, scalable, and compliant solution for securing medical AI models.

Accomplishments that we're proud of

  • 🚀 End-to-End Secure AI Governance:
    Successfully built a unified platform that secures medical AI models from data ingestion to deployment.

  • 🧠 Automated Compliance Validation:
    Integrated real-time monitoring and rule-based validation aligned with HIPAA and GDPR standards.

  • 🔍 Bias and Drift Detection:
    Implemented intelligent checks to identify and alert on model bias, data drift, and integrity issues.

  • ☁️ Scalable Cloud-Native Architecture:
    Deployed seamlessly on Google Cloud Run with auto-scaling and zero downtime.

  • 🤝 Cross-Team Collaboration:
    Combined efforts from data engineers, cloud architects, and healthcare domain experts to build a trusted AI governance system.

  • 🌍 Impact on Responsible AI:
    Contributed to safer, fairer, and more transparent AI adoption in the healthcare ecosystem.
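The drift-detection accomplishment above can be illustrated with one common technique, the Population Stability Index (PSI), which compares a model's live input distribution against its training baseline. This is a minimal stdlib sketch under assumed bucket edges and thresholds, not the exact checks AI Curator runs:

```python
import math


def psi(expected, actual, edges):
    """Population Stability Index between two samples over fixed bucket
    edges. A common rule of thumb: PSI > 0.2 signals significant drift."""
    def dist(sample):
        counts = [0] * (len(edges) + 1)
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bucket index
        total = len(sample)
        # Small smoothing term avoids log(0) on empty buckets.
        return [(c + 1e-6) / (total + 1e-6 * len(counts)) for c in counts]

    e_dist, a_dist = dist(expected), dist(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_dist, a_dist))
```

A monitoring job can compute PSI per feature on each batch of inference traffic and raise an alert event when the score crosses the threshold.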

These accomplishments demonstrate AI Curator’s potential to redefine trust and accountability in medical AI systems.

What we learned

  • 🔐 Security and Privacy First:
    Building for healthcare taught us that AI innovation must always be grounded in data protection and patient trust.

  • 🧩 Complexity of Model Governance:
    Managing diverse AI models requires standardized governance frameworks that balance flexibility with compliance.

  • ⚖️ Ethical AI Is a Continuous Process:
    Fairness, bias detection, and transparency are not one-time checks — they need continuous monitoring and refinement.

  • ☁️ Power of Cloud-Native Design:
    Using Google Cloud Run, Firestore, and ADK showed how serverless architecture can simplify scaling and automation without compromising security.

  • 🤝 Interdisciplinary Collaboration:
    Success came from collaboration between AI engineers, clinicians, and compliance experts, reinforcing that responsible AI is a team effort.

  • 🚀 Innovation Within Constraints:
    Navigating strict healthcare regulations inspired creative ways to automate compliance while maintaining performance and usability.

What's next for AI Curator for Medical Privacy/Security

  • 🧠 Enhanced Explainability:
    Integrate AI interpretability tools to help clinicians understand and trust model predictions in critical care decisions.

  • 🧾 Expanded Compliance Frameworks:
    Extend support beyond HIPAA and GDPR to include FDA AI/ML guidelines and EU AI Act readiness.

  • 🔍 Proactive Threat Detection:
    Leverage AI-driven anomaly detection to identify potential data breaches or unauthorized model tampering in real time.

  • 🤖 Self-Auditing Models:
    Develop “auditable AI models” that log every inference, training event, and policy decision for transparent traceability.

  • 💬 Clinician Copilot Integration:
    Integrate with conversational agents to help clinicians validate AI outputs securely through natural language queries.

  • 🌍 Scalable Global Deployment:
    Roll out AI Curator as a plug-and-play medical AI governance service for hospitals, research labs, and MedTech startups worldwide.

AI Curator’s next phase aims to set new standards for medical AI safety, privacy, and accountability.

The live demo is hosted at https://ai-curator-c7qrsmhagq-uc.a.run.app/. Please have a look!
