Inspiration

The rapid growth of AI in healthcare holds promise, but also danger. Misinformation, unethical advice, and unauthorized access to medical data can be harmful, even fatal. Inspired by regulations such as the EU AI Act, GDPR, and FISMA, we built a governance-first AI system that prioritizes safety, transparency, and role-based control.

What it does

AI Medicare is a role-aware, governance-compliant GenAI system for medical use cases.

Key features include:

Role-based access (Admin, User, Analyst)

Prompt moderation using Toxic-BERT + keyword filters

LLM generation using Meta LLaMA 3 (8B) via Together AI API

Admin-only access to sensitive structured datasets

Full logging of prompts, outputs, and decisions

Analyst dashboard with paginated audit logs

Feedback loop for ongoing improvement
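The prompt-moderation feature above can be sketched as a two-stage guard: a keyword filter first, then a toxicity score from a classifier such as Toxic-BERT. This is a minimal illustration, not the project's actual code; `BANNED_KEYWORDS`, the 0.5 threshold, and the stubbed-out classifier are all assumptions.

```python
# Sketch of a prompt guard: keyword filter first, then a toxicity score.
# BANNED_KEYWORDS and the 0.5 threshold are illustrative values, not the
# project's actual configuration.

BANNED_KEYWORDS = {"natural cancer cure", "lethal dose", "miracle drug"}

def keyword_flag(prompt: str) -> bool:
    """Return True if the prompt contains any banned phrase."""
    text = prompt.lower()
    return any(phrase in text for phrase in BANNED_KEYWORDS)

def moderate_prompt(prompt: str, toxicity_score: float,
                    threshold: float = 0.5) -> dict:
    """Combine the keyword filter with a toxicity score.

    In the real system a local Toxic-BERT pipeline would supply
    `toxicity_score`; here it is passed in so the logic is testable.
    Every decision carries a human-readable reason for the audit log.
    """
    if keyword_flag(prompt):
        return {"allowed": False, "reason": "Prompt flagged by keyword filter"}
    if toxicity_score >= threshold:
        return {"allowed": False, "reason": "Prompt flagged as toxic"}
    return {"allowed": True, "reason": "OK"}
```

Returning a structured decision with a reason (rather than a bare boolean) is what makes full logging of prompts and decisions possible downstream.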

How we built it

Backend: Flask (Python), SQLite, Pandas

LLM: Meta LLaMA 3 (8B) via Together AI

Moderation: Toxic-BERT (local pipeline)

Frontend: Jinja2 templates + Bootstrap 5

Governance:

Prompt Guard (Toxic-BERT + Banned Keywords)

Policy Enforcer (Role-based restrictions)

Output Auditor (detects misinformation, bias, unsafe content)
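The Policy Enforcer and Output Auditor stages can be sketched as follows. The role-to-resource mapping and the unsafe-marker list are illustrative assumptions; the real components check more than simple string matches.

```python
# Sketch of two governance stages described above. Names and policies are
# illustrative; each stage returns (allowed, reason) so every decision can
# be logged with an explanation.

ROLE_POLICIES = {
    "admin": {"structured_data"},   # admin-only dataset access
    "analyst": {"audit_logs"},
    "user": set(),
}

def enforce_policy(role: str, resource: str) -> tuple:
    """Policy Enforcer: role-based restriction on a requested resource."""
    allowed = resource in ROLE_POLICIES.get(role, set())
    reason = "OK" if allowed else f"Role '{role}' may not access '{resource}'"
    return allowed, reason

def audit_output(text: str,
                 unsafe_markers=("guaranteed cure", "stop your medication")) -> tuple:
    """Output Auditor: crude check for unsafe claims in generated text.

    The real auditor also screens for misinformation and bias; this marker
    list is purely illustrative.
    """
    lowered = text.lower()
    for marker in unsafe_markers:
        if marker in lowered:
            return False, f"Output flagged: contains '{marker}'"
    return True, "OK"
```

Chaining Prompt Guard, Policy Enforcer, and Output Auditor in that order means an unsafe request can be rejected cheaply before any LLM call is made.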

Challenges we ran into

Balancing strict filtering with informative outputs

Handling edge-case prompts (e.g., “natural cancer cures”)

LLM API response time and token quota constraints

Managing access to sensitive health data without violating privacy or ethical constraints

Creating an intuitive UI that’s still governance-aware

Accomplishments that we're proud of

Integrated real-time prompt + output moderation using open models

Built a complete policy engine that maps to GDPR, EU AI Act, and ISO/IEC 42001 principles

Delivered medical LLM outputs with transparency and safeguards

Developed an auditor-facing dashboard that allows investigation and traceability

Created a feedback loop that informs future moderation strategies
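Paginated audit-log retrieval of the kind the analyst dashboard needs can be sketched with SQLite's LIMIT/OFFSET. The table and column names here are assumptions, not the project's actual schema.

```python
import sqlite3

def fetch_audit_page(conn: sqlite3.Connection, page: int, per_page: int = 20):
    """Return one page of audit entries, newest first.

    `audit_log` and its columns are an illustrative schema; the project's
    actual tables may differ.
    """
    offset = (page - 1) * per_page
    cur = conn.execute(
        "SELECT id, prompt, output, decision, reason "
        "FROM audit_log ORDER BY id DESC LIMIT ? OFFSET ?",
        (per_page, offset),
    )
    return cur.fetchall()
```

LIMIT/OFFSET pagination is simple and fine at hackathon scale; a later PostgreSQL migration could switch to keyset pagination if the log grows large.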

What we learned

Moderation is not just post-processing—it must start at the prompt layer.

Roles matter. Admins, users, and analysts have fundamentally different access needs.

Transparency and explainability (like "Reason: Prompt flagged as toxic") build user trust.

Policy-driven GenAI is viable with open-weight models + lightweight infra.

What's next for AI Medicare

Live alerting for harmful queries or repeated misuse

Plug into EHR or hospital systems for real-time deployments

Swap SQLite for PostgreSQL + move to scalable FastAPI microservices

Add fine-tuning dashboards and multiple LLM backends

Expand to domains like law, finance, or education with domain-specific datasets and policies

Built With

Python, Flask, SQLite, Pandas, Jinja2, Bootstrap 5, Together AI API, Toxic-BERT, Meta LLaMA 3
