MedGuard AI: Building Trust in Medical AI
Imagine a world where AI assists doctors in making life-saving decisions, catches diseases earlier than ever before, and helps overwhelmed healthcare systems deliver better care to more patients. That world is within reach—but only if we can trust the AI powering it.
Hospitals are overwhelmed: recent estimates put this year's shortage of primary care doctors between 15,000 and 35,000. AI can help, with earlier detection, clinical decision support, and more taking load off doctors so they can do more. But not as it stands today, with hallucinations, FDA and HIPAA violations, and false advice that harms patients. As a result, AI in healthcare sits in a legal gray area, because it isn't yet reliable or trustworthy.
Healthcare has incredible potential for AI integration, but trust, compliance, and accuracy issues prevent adoption.
That's why we built MedGuard AI to bridge that gap: a credibility layer between AI and harm that verifies information, prevents hallucinations, and ensures compliance before any AI-generated medical response reaches patients or clinicians. We aren't trying to replace AI tools in healthcare; we aim to be modular middleware between existing AI solutions and humans, making those tools reliable, trustworthy, and production-ready.
Our Solution
Let's say you have an existing third-party AI solution integrated into your healthcare workflows, such as clinical decision support for doctors. It takes in patient data, medical documents, and doctor queries, and produces unverified LLM output that may contain hallucinations, false drug facts, or legal violations.
Our multi-agent system takes in this unverified LLM output, along with the original solution's LLM prompt and any supporting documents, and generates a comprehensive safety report that flags sentence-level hallucinations and legal violations. Outputs with high unreliability scores can then be regenerated or excluded entirely, so what doctors and patients ultimately see is only helpful AI information verified to be true, fully backed by trusted sources, and FDA compliant; a minimal sketch of the report and gating step follows the feature list below. In short, MedGuard AI:
- Detects Hallucinations: Identifies fabricated or unsupported information in AI outputs
- Checks Compliance: Validates against HIPAA, FDA, and medical regulations
- Provides Verification Reports: Generates detailed compliance and accuracy assessments
- Integrates Seamlessly: Works as middleware between AI systems and end users
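Here is that sketch: a minimal, hypothetical shape for the safety report and the regenerate-or-release gate. The names (`SentenceFlag`, `unreliability`, `gate`) and the 0.7 threshold are illustrative assumptions, not our exact schema.

```python
# Illustrative sketch only: these shapes are hypothetical, not our exact schema.
from dataclasses import dataclass, field

@dataclass
class SentenceFlag:
    sentence: str
    issue: str              # e.g. "hallucination", "uncited claim", "PHI leak"
    unreliability: float    # 0.0 (fully supported) to 1.0 (unsupported)
    sources: list[str] = field(default_factory=list)

@dataclass
class SafetyReport:
    flags: list[SentenceFlag]

    @property
    def max_unreliability(self) -> float:
        return max((f.unreliability for f in self.flags), default=0.0)

def gate(report: SafetyReport, threshold: float = 0.7) -> str:
    """Decide whether the upstream LLM output may be shown to users."""
    return "regenerate" if report.max_unreliability >= threshold else "release"
```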
How We Built It
Our system uses a modular architecture that connects a React + TypeScript frontend with a FastAPI backend and multiple Python-based AI verification agents powered by Gemini.
Each agent has a specialized role:
- HallucinationGuard verifies claims against trusted medical databases.
- CitationChecker ensures references and citations are accurate.
- ComplianceChecker validates HIPAA compliance and checks for PHI leaks.
All agents communicate asynchronously with the backend, and the results are displayed in a real-time dashboard interface.
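To make that architecture concrete, here is a minimal sketch of how a FastAPI backend could fan the three agents out concurrently with `asyncio.gather`. The endpoint path, request shape, and stubbed agent bodies are assumptions for illustration, not our production code.

```python
# Illustrative sketch only: endpoint path, request shape, and agent stubs
# are hypothetical, not MedGuard AI's production code.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class VerifyRequest(BaseModel):
    prompt: str           # the third-party solution's original LLM prompt
    output: str           # the unverified LLM output to check
    documents: list[str]  # any supporting documents

async def hallucination_guard(req: VerifyRequest) -> dict:
    # Stub: the real agent verifies claims against trusted medical databases.
    return {"agent": "HallucinationGuard", "flags": []}

async def citation_checker(req: VerifyRequest) -> dict:
    # Stub: the real agent checks references and citations for accuracy.
    return {"agent": "CitationChecker", "flags": []}

async def compliance_checker(req: VerifyRequest) -> dict:
    # Stub: the real agent validates HIPAA compliance and flags PHI leaks.
    return {"agent": "ComplianceChecker", "flags": []}

@app.post("/verify")
async def verify(req: VerifyRequest) -> dict:
    # Run all three agents concurrently and merge their findings into one report.
    results = await asyncio.gather(
        hallucination_guard(req), citation_checker(req), compliance_checker(req)
    )
    return {"report": results}
```

Running the agents concurrently keeps verification latency close to the slowest single check rather than the sum of all three.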
What We Learned
We learned how to actually integrate backend systems with frontend frameworks, how AI works under the hood, and the importance of implementing guardrails to keep models safe and reliable.
We also learned how to integrate multiple tools and APIs into one coherent system that verifies data, manages compliance, and prevents errors in real time.
Accomplishments We’re Proud Of
- We successfully integrated the frontend and backend after multiple iterations.
- We fully deployed our application and got it to work end-to-end.
- We overcame hallucination issues by implementing better verification and scoring.
- We collaborated closely as a team and never gave up, even when debugging took hours.
- We’re proud that the app is modular, functional, and has strong potential for future use in healthcare.
Challenges We Faced
We ran into multiple challenges:
- Deployment issues - connecting all components in production environments.
- Model hallucination - controlling AI to avoid false medical statements.
- Integration hurdles - syncing frontend and backend systems reliably.
Despite these issues, we pushed through and achieved a working prototype.
What's Next for MedGuard AI
We plan to expand MedGuard AI to a larger scale by:
- Implementing a robust database layer and Retrieval-Augmented Generation (RAG), sketched after this list.
- Integrating more medical databases and APIs for stronger verification.
- Adding advanced tools for continuous compliance tracking.
- Deploying it across healthcare systems to improve trust in AI-assisted decision-making.
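As a hypothetical sketch of the planned RAG step referenced in the first bullet: retrieve the most similar passages from curated medical sources, then score whether they support a claim. The toy in-memory store and string-similarity scoring below stand in for a real vector database and embedding model.

```python
# Hypothetical sketch of the planned RAG verification layer. The in-memory
# store and similarity scoring are stand-ins for a real vector database and
# embedding model; nothing here is an existing MedGuard AI API.
from difflib import SequenceMatcher

TRUSTED_PASSAGES = [
    "Ibuprofen is contraindicated in patients with active GI bleeding.",
    "Metformin is a first-line therapy for type 2 diabetes.",
]

def retrieve(claim: str, k: int = 2) -> list[str]:
    # Stand-in for embedding similarity search over curated medical sources.
    scored = sorted(TRUSTED_PASSAGES,
                    key=lambda p: SequenceMatcher(None, claim, p).ratio(),
                    reverse=True)
    return scored[:k]

def verify(claim: str) -> dict:
    evidence = retrieve(claim)
    supported = any(SequenceMatcher(None, claim, p).ratio() > 0.6 for p in evidence)
    return {"claim": claim, "supported": supported, "evidence": evidence}

print(verify("Metformin is a first-line treatment for type 2 diabetes."))
```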
MedGuard AI is only the beginning: we're building a future where AI in medicine is accurate, compliant, and truly trustworthy.
Future Business Model
Our revenue model centers on B2B SaaS subscriptions tailored to different healthcare stakeholders:
Tiered Pricing Structure:
- API-based pricing - charge per verification request for smaller AI health tech startups
- Enterprise licensing - flat-rate subscriptions for hospitals and large healthcare systems based on volume and features
- White-label solutions - custom deployment packages for EHR vendors and medical device companies
Target Customers:
- Healthcare AI startups needing compliance verification before going to market
- Hospital systems implementing AI clinical decision support tools
- Telehealth platforms requiring real-time verification of AI-generated medical advice
- EHR providers looking to integrate safe AI features into existing systems
Additional Revenue Streams:
- Compliance consulting services - helping organizations navigate FDA approval processes for AI medical devices
- Audit and certification - providing third-party verification reports for regulatory submissions
- Premium features - advanced analytics, custom medical database integrations, and dedicated support
By positioning ourselves as the essential compliance layer between AI innovation and patient safety, we tap into a market where trust and regulatory approval are non-negotiable—making MedGuard AI not just valuable, but necessary for any AI-powered healthcare solution.