Inspiration

Healthcare is dangerously fragmented. Doctors are burning out in the EHR, billing teams are isolated in coding software fighting insurance denials, and patients are left confused by medical jargon on a web portal. We realized that the most critical clinical decisions often happen in messy, fast-paced General Chats among the care team. We were inspired to build an "invisible safety net" that sits inside these chats: an AI that catches fatal medical errors, secures revenue, and translates complex care plans into plain English, all with zero extra clicks for the doctor.

What it does

Apex Multitasker acts as a Chief Medical Officer, Lead Medical Coder, and Patient Advocate all at once. Operating within the clinical chat, it actively monitors proposed treatment plans and performs a real-time, three-part Master Audit:

🛑 Clinical Safety Check: It cross-references patient vitals and lab values against deep clinical guidelines, catching lethal prescribing errors such as contraindicated drugs (e.g., Metformin ordered for a patient with a low eGFR).

💰 Revenue & Billing Check: It flags zero-weight ICD-10 codes and auto-generates the exact Prior Authorization verbiage required to prevent expensive insurance denials.

🫂 Patient-Facing Translation: It strips away medical jargon and uses the "Ask Me 3" health-literacy framework to instantly write a discharge summary for the patient at a 5th-grade reading level.
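The Clinical Safety Check boils down to matching a proposed order against rules keyed to the patient's labs. A minimal sketch of that idea, with an illustrative rule set and thresholds (these placeholders are ours, not the project's actual clinical guideline matrix):

```python
# Illustrative contraindication rules: (drug, lab key, predicate, warning text).
# Drug names and thresholds are placeholders for the real guideline matrix.
CONTRAINDICATION_RULES = [
    ("metformin", "egfr", lambda v: v < 30,
     "contraindicated when eGFR < 30 mL/min/1.73m2"),
    ("ibuprofen", "egfr", lambda v: v < 30,
     "NSAIDs should be avoided in advanced kidney disease"),
]

def safety_check(proposed_drugs, labs):
    """Return STOP warnings for any proposed drug that violates a rule
    given the patient's current lab values."""
    warnings = []
    for drug, lab, predicate, message in CONTRAINDICATION_RULES:
        value = labs.get(lab)
        if drug in proposed_drugs and value is not None and predicate(value):
            warnings.append(f"STOP [{drug}]: {message} (patient {lab}={value})")
    return warnings
```

For example, `safety_check({"metformin"}, {"egfr": 22})` produces a STOP warning, while the same order against a normal eGFR passes cleanly.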

How we built it

We built the agent using the Prompt Opinion platform, heavily leveraging their Conversational Interoperability (COIN) and A2A standards. Instead of relying on basic LLM knowledge, we engineered a massive, highly structured RAG (Retrieval-Augmented Generation) Knowledge Base. We created three foundational "Bibles" as text files:

A Comprehensive Clinical Guideline matrix.

An Advanced Revenue Cycle & Coding rulebook.

A Health Literacy & Empathy protocol.

We then designed a highly restrictive God-Mode System Prompt that forces the agent to cross-reference the live chat against these specific files and output a strict, three-part Markdown audit.
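The prompt-assembly step can be sketched as follows. The three section headings match the Master Audit above; the function and variable names, and the tag-style wrapper around each reference file, are our own illustrative choices, not Prompt Opinion APIs:

```python
AUDIT_SECTIONS = (
    "## 🛑 Clinical Safety Check",
    "## 💰 Revenue & Billing Check",
    "## 🫂 Patient-Facing Translation",
)

SYSTEM_RULES = """You are an apex clinical auditor, not an assistant.
Cross-reference the chat ONLY against the reference files provided.
Respond with EXACTLY these three Markdown sections, in this order:
{sections}
Never ask the user for missing data; audit what is present."""

def build_audit_prompt(chat_transcript, references):
    """references: mapping of bible name -> file contents, pre-loaded
    from the three knowledge-base text files."""
    header = SYSTEM_RULES.format(sections="\n".join(AUDIT_SECTIONS))
    refs = "\n".join(
        f"<reference name='{name}'>\n{text}\n</reference>"
        for name, text in references.items()
    )
    return f"{header}\n\n{refs}\n\n<chat>\n{chat_transcript}\n</chat>"
```

Pinning the output to an exact, ordered section list is what keeps the model emitting a strict three-part Markdown audit instead of free-form chat.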

Challenges we ran into

Initially, our agent suffered from "context blindness": it would act like a polite, generic chatbot, asking us for the patient's lab results instead of actively auditing the doctor's plan. We had to redesign our data-ingestion strategy to fuse the patient's EHR context (such as A1C and eGFR levels) directly with the physician's chat prompt. Once we dialed in the context window and enforced the RAG constraints, the agent stopped acting like an assistant and started acting like an apex auditor.
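The fix amounts to a context-fusion step that prepends an authoritative EHR snapshot to each physician turn, so the model has nothing left to ask for. A minimal sketch (field names are illustrative):

```python
def fuse_context(ehr_snapshot, physician_message):
    """Inline the patient's EHR values ahead of the chat turn so the
    model audits against real data instead of requesting it."""
    vitals = "\n".join(f"- {k}: {v}" for k, v in ehr_snapshot.items())
    return (
        "PATIENT CONTEXT (authoritative; do not request again):\n"
        f"{vitals}\n\n"
        f"PHYSICIAN: {physician_message}"
    )
```

For example, `fuse_context({"A1C": "9.2%", "eGFR": "24 mL/min/1.73m2"}, "Start metformin 500 mg BID")` yields a single turn in which the dangerous eGFR is already in front of the model.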

Accomplishments that we're proud of

We are incredibly proud of successfully balancing three completely different domains (clinical safety, financial compliance, and human empathy) within a single AI workflow. During testing, we successfully created "medical traps" (like prescribing an NSAID to a patient with stage 4 kidney disease) and watched our agent instantly catch the lethal error, fix the billing code, and write a warm note to the patient in under two seconds.

What we learned

We learned that the true value of AI in healthcare isn't just generating text; it is preventing bad actions. We had to dive deep into the real-world mechanics of medical coding, learning how much revenue hospitals lose simply because a doctor forgot to include a specific Prior Authorization keyword. We also learned that highly structured, locally hosted text files can turn a standard LLM into a genuine subject-matter expert.

What's next for Apex Multitasker

We want to evolve this from a single God-Mode agent into a true Multi-Agent Swarm. Instead of one agent doing everything, we envision dedicated sub-agents (a Pharmacy Agent, a Coding Agent, and an Empathy Agent) that debate a doctor's proposed plan via A2A protocols before presenting the final, perfected Master Audit. We also plan to integrate live, bidirectional FHIR API connections so the agent can pull directly from Epic or Cerner without needing manual context drops.
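As a sketch of what that FHIR read path could look like against a standard FHIR R4 server (the helper names are our own; a real Epic or Cerner connection would additionally require SMART-on-FHIR OAuth2 authorization and vendor-specific endpoints):

```python
def observation_query_url(base_url, patient_id, loinc_code):
    """Build a standard FHIR R4 search for a patient's most recent lab
    result, e.g. LOINC 33914-3 (eGFR) or 4548-4 (hemoglobin A1c)."""
    return (f"{base_url}/Observation?patient={patient_id}"
            f"&code=http://loinc.org|{loinc_code}&_sort=-date&_count=1")

def latest_observation_value(bundle):
    """Pull (value, unit) from the first Observation in a FHIR searchset
    Bundle (as parsed JSON), or None if nothing usable came back."""
    for entry in bundle.get("entry", []):
        qty = entry.get("resource", {}).get("valueQuantity")
        if qty is not None:
            return qty.get("value"), qty.get("unit")
    return None
```

With this in place, the EHR snapshot that is currently dropped into the chat by hand could instead be fetched live per patient before each audit.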

