Inspiration

My sister is a nurse. When I showed her what my AI tool was producing, she said it felt wrong. So I asked her what a real handover looks like. She described how every ward writes handovers differently: surgery, paediatrics, and adult wards each have their own habits; no standard, just paper and memory. That gap costs the US healthcare system an estimated $26 billion a year in preventable readmissions, and 25% of those readmissions happen at a different hospital entirely, because the information never travelled with the patient.

What it does

Care Transition MCP is a five-tool MCP server that pulls real patient FHIR data from the Prompt Opinion platform and generates a structured clinical handover: gaps flagged, medications reconciled, and readmission risk assessed. One prompt. Five tools fire. The receiving clinician gets a document they can actually act on.

How we built it

FastMCP 1.9.0 with SSE transport, deployed on Google Cloud Run. FHIR data comes from Po's workspace FHIR R4 endpoint. The LLM is Llama 3.3 70B, served via the Groq API. Five composable tools: each works independently, and any agent on the platform can use any one of them.

Challenges we ran into

The first time I ran the tool, it worked, but something felt off. The AI was treating "received higher education" with the same clinical urgency as an active medical emergency. Like someone bursting into the room to tell you the building is on fire and the coffee machine needs descaling in the same breath. I fixed this by reading FHIR category codes and separating clinical conditions from social history before the LLM ever sees the data. Social factors moved to a dedicated discharge barriers section, so the clinical picture leads.
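The separation is a pre-filter at the data layer, not a prompt instruction. A minimal sketch of the idea, assuming FHIR Condition resources as plain dicts (the function name and the exact category codes here are illustrative; the real filter matches whatever codes the workspace endpoint emits):

```python
# Sketch: split FHIR Condition resources into clinical vs. social-history
# buckets before any of them reach the LLM prompt. The category codes
# below ("problem-list-item", "encounter-diagnosis") are common FHIR R4
# values used here as an assumption.

CLINICAL_CATEGORIES = {"problem-list-item", "encounter-diagnosis"}

def split_conditions(conditions):
    """Return (clinical, social) lists of FHIR Condition dicts."""
    clinical, social = [], []
    for cond in conditions:
        codes = {
            coding.get("code")
            for cat in cond.get("category", [])
            for coding in cat.get("coding", [])
        }
        if codes & CLINICAL_CATEGORIES:
            clinical.append(cond)
        else:
            # Everything else (social history, health concerns) is routed
            # to the dedicated discharge-barriers section.
            social.append(cond)
    return clinical, social
```

Because the split happens before prompt assembly, the model never has the chance to rank a social factor alongside an active diagnosis.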

BaseHTTPMiddleware is incompatible with SSE streaming: Starlette's convenience middleware wants to buffer a complete response body, but an SSE response never completes; it streams indefinitely. Fixed with raw ASGI middleware.

FHIR headers arrive on the GET connection, not the POST tool call. Po sends patient context headers on the initial SSE handshake, but tool calls run as separate async POST requests. Fixed with Python ContextVars to propagate headers through the async chain.
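The ContextVar pattern looks roughly like this (variable and header names are illustrative): the capture side runs where the headers arrive, and any coroutine later in that async context can read them without the headers being threaded through every function signature.

```python
import contextvars

# One process-wide ContextVar; each async context sees its own value.
fhir_headers: contextvars.ContextVar[dict] = contextvars.ContextVar(
    "fhir_headers", default={}
)

def capture_headers(headers: dict) -> None:
    """Called where the patient-context headers arrive (the SSE handshake)."""
    fhir_headers.set(headers)

def current_patient_id():
    """Called from inside a tool; no header plumbing in its signature.
    The 'x-patient-id' header name is an assumption for illustration."""
    return fhir_headers.get().get("x-patient-id")
```

One subtlety: a ContextVar value is only visible to tasks spawned from the context that set it, which is why the capture has to happen upstream of wherever the tool-call coroutines are scheduled.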

Po reads capabilities.extensions, not capabilities.experimental. FastMCP outputs the wrong field name. Fixed by injecting the correct field through Pydantic's pydantic_extra.
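Conceptually the fix is a one-field rename in the serialized capabilities payload. A toy sketch at the dict level (the project's actual fix happens inside FastMCP's Pydantic model, but the observable effect is the same):

```python
def fix_capabilities(payload: dict) -> dict:
    """Mirror the data FastMCP emits under 'experimental' into the
    'extensions' key that Po actually reads. Pure illustration of the
    field rename; not the project's Pydantic-level implementation."""
    caps = dict(payload.get("capabilities", {}))
    if "experimental" in caps and "extensions" not in caps:
        caps["extensions"] = caps["experimental"]
    return {**payload, "capabilities": caps}
```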

Accomplishments that we're proud of

Getting the full FHIR context pipeline working end-to-end from Po injecting patient headers through SSE, captured by raw ASGI middleware, stored in ContextVars, and read inside async tool calls. That chain took the most debugging, but it's what makes the whole thing work cleanly.

Solving the SDoH Trap. Teaching the LLM to treat clinical conditions and social history as fundamentally different categories, not by filtering after the fact, but by separating them at the data layer before the model ever sees them.

Building five tools that each work independently. Any agent on the Po platform can use any one of them without needing the others. That composability was intentional from the start.

What we learned

The hardest part of healthcare AI isn't the model. It's teaching it to think like a clinician, to know that a missing medication on an ICU patient is a safety flag, not a clean record. It doesn't have to be perfect to save billions of dollars. It just has to be significantly more reliable than a hurried, smudged, handwritten note that gets lost in a hospital hallway.

What's next for Care Transition MCP

Real clinical data validation: testing the tool against actual hospital FHIR endpoints, beyond synthetic data, to verify that the category filtering holds up in production environments.

Recency weighting: automatically deprioritise lab results and vitals older than 72 hours in ICU transitions, so the LLM never surfaces stale data as current clinical status.
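Since this feature is still planned, here is only a sketch of the 72-hour cutoff, assuming FHIR Observation dicts with an ISO 8601 effectiveDateTime field:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=72)  # planned ICU-transition cutoff

def split_by_recency(observations, now=None):
    """Partition FHIR Observation dicts into (current, stale) by a
    72-hour cutoff on effectiveDateTime. Stale entries would be
    deprioritised, never presented as current clinical status."""
    now = now or datetime.now(timezone.utc)
    current, stale = [], []
    for obs in observations:
        ts = obs.get("effectiveDateTime")
        when = datetime.fromisoformat(ts) if ts else None
        if when is not None and now - when <= STALE_AFTER:
            current.append(obs)
        else:
            stale.append(obs)  # includes undated observations, to be safe
    return current, stale
```

Routing undated observations to the stale bucket is a deliberately conservative default for a safety-critical context.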

Expanding the tool set: an ICU-specific handover mode that explicitly tracks vasopressor weaning, ventilator status, and sedation holds. The current tools handle general transitions well; critical care transitions need their own logic.

Built With

  • docker
  • fastmcp
  • fhir-r4
  • google-cloud-run
  • groq
  • llama-3.3-70b
  • prompt-opinion-sharp-extension
  • python