ALICE — Automated Lifecycle for Insurance & Clinical Evidence

Inspiration

We built ALICE around a problem that feels very real in healthcare: prior authorization is rarely blocked by a lack of clinical intent, but by fragmentation. The evidence exists, the medication history exists, the policy criteria exist — they just live in different places, in different formats, and are difficult to assemble into something decision-ready.

That made this hackathon especially compelling. The challenge was not just to build another healthcare demo, but to show how interoperable agents can work together using standards like MCP, A2A, and FHIR. That lined up almost perfectly with the problem we wanted to solve: turning scattered insurance and clinical evidence into a structured, explainable prior authorization workflow.

ALICE came out of that idea. We wanted to build something that felt less like a static form filler and more like a coordinated system: one agent reconciling medications, another assembling evidence, another composing a standards-based packet, and another evaluating policy logic and supporting appeals. There is a lot of technical interest in healthcare AI right now, but what inspired us here was the “last mile” problem — getting from raw information to an actionable deliverable. That is also central to the hackathon itself.

What It Does

ALICE is an interoperable, agent-based prior authorization workflow for specialty medication use cases.

At a high level, ALICE takes synthetic patient data and runs it through a multi-step pipeline (sketched in code after the list):

  1. reconcile medications and patient context,
  2. assemble the clinical evidence needed for review,
  3. compile that evidence into a FHIR-based prior authorization packet,
  4. evaluate the request against payer policy rules, and
  5. generate an audit trail and, when needed, an appeal workflow.
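To make the flow concrete, here is a minimal TypeScript sketch of those five stages. Every name and shape below is an illustrative stand-in rather than the exact code in our repo, and the stubs return synthetic placeholders so the file runs end to end:

```typescript
// Illustrative stage types -- stand-ins, not our repo's exact definitions.
interface PatientContext { patientId: string; medications: string[] }
interface EvidenceBundle { context: PatientContext; documents: string[] }
interface PriorAuthPacket { bundle: EvidenceBundle; fhirResources: unknown[] }
interface Decision { approved: boolean; rationale: string[] }

async function reconcileMedications(patientId: string): Promise<PatientContext> {
  return { patientId, medications: ["adalimumab 40 mg"] }; // synthetic data only
}
async function assembleEvidence(context: PatientContext): Promise<EvidenceBundle> {
  return { context, documents: ["clinical-note-001"] };
}
async function composePacket(bundle: EvidenceBundle): Promise<PriorAuthPacket> {
  return { bundle, fhirResources: [] }; // would carry FHIR resources in practice
}
async function evaluatePolicy(packet: PriorAuthPacket): Promise<Decision> {
  return { approved: true, rationale: ["step-therapy criteria met"] };
}
async function recordAuditTrail(decision: Decision): Promise<void> {
  console.log("audit:", JSON.stringify(decision));
}

// Steps 1-5 from the list above, each stage consuming the previous output.
async function runPriorAuthPipeline(patientId: string): Promise<Decision> {
  const context = await reconcileMedications(patientId);
  const evidence = await assembleEvidence(context);
  const packet = await composePacket(evidence);
  const decision = await evaluatePolicy(packet);
  await recordAuditTrail(decision);
  return decision;
}
```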

In our MVP, ALICE focuses on workflows like medication reconciliation, evidence assembly, prior authorization packet generation, payer decisioning, and appeal support. The output is not just a yes/no result. It is a structured, traceable decision process with artifacts that can be reviewed, explained, and extended.

A major part of the design is that this is not built around real patient data. The demo uses synthetic data only, which allowed us to focus on interoperability, explainability, and workflow design without introducing PHI risk.

How We Built It

We built ALICE as a modular TypeScript application with a clear separation between agent logic, policy logic, FHIR resource handling, and the demo interface.

The system is structured around a few key ideas:

Agent orchestration

ALICE uses a multi-agent pattern where specialized agents handle distinct responsibilities. In the repo, those responsibilities include:

  • medication reconciliation,
  • evidence assembly,
  • packet composition,
  • decisioning,
  • audit logging, and
  • appeal generation through ARIA.

Instead of one monolithic workflow, we treated the problem as a sequence of interoperable handoffs. That made it much easier to reason about what each step was responsible for and how evidence moved through the system.
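As a rough illustration of that handoff pattern, imagine each agent implementing a small shared contract while the orchestrator folds the workflow state through the sequence. The interface and names here are assumptions made for the sketch, not our exact code:

```typescript
// Illustrative agent contract: each agent takes the shared workflow state,
// does one job, and hands the state on to the next agent.
interface WorkflowState {
  artifacts: Record<string, unknown>;
  handoffs: string[]; // which agents touched the state, in order
}

interface Agent {
  name: string;
  run(state: WorkflowState): Promise<WorkflowState>;
}

// The orchestrator is just a fold over the agent sequence.
async function orchestrate(agents: Agent[], initial: WorkflowState): Promise<WorkflowState> {
  let state = initial;
  for (const agent of agents) {
    state = await agent.run(state);
    state.handoffs.push(agent.name);
  }
  return state;
}
```

Because the loop records each handoff, the orchestration order itself becomes auditable data rather than implicit control flow.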

Standards-first data model

FHIR is the backbone of the workflow. We modeled patients, conditions, observations, medication history, coverage, document references, and generated packet outputs as FHIR resources or FHIR-compatible artifacts. That matters because the hackathon specifically emphasizes healthcare-ready interoperability and FHIR-based context propagation through SHARP-style workflows.
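As one small example of what “FHIR as backbone” means in practice, here is a hand-trimmed TypeScript shape for a FHIR R4 MedicationRequest with a synthetic instance. A real project would lean on generated FHIR typings; this sketch only shows the handful of fields a packet relies on, and the coding values are illustrative:

```typescript
// Trimmed FHIR R4 MedicationRequest shape, typed by hand for illustration.
interface MedicationRequest {
  resourceType: "MedicationRequest";
  status: "active" | "completed" | "stopped";
  intent: "order" | "proposal";
  medicationCodeableConcept: {
    coding: { system: string; code: string; display?: string }[];
  };
  subject: { reference: string }; // e.g. "Patient/synthetic-001"
}

// Synthetic example only -- no real patient data anywhere in the demo.
const request: MedicationRequest = {
  resourceType: "MedicationRequest",
  status: "active",
  intent: "order",
  medicationCodeableConcept: {
    coding: [{
      system: "http://www.nlm.nih.gov/research/umls/rxnorm",
      code: "327361", // illustrative RxNorm coding
      display: "adalimumab",
    }],
  },
  subject: { reference: "Patient/synthetic-001" },
};
```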

We also aligned the project to the challenge’s focus on MCP and A2A. The hackathon allows teams to build either MCP-powered “superpowers” or A2A-powered agents within the Prompt Opinion ecosystem, and ALICE is designed to demonstrate that style of interoperability directly.

Policy compilation

A core design choice was to treat insurance criteria almost like a compiler problem. Raw clinical and coverage inputs are parsed, normalized, and transformed into a structured decision context. Then policy rules are evaluated against that context to determine whether requirements are met.

That framing helped us move from “showing data” to “compiling evidence.” It also made the system easier to explain: ALICE is not trying to replace clinical judgment. It is trying to assemble the right evidence, in the right structure, so a prior authorization decision can be made more consistently.
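A hedged sketch of that framing: policy criteria become predicates over a normalized decision context, and evaluation returns a per-criterion trace instead of a bare verdict. The criteria, codes, and names below are invented for illustration, not drawn from a real payer policy:

```typescript
// Normalized decision context compiled from raw clinical and coverage inputs.
interface DecisionContext {
  diagnosisCodes: string[];
  triedMedications: string[];
  labValues: Record<string, number>;
}

// Each policy criterion is a named predicate over the context.
interface Criterion {
  id: string;
  description: string;
  isMet(ctx: DecisionContext): boolean;
}

const stepTherapyPolicy: Criterion[] = [
  {
    id: "dx-confirmed",
    description: "Qualifying diagnosis on record",
    isMet: (ctx) => ctx.diagnosisCodes.includes("M05.79"), // illustrative ICD-10 code
  },
  {
    id: "step-therapy",
    description: "Failed at least one first-line agent",
    isMet: (ctx) => ctx.triedMedications.includes("methotrexate"),
  },
];

// Evaluation produces a per-criterion trace rather than a bare yes/no.
function evaluate(policy: Criterion[], ctx: DecisionContext) {
  const results = policy.map((c) => ({ id: c.id, description: c.description, met: c.isMet(ctx) }));
  return { approved: results.every((r) => r.met), results };
}
```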

Explainability and auditability

One of the strongest parts of the project is that every major step leaves a trail. ALICE records audit events, agent handoffs, data source attribution, and outputs that can be surfaced back to the user. That was important to us because in healthcare, automation without traceability is hard to trust.

We wanted the workflow to answer not only “what was the decision?” but also “why did the system reach it?” and “what evidence was used?”
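A minimal sketch of the kind of audit record we mean, with hypothetical field and function names: every agent handoff and data access appends a record, so both questions can be answered after the fact.

```typescript
// Illustrative audit-event shape; field names are assumptions for the sketch.
interface AuditEvent {
  timestamp: string;                 // ISO 8601
  agent: string;                     // which agent produced the event
  action: string;                    // e.g. "evidence-attached", "criterion-evaluated"
  source?: string;                   // data source attribution
  detail: Record<string, unknown>;
}

const trail: AuditEvent[] = [];

function record(agent: string, action: string, detail: Record<string, unknown>, source?: string) {
  trail.push({ timestamp: new Date().toISOString(), agent, action, detail, source });
}

// Example: the decision agent logs each criterion it evaluated.
record("decision-agent", "criterion-evaluated", { id: "step-therapy", met: true }, "medication-history");
```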

Challenges We Faced

Working with healthcare complexity without over-faking it

One challenge was building something realistic enough to feel like healthcare software, while still keeping it hackathon-sized. Prior authorization touches policy logic, medication history, lab thresholds, documentation quality, appeals, and compliance expectations. It would have been easy to oversimplify the problem into a toy demo.

We tried to avoid that by making the workflow structured and multi-step, even in MVP form.

Balancing standards with speed

The hackathon is explicitly about interoperable healthcare agents using MCP, A2A, and FHIR through the Prompt Opinion platform. That is exciting, but it also means there is architectural pressure: you are not just building a clever AI feature, you are building something that should fit into a standards-based ecosystem.

The challenge for us was balancing speed with fidelity. We wanted ALICE to feel like an actual interoperable system, not just a UI mockup with AI text generation attached.

Making the workflow explainable

A denial or approval by itself is not very useful if nobody can tell how it happened. One of the hardest parts was keeping the pipeline understandable as it grew. Every new feature — note extraction, evidence packaging, decisioning, appeals — increased the need for auditability.

That pushed us to invest in structured logs, explicit agent boundaries, and human-readable reasoning rather than just returning final outputs.

Keeping the demo safe and practical

Because this is healthcare-adjacent, we were careful not to frame the project around real patient data. ALICE is a synthetic-data MVP. That constraint actually helped the project: it kept our focus on workflow interoperability, policy reasoning, and standards-based packaging instead of data access.

What We Learned

This project taught us that interoperability is not just a standards checkbox. It is a design discipline.

We learned that when a workflow is broken into smaller agents with clear responsibilities, the system becomes easier to debug, explain, and extend. We also learned that FHIR becomes much more useful when it is not just being stored, but actively used as the shared language between components.

More broadly, we learned that healthcare AI becomes much more credible when it produces structured artifacts rather than just conversational output. A good answer from a model is helpful. A standards-based packet, a traceable decision path, and an audit trail are much closer to something a real healthcare system could trust.

Why This Fits the Hackathon

ALICE was built to match the spirit of this challenge closely.

The Devpost brief asks participants to build healthcare AI solutions that integrate with the Prompt Opinion platform and demonstrate interoperability through MCP, A2A, and FHIR, with an emphasis on real workflow value, feasibility, and healthcare context propagation.

That is exactly the space ALICE is trying to occupy:

  • a healthcare-specific agent workflow,
  • structured around interoperable components,
  • grounded in FHIR artifacts,
  • designed for explainability,
  • and aimed at a concrete pain point: prior authorization.

Rather than treating the hackathon as a generic “AI in healthcare” exercise, we used it to explore how specialized agents can coordinate around one high-friction administrative workflow and produce something more useful than a single chatbot response.

Future Work

If we continued developing ALICE, the next steps would be:

  • deeper integration with Prompt Opinion publishing and invocation flows,
  • broader policy support across more medication classes,
  • stronger document ingestion from clinical notes,
  • more robust appeal and rebuttal generation,
  • and tighter compliance checks around payer and regulatory requirements.

We would also want to improve the transition from demo logic to production-grade policy management, especially around versioning, provenance, and clinical review.

Closing

ALICE started with a simple frustration: too much of prior authorization is administrative assembly work rather than clinical reasoning. We wanted to build something that reduces that friction by treating the process as a coordinated, standards-based pipeline.

For us, the heart of the project is this: the goal is not to automate healthcare decisions blindly. The goal is to assemble the right evidence, preserve the context, and make the workflow easier to understand, justify, and act on.

That is why ALICE felt like a good fit for this hackathon. It is not just an AI demo. It is our attempt to show what interoperable healthcare agents can look like when they are built around a real operational problem.
