Orama: An AI-Powered EHR Assistant for Predictive Clinical Insights
1. Introduction
Healthcare providers are inundated with patient data, from unstructured clinical notes to lengthy problem lists. This overwhelms clinicians—leading to potential oversight of important details, longer documentation times, and increased burnout. Orama tackles this challenge by offering an AI assistant that works alongside existing EHR workflows to analyze patient notes, highlight key information, and suggest relevant care considerations. Our solution aims to reduce errors, save time, and enhance clinical decision-making by harnessing large language model (LLM) capabilities directly within a web-based EHR environment.
Key Features at a Glance
- Summaries & Alerts: Dynamically summarizes a patient’s chart and flags potential red flags (e.g., overdue preventive care, suspicious lab results).
- Consultation Prep: Offers focused questions and reminders for upcoming visits to ensure more efficient and thorough consultations.
- Treatment Predictions: Uses an LLM-driven prompt to discuss likely outcomes or adherence levels, empowering clinicians to tailor care plans.
- Medication Adherence & Follow-Up: Consolidates medication history, compliance status, and next steps, all in one convenient panel.
2. Technical Approach
2.1 System Architecture
EHR Integration (SMART on FHIR)
- We leverage a SMART on FHIR login flow, so clinicians can securely launch Orama from their existing EHR environment.
- For the hackathon, we use mock data in a MeldRx workspace (see https://app.meldrx.com) to simulate a real EHR environment.
- Our code retrieves Patient, DocumentReference, and other FHIR R4 resources, then processes them for AI analysis.
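Once DocumentReference resources come back from the FHIR endpoint, the note bodies still have to be pulled out of the Bundle before analysis. A minimal sketch of that step is below; the trimmed interfaces and the `extractNoteTexts` helper are illustrative, not our production code, and only cover the fields read here.

```typescript
// Sketch: pull note text out of a FHIR R4 searchset Bundle of
// DocumentReference resources. Types are trimmed to the fields we read.
interface Attachment {
  contentType?: string;
  data?: string; // base64-encoded inline content
}

interface DocumentReference {
  resourceType: "DocumentReference";
  id?: string;
  content: { attachment: Attachment }[];
}

interface Bundle {
  resourceType: "Bundle";
  entry?: { resource: DocumentReference }[];
}

// Decode base64 note bodies; skip attachments without inline text data.
export function extractNoteTexts(bundle: Bundle): string[] {
  const notes: string[] = [];
  for (const entry of bundle.entry ?? []) {
    for (const content of entry.resource.content) {
      const { contentType, data } = content.attachment;
      if (data && contentType?.startsWith("text/")) {
        notes.push(Buffer.from(data, "base64").toString("utf-8"));
      }
    }
  }
  return notes;
}
```

In the real app the Bundle comes from the SMART on FHIR client's authorized request; attachments referenced by `url` rather than inline `data` would need a follow-up fetch.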
AI Analysis via LLM
- Orama communicates with a Large Language Model (currently Claude 3.5 Sonnet via OpenRouter) using carefully designed prompts.
- No fine-tuning is done; we rely on prompt engineering to shape the LLM’s output into structured JSON that’s displayed in the UI.
- Our approach is model-agnostic—institutions can swap in a specialized or proprietary healthcare model later on.
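Because we rely on prompt engineering rather than fine-tuning, the UI has to defend against replies that are not clean JSON. A sketch of that defensive parse is below; the `Analysis` shape and the `parseLlmAnalysis` name are illustrative assumptions, not our exact production schema.

```typescript
// Sketch: coerce an LLM reply into structured JSON. Models sometimes wrap
// JSON in a markdown fence or add prose, so we strip fences and take the
// outermost {...} block before parsing.
export interface Analysis {
  conditions: string[];
  followUp: string[];
  medicationAdherence: string;
}

export function parseLlmAnalysis(raw: string): Analysis | null {
  // Remove surrounding markdown code fences, if present.
  const unfenced = raw.replace(/```(?:json)?/g, "").trim();
  // Fall back to the outermost object literal if extra prose remains.
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    const parsed = JSON.parse(unfenced.slice(start, end + 1));
    // Minimal shape check before the UI trusts the result.
    if (!Array.isArray(parsed.conditions)) return null;
    return parsed as Analysis;
  } catch {
    return null; // surfaced as "analysis unavailable" in the UI
  }
}
```

Returning `null` instead of throwing lets the front end degrade gracefully to a plain "analysis unavailable" state.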
Web Application (Future Browser Extension)
- The user interface is built in SvelteKit with Tailwind CSS for quick iteration and a clean layout.
- Currently, it operates as a web-based tool launched via SMART on FHIR so clinicians can access AI summaries within their normal EHR workflow.
- Future Option: We anticipate packaging Orama as a browser extension if desired, to overlay on other web-based EHRs with minimal IT overhead.
2.2 Data Flow
- User Logs In: Through SMART on FHIR authorization, we get a patient context (e.g., patientId).
- Data Retrieval: The app requests relevant documents or resources (DocumentReference) from the FHIR endpoint.
- Prompt Generation: We bundle patient notes or summary data, then send them to the LLM with a specialized “system prompt” guiding the structure (e.g., conditions, follow-up care, medication adherence).
- AI Response: The LLM returns a JSON-based analysis or consultation prep plan.
- UI Display: Our front-end components parse and visualize the AI output with badges, color-coded alerts, and collapsible sections.
- Optional Interaction: The clinician can submit chat-like queries or ask follow-up questions (e.g., “What were the last two cardiology recommendations?”).
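The prompt-generation step above can be sketched as a small message builder. The system prompt wording, the 12,000-character budget, and the `buildMessages` name are assumptions for illustration, not our exact production values.

```typescript
// Sketch: bundle patient notes into an OpenRouter-style chat payload.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Illustrative system prompt pinning the output to a JSON schema.
const SYSTEM_PROMPT =
  "You are a clinical summarization assistant. Respond ONLY with JSON " +
  'matching {"conditions": string[], "followUp": string[], "medicationAdherence": string}.';

export function buildMessages(notes: string[], maxChars = 12000): ChatMessage[] {
  // Concatenate notes and truncate to stay inside an assumed
  // context-window budget (12k characters here).
  const joined = notes.join("\n---\n").slice(0, maxChars);
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: `Patient notes:\n${joined}` },
  ];
}
```

The resulting array maps directly onto the `messages` field of an OpenAI-compatible chat completions request, which is the format OpenRouter accepts.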
3. B11 Criterion and FAVES Compliance
3.1 What is B11 (Decision Support Interventions)?
- The ONC 170.315(b)(11) DSI criterion mandates that EHR systems offering predictive or AI-based decision support must provide transparency about their sources, rationale, and risk management. See the ONC Final Rule for more information.
3.2 Our Alignment with B11
Source Attributes
- Developer & Funding: We openly disclose that the AI model is Anthropic’s Claude 3.5 Sonnet, accessed via OpenRouter. It can be swapped for specialized medical LLMs as needed.
- Intended Purpose: The tool summarizes clinical documents, flags important details, and assists with follow-up planning.
- Training Data & Limitations: Currently, no direct fine-tuning on private PHI. The underlying model was trained on general medical text up to a certain year and may not reflect the latest guidelines or local protocols.
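One way to make these source attributes machine-readable is a typed object the UI renders in an "About this AI" panel. The shape below is our own convention for illustration, not an ONC-mandated schema; field values mirror the disclosures above.

```typescript
// Sketch: B11 "source attributes" as a typed constant the UI can render.
export interface SourceAttributes {
  developer: string;
  model: string;
  accessRoute: string;
  intendedPurpose: string;
  knownLimitations: string[];
}

export const ORAMA_SOURCE_ATTRIBUTES: SourceAttributes = {
  developer: "Orama (hackathon team)",
  model: "Claude 3.5 Sonnet (Anthropic)",
  accessRoute: "OpenRouter API",
  intendedPurpose:
    "Summarize clinical documents, flag important details, assist follow-up planning",
  knownLimitations: [
    "No fine-tuning on private PHI",
    "Knowledge cutoff may lag current guidelines",
    "Not a substitute for clinical judgment",
  ],
};
```

Keeping the disclosure in code rather than static copy means it can be versioned alongside model or prompt changes.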
FAVES (Fair, Appropriate, Valid, Effective, Safe)
- Fair: We aim to reduce bias by only summarizing the text; we do not add demographic-based predictions. Future versions can incorporate bias testing with diverse patient data.
- Appropriate: Orama is a co-pilot—it does not replace clinical judgment. Warnings appear whenever the model’s confidence is low or the data is incomplete.
- Valid: We only claim the model’s reliability within known test sets. We encourage real-world validation or pilot testing to confirm accuracy.
- Effective: The solution references documented improvements in time savings and error avoidance (see Section 4 for details).
- Safe: The AI does not independently modify EHR data. Clinicians remain in control, and we log any suggestions for review.
Risk Management
- We run the LLM in a locked-down environment, ensuring minimal risk of PHI leakage.
- The assistant provides references to the source documents it used so the clinician can verify.
- Any “red-flag” alerts (e.g., critical labs) are accompanied by disclaimers that a human must confirm.
4. Potential Impact & Creativity
4.1 Reducing Errors & Improving Safety
- Information Overload: Studies show that EHR overload contributes to missed labs and delayed diagnoses (J Patient Safety, 2022).
- Real-World Case: An OB/GYN missed a positive BRCA result due to EHR clutter, resulting in a delayed cancer diagnosis (MedPro Case Study). Our assistant flags important findings so they won’t be accidentally scrolled past.
4.2 Saving Clinician Time
- Documentation Burden: Some physicians spend up to 90% of their time in the EHR on complex days (Surgeon General’s Address, 2022).
- AI Scribing: Tools like Nuance DAX have shown a 30% reduction in documentation time (Medical Economics, 2021). We follow a similar principle, auto-summarizing and drafting notes so clinicians can finalize quickly.
4.3 Creative Differentiation
- Vendor-Agnostic: Many AI solutions are vendor-locked (e.g., Epic-specific). We’ve designed a flexible web-based approach that integrates with any EHR using SMART on FHIR, democratizing AI for smaller practices or non-major EHR systems. In the future, we may package Orama as a standalone browser extension.
- Conversational Summaries: Instead of static pop-up alerts, we provide a conversational interface that merges chart data, guidelines, and user queries in real time.
5. Scalability & Future Roadmap
5.1 Web-Based Distribution
- Future Browser Extension: While Orama currently runs as a standard SMART on FHIR web app, we plan to offer an optional browser extension format. This would let providers “plug & play” with minimal IT overhead, especially for smaller or non-major EHR systems.
5.2 Adaptability
- Model Swapping: Institutions can host their own large language model (fine-tuned on local data) behind a secure API. Orama’s architecture seamlessly supports that swap.
- Further Integration: For deeper clinical decision support, we can tie into medication databases or domain-specific guidelines. Our next iteration may embed official guidelines for comorbidities (e.g., CKD + diabetes management) to give more precise recommendations.
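The model-swap point above hinges on the app depending on one narrow interface rather than a specific vendor SDK. A sketch of that seam follows; `LlmProvider`, `EchoProvider`, and `analyze` are illustrative names, and a real provider would POST to an OpenRouter- or OpenAI-compatible chat completions endpoint.

```typescript
// Sketch: the seam that makes the architecture model-agnostic.
export interface LlmProvider {
  complete(system: string, user: string): Promise<string>;
}

// Stand-in provider for tests/demos; swap in an OpenRouter-backed or
// institution-hosted implementation without touching callers.
export class EchoProvider implements LlmProvider {
  async complete(system: string, user: string): Promise<string> {
    return JSON.stringify({ echoedSystem: system, echoedUser: user });
  }
}

export async function analyze(provider: LlmProvider, notes: string): Promise<string> {
  return provider.complete("Summarize the chart as JSON.", notes);
}
```

Because callers only see `LlmProvider`, pointing Orama at a locally hosted, fine-tuned model is a configuration change rather than a rewrite.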
5.3 Competitive Landscape
- Epic’s Partnerships: e.g., with Nuance and Suki; typically available only to Epic customers.
- Regard: More specialized, EHR-integrated approach.
- Our Unique Edge: Lightweight, universal overlay with LLM-based analysis. Quick adoption for smaller clinics that might never license a big vendor’s AI add-on.
6. Documentation Thoroughness & Transparency
We have prioritized clear, thorough documentation that addresses ONC guidelines for transparent AI. Here’s how:
- Model Disclosures: We disclaim that no patient data is permanently stored in the AI service. All data is ephemeral and used solely for generating the requested summary.
- Limitations:
- The AI may “hallucinate” or infer incorrectly if the original notes are ambiguous.
- Model knowledge cutoff (the LLM might not know brand-new clinical guidelines).
- Not intended to replace professional medical judgment.
- User Control: Physicians can dismiss or override any suggestion. The extension is non-blocking and aims to reduce alert fatigue.
- Versioning & Updates: We plan to version each iteration of the assistant, so changes in model or logic are traceable. This meets “source attributes” transparency (part of B11).
7. UI/UX for Physicians
- High-Level Summaries: Quick bullet points on patient conditions, recent events, overdue follow-ups.
- Consultation Prep: A question bank (e.g., “Medication Management,” “Pain Assessment,” “Preventive Care”) with short rationales.
- Non-Intrusive Design: Minimizes pop-ups. Instead, places a neat sidebar for the user to consult as needed.
- Highlighting & Source Link: The system can highlight which part of the note triggered a given summary or recommendation.
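Linking a recommendation back to its source text amounts to locating the quoted snippet inside the original note. A minimal sketch is below; `findSnippet` is an illustrative helper, and the returned offsets refer to the whitespace-normalized note (a production highlighter would map them back to the raw text).

```typescript
// Sketch: locate the note snippet that triggered a summary item so the
// UI can highlight it. Whitespace is normalized because LLM quotes often
// reflow line breaks; matching is case-insensitive.
export function findSnippet(
  note: string,
  quoted: string
): { start: number; end: number } | null {
  const norm = (s: string) => s.replace(/\s+/g, " ").trim().toLowerCase();
  const haystack = norm(note);
  const needle = norm(quoted);
  if (needle.length === 0) return null;
  const at = haystack.indexOf(needle);
  if (at === -1) return null;
  // Offsets are within the normalized note text.
  return { start: at, end: at + needle.length };
}
```

A `null` result tells the UI to show the recommendation without a highlight rather than guess at a span.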
8. Conclusion
Orama exemplifies a modern, B11-compliant approach to AI-driven clinical decision support: an agile, user-friendly layer that can significantly reduce EHR burden and augment physician awareness. By surfacing critical details, suggesting relevant questions, and summarizing a patient’s complex chart, Orama fosters both time savings and improved patient safety.
- Uniquely Vendor-Agnostic: Currently deployable as a web-based SMART on FHIR app, suitable for practices large and small. In future releases, Orama could also be packaged as a browser extension for even easier adoption.
- Thorough Documentation & Transparency: Meets ONC’s FAVES expectations by explaining its data sources, disclaimers, and risk management.
- High Potential Impact: Even saving 2–3 minutes per patient encounter scales to thousands of hours across a busy clinic, reducing clinician burnout and helping avoid missed details that can harm patients.
With these strengths and a plan to integrate feedback, Orama is poised to be a competitive hackathon entry that could readily evolve into a real-world clinical tool.
We look forward to hearing feedback from clinicians, patients, and hackathon judges. If you have questions on any part—technical, regulatory, or user experience—please let us know. Together, we can continue refining Orama into a robust clinical co-pilot that truly makes healthcare smarter.
Built With
- fhir
- oauth
- openrouter
- skeletonui
- sveltekit
- tailwind
- typescript

