Inspiration

Looking at current medical learning tools built on flashcards, generic quizzes, and AI chatbots, we noticed a gap: healthcare students can recite the drug classes for Type 2 Diabetes, but when faced with a real patient, they struggle to transfer that knowledge to a clinical setting.

Knowing that:

  • Metformin reduces hepatic gluconeogenesis
  • SGLT2 inhibitors reduce heart failure hospitalizations
  • GLP-1 receptor agonists promote weight loss

is very different from deciding:

“What is the safest and most effective medication for this patient?”

Most existing tools focus on recognition (multiple choice, flashcards) rather than generation under constraints. But prescribing is not about selecting the right answer from four options — it’s about weighing tradeoffs.

We wanted to build something that teaches learners to think like clinicians, not test-takers.

What it does

DiaLogic is a choice-driven clinical reasoning game that trains healthcare learners to select medications for patients with Type 2 Diabetes under realistic clinical constraints.

Instead of testing recognition with multiple-choice questions or providing answers through a chatbot, DiaLogic places learners in the role of a clinician managing patient cases. In each round, players are given a realistic patient profile (including A1C, kidney function, cardiovascular disease, heart failure risk, weight considerations, and hypoglycemia risk) and must:

  • Choose one medication to start
  • Choose one medication to avoid
  • Justify their clinical reasoning

The system then simulates the clinical consequences of their decisions, including glycemic response, adverse effects, and cardiorenal outcomes. Learners receive immediate, structured feedback explaining why their choices were beneficial or harmful.

To reflect real-world prescribing, DiaLogic introduces dynamic clinical constraints (such as chronic kidney disease, heart failure, cost barriers, or high hypoglycemia risk) that force learners to adapt their strategy. This builds flexible reasoning rather than rote memorization.

Over multiple rounds, learners unlock fact cards only by demonstrating correct reasoning, reinforcing retrieval practice and elaboration. The game culminates in a final mastery case where learners must design a complete treatment plan for a complex patient and explain their decisions.

In short, DiaLogic transforms passive pharmacology study into active clinical judgment training — helping learners practice how to safely choose diabetes medications for real patients.

How we built it

Step 1: Mapping Clinical Logic

We began by identifying the core decision variables in Type 2 Diabetes prescribing:

  • A1C
  • Body Mass Index
  • eGFR
  • Presence of ASCVD
  • Heart failure
  • Hypoglycemia risk
  • Cost barriers

Instead of organizing the system around drug classes, we organized it around patient constraints.

This was our first intentional design decision: the learner must reason from patient factors → medication choice.
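A minimal sketch of how such a patient-constraint profile could be represented (field names and the eGFR cutoff here are our own illustrative choices, not the project's actual schema):

```python
from dataclasses import dataclass

@dataclass
class PatientProfile:
    """Core T2D prescribing decision variables, one field per constraint."""
    a1c: float                    # % glycated hemoglobin
    bmi: float                    # kg/m^2
    egfr: float                   # mL/min/1.73m^2 (kidney function)
    has_ascvd: bool               # atherosclerotic cardiovascular disease
    has_heart_failure: bool
    high_hypoglycemia_risk: bool
    cost_barrier: bool

    def constraints(self) -> list[str]:
        """Derive the active constraints the learner must reason from."""
        flags = []
        if self.egfr < 30:
            flags.append("severe CKD")
        if self.has_heart_failure:
            flags.append("heart failure")
        if self.high_hypoglycemia_risk:
            flags.append("hypoglycemia risk")
        if self.cost_barrier:
            flags.append("cost barrier")
        return flags
```

Organizing the data model around the patient rather than the drug list makes the constraint set, not the medication catalog, the starting point of every round.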

Step 2: Designing the Decision Loop

We structured each round into four phases:

  • Case Reveal
  • Decision Commitment
  • Constraints Given
  • Outcome & Feedback

Learners must:

  • Select one medication to start
  • Select one medication to avoid
  • Justify their reasoning

This enforces retrieval practice and elaborative encoding.
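The four phases above could be wired together roughly like this (all function and field names are illustrative stand-ins, not the project's actual code):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    start: str       # medication to initiate
    avoid: str       # medication to avoid
    rationale: str   # learner's free-text clinical justification

def play_round(case: dict,
               decide: Callable[[dict], Decision],
               constrain: Callable[[dict], dict],
               score: Callable[[dict, Decision], str]) -> str:
    """Run one round through the four phases, in order."""
    revealed = case                      # Phase 1: Case Reveal
    decision = decide(revealed)          # Phase 2: Decision Commitment
    constrained = constrain(revealed)    # Phase 3: Constraints Given
    return score(constrained, decision)  # Phase 4: Outcome & Feedback
```

The key design point is that the constraint is applied *after* the learner commits, so they must adapt a decision they have already justified.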

Step 3: Building the Rule Engine

Rather than using generative AI to decide outcomes, we built a deterministic logic framework grounded in curated clinical data.

For example:

  • If the patient has heart failure → SGLT2 inhibitors gain a benefit modifier
  • If eGFR falls below threshold → metformin is restricted
  • If hypoglycemia risk is high → sulfonylureas receive a penalty

The system simulates clinical tradeoffs rather than simply marking answers correct/incorrect.

Conceptually, you can think of each medication as having a value function:

Clinical Utility = Glycemic Benefit − Risk Penalty + Comorbidity Modifier

This allows decisions to reflect realistic tradeoffs rather than binary grading.
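Under that framing, the rule engine reduces to a table of deterministic modifiers feeding the utility formula. A sketch (all numeric weights are placeholders for illustration, not clinical guidance):

```python
def clinical_utility(drug: str, patient: dict) -> float:
    """Clinical Utility = Glycemic Benefit - Risk Penalty + Comorbidity Modifier.
    Weights are illustrative placeholders, not real clinical values."""
    glycemic_benefit = {"metformin": 1.5, "sglt2i": 0.8,
                        "glp1ra": 1.2, "sulfonylurea": 1.3}
    utility = glycemic_benefit.get(drug, 0.0)

    # Deterministic rules mirroring the examples above
    if patient.get("heart_failure") and drug == "sglt2i":
        utility += 2.0   # HF -> SGLT2 inhibitor benefit modifier
    if patient.get("egfr", 90) < 30 and drug == "metformin":
        utility -= 5.0   # low eGFR -> metformin restricted
    if patient.get("hypoglycemia_risk") and drug == "sulfonylurea":
        utility -= 3.0   # high hypo risk -> sulfonylurea penalty
    return utility
```

Because every modifier is an explicit rule, the same function that scores the learner's choice can also generate the feedback explaining which rule fired and why.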

Challenges we ran into

  1. Avoiding “Just Another Quiz”

We had to remove multiple choice entirely. This was uncomfortable at first because quizzes are easier to build, but they do not train prescribing generation.

  2. Balancing Realism with Simplicity

Real prescribing is complex. We needed enough realism to feel authentic, but not so much that it overwhelmed working memory.

  3. Ensuring Clinical Safety

We avoided using generative AI for decision-making. All outcomes are based on curated datasets and rule logic to prevent hallucinations.

Accomplishments that we're proud of

Trying out something new and building a clinical reasoning simulator solo, with limited coding experience. :)

What we learned

As we researched learning science, we discovered that strong clinical reasoning depends on several key cognitive principles:

  1. Retrieval Practice

Actively generating an answer strengthens memory pathways more than re-reading or recognizing answers.

  2. Cognitive Load Theory

Working memory is limited. When learners are overwhelmed with too many drug options at once, reasoning quality declines.

  3. Error-Based Learning

Making a mistake and receiving corrective feedback produces stronger encoding than passive review.

We realized that if we wanted to improve medication selection, the system architecture itself had to enforce these principles.

What's next for DiaLogic

We see this framework scaling to:

  • Hypertension management
  • Heart failure therapy optimization
  • Chronic kidney disease

A digital adaptive version could:

  • Track learner error patterns
  • Personalize case difficulty
  • Integrate spaced repetition
  • Provide performance analytics

Long term, we envision a clinical reasoning engine that trains learners to move from:

Knowledge → Judgment → Safe Practice
