Inspiration
Adverse drug interactions lead to millions of preventable medical complications annually. Most existing tools rely on static, rule-based systems that fail to capture the nuanced relationships between multiple drugs or patient-specific contexts. Inspired by AlphaFold’s success in modeling complex biological structures through deep reasoning, we set out to build a more intelligent, context-aware solution for detecting drug-drug interactions.
What it does
Dr. Ordinary is our baseline: a traditional drug interaction checker that mimics existing static tools, but adds a touch of convenience. Dr. Strange is the real innovation: a multi-agent, LLM-powered system that analyzes drug combinations using chain-of-thought reasoning, contextual patient data, and a gating network to weigh agent outputs. The final results are summarized into a readable, clinically useful interaction report.
How we built it
Used a modular agent framework (Letta) to reason about drug combinations and interactions
Built a frontend interface that accepts user input (drug list, patient context)
Developed multiple sub-agents for specific reasoning tasks (toxicity, metabolic competition, contraindications, etc.)
Introduced a gating network to weigh outputs from each agent
Added a summarizing agent to compile a final, human-readable report
Modeled our architecture loosely on AlphaFold’s iterative reasoning and relation modeling framework
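The flow above (sub-agents → gating network → summarizer) can be sketched in a few lines. Everything here is illustrative: the agent stubs, the softmax gating, and the report format are assumptions for clarity, not Letta's actual API or our production prompts.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class AgentReport:
    agent: str         # which sub-agent produced this finding
    finding: str       # free-text interaction analysis
    confidence: float  # self-reported confidence in [0, 1]

def toxicity_agent(drugs: list[str]) -> AgentReport:
    # In the real system this is an LLM call with a chain-of-thought
    # prompt; stubbed here for illustration.
    return AgentReport("toxicity", f"Toxicity review of {', '.join(drugs)}", 0.8)

def metabolism_agent(drugs: list[str]) -> AgentReport:
    return AgentReport("metabolism", "Checked CYP450 pathway competition", 0.6)

def softmax_weights(reports: list[AgentReport]) -> dict[str, float]:
    # Gating-network stand-in: turn raw confidences into normalized weights.
    exps = [exp(r.confidence) for r in reports]
    total = sum(exps)
    return {r.agent: e / total for r, e in zip(reports, exps)}

def summarize(reports: list[AgentReport], weights: dict[str, float]) -> str:
    # Summarizing-agent stand-in: order findings by gate weight.
    ordered = sorted(reports, key=lambda r: weights[r.agent], reverse=True)
    return "\n".join(f"[{weights[r.agent]:.2f}] {r.finding}" for r in ordered)

drugs = ["warfarin", "fluconazole"]
reports = [toxicity_agent(drugs), metabolism_agent(drugs)]
print(summarize(reports, softmax_weights(reports)))
```

The point of the softmax step is that agent weights always sum to 1, so a low-confidence agent can contribute context without dominating the final report.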
Challenges we ran into
LLMs hallucinating or providing vague/conflicting interaction details
Balancing verbosity and accuracy in the summarizing agent’s outputs
Creating a robust API flow between agents without excessive latency
Designing the gating mechanism to properly weigh agent confidence
Mapping drug names and dosages across inconsistent user input
Accomplishments that we're proud of
Successfully replicated AlphaFold-like reasoning in a drug safety context
Built an end-to-end pipeline that outputs richer, smarter drug-drug interaction (DDI) reports

Created a modular agent-based architecture that is extensible and generalizable
Delivered readable explanations for complex drug risks — not just raw flags
What we learned
Chain-of-thought prompting significantly improves LLM reasoning in medical domains
Agent specialization + a gating mechanism creates better decisions than a single monolithic model
Patients and clinicians both benefit from human-readable AI explanations
Iterative architecture (like AlphaFold) can be abstracted into other domains
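To show what the first lesson means in practice, here is a hypothetical chain-of-thought prompt template of the kind each sub-agent could use. The wording, field names, and severity scale are assumptions for illustration, not the exact prompts used in Dr. Strange.

```python
# Hypothetical CoT template: forces the model to reason through
# pathways and patient context before committing to a verdict.
COT_TEMPLATE = """You are a clinical pharmacology assistant.
Patient context: {context}
Drugs: {drugs}

Reason step by step before answering:
1. List each drug's metabolic pathway.
2. Identify shared pathways or opposing effects.
3. Assess severity given the patient context.
Finally, state: INTERACTION: <none|minor|moderate|severe>."""

prompt = COT_TEMPLATE.format(
    context="72-year-old with reduced renal function",
    drugs="warfarin, fluconazole",
)
print(prompt)
```

The numbered steps are what makes the difference: asking for the verdict last, after explicit intermediate reasoning, is what reduced the vague and conflicting answers we saw with direct questioning.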
What's next for Dr. Ordinary and Dr. Strange
Integrate more patient-specific parameters (e.g., renal function, allergies, vitals)
Expand beyond drug-drug to include drug-food and drug-condition interactions
Offer fine-tuned model variants for hospitals or pharmacists
Build EHR plugins and a mobile interface for real-time use in clinical environments
Allow feedback from users to refine and retrain the agent responses dynamically