Project Story:
Inspiration
Adverse drug reactions are a widespread but under-addressed problem, especially considering how common they are and how many people they affect. Many existing systems either react too late or generate so many false alarms that people stop trusting them altogether. We wanted to build something that doesn't just say "this is risky," but actually answers a more useful question: "Is what I'm experiencing normal for this drug, or should I be worried?" On top of that, we cared a lot about fairness. Health systems don't always perform equally across age groups, genders, or populations, so we prioritized a model that doesn't bake in assumptions about who the patient is.
What it does
We built a web-based platform that takes in a user's health information and surfaces the risks that come with the drugs they are taking, including how common the side effects they are experiencing truly are.
Users input:
- Biological data (age, sex, etc.)
- Symptoms or side effects they’re experiencing
- The drug(s) they’re taking
- The condition being treated
The system then outputs:
- Health Score → a 0–100 score representing overall physiological stability
- Normality Score → how typical the reported side effects are for the given drug (adjusted for the underlying condition)
- Warnings → actionable alerts for:
  - drug-drug interactions
  - alcohol interactions
  - pregnancy risks
Instead of overwhelming users with generic alerts, the system focuses on minimizing false alarms and providing signals that are actually useful.
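As a concrete illustration, a normality score like the one above could be computed as a frequency lookup against side-effect data. Everything in this sketch is an illustrative assumption: the `SIDE_EFFECT_FREQ` table is made-up data, the 5% "common" threshold is arbitrary, and the condition adjustment mentioned earlier is omitted for brevity.

```python
# Hypothetical frequency table: fraction of patients on a drug who
# report each side effect (illustrative numbers, not real data).
SIDE_EFFECT_FREQ = {
    "ibuprofen": {"nausea": 0.09, "dizziness": 0.03, "rash": 0.01},
}

def normality_score(drug, reported, common_threshold=0.05):
    """Share of reported effects that are 'common' for this drug
    (frequency above an assumed 5% threshold), as a 0-100 score."""
    freqs = SIDE_EFFECT_FREQ.get(drug, {})
    if not reported:
        return 100.0  # nothing reported -> nothing atypical
    common = sum(1 for e in reported if freqs.get(e, 0.0) >= common_threshold)
    return round(100 * common / len(reported), 1)

print(normality_score("ibuprofen", ["nausea"]))          # 100.0 (common effect)
print(normality_score("ibuprofen", ["nausea", "rash"]))  # 50.0 (one common, one rare)
```

A real system would draw the frequency table from pharmacovigilance data rather than hard-coding it.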
How we built it
We approached this as a system with multiple layers:
Patient vulnerability modeling
We first built a model that learns from patient health data (vitals, symptoms, etc.) to estimate how stable or unstable a patient is.
Continuous scoring system
Instead of assigning fixed values to categories like "Low" or "High," we compute a risk score (0–100) based on how far a patient's vitals deviate from normal ranges. This makes the system more nuanced and realistic.
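A minimal sketch of this deviation-based scoring, assuming the score is the average deviation across vitals scaled to 0–100 (the normal ranges and the averaging rule here are illustrative assumptions, not our actual model):

```python
# Hypothetical normal ranges per vital: (low, high). Illustrative only.
NORMAL_RANGES = {
    "heart_rate": (60, 100),      # bpm
    "systolic_bp": (90, 120),     # mmHg
    "temperature": (36.1, 37.2),  # °C
}

def deviation(value, low, high):
    """How far the value sits outside its normal range,
    relative to the width of that range (0.0 if in range)."""
    width = high - low
    if value < low:
        return (low - value) / width
    if value > high:
        return (value - high) / width
    return 0.0

def risk_score(vitals):
    """Map the average deviation across vitals to a continuous 0-100 score."""
    devs = [deviation(v, *NORMAL_RANGES[k]) for k, v in vitals.items()]
    avg = sum(devs) / len(devs)
    # Cap at 1.0 so a single extreme vital cannot push the score past 100.
    return round(min(avg, 1.0) * 100, 1)

print(risk_score({"heart_rate": 75, "systolic_bp": 110, "temperature": 36.8}))   # 0.0
print(risk_score({"heart_rate": 130, "systolic_bp": 150, "temperature": 38.5}))  # 97.7
```

Because the score is continuous, a patient just outside one range lands near zero rather than jumping straight to a "High" bucket.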
Confidence-aware predictions
We added a confidence layer that measures how clearly the model can distinguish between outcomes, i.e. how certain it is of its prediction. Instead of blindly trusting predictions, the system adjusts its outputs when uncertainty is high.
Final health scoring
We combine physiological health with model confidence to produce a final health score that is stable and easy to understand.
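One way this blend could look, as a sketch: a weighted combination in which the physiological score dominates and confidence contributes a small fixed share. The 0.2 weight and the linear blend are assumed values for illustration, not the weights we actually shipped.

```python
def final_health_score(physio, confidence, conf_weight=0.2):
    """Blend a 0-100 physiological score with a 0-1 model confidence.
    conf_weight is kept small (an assumed 0.2 here) so that confidence
    modulates the final score without determining it."""
    return round((1 - conf_weight) * physio + conf_weight * (confidence * 100), 1)

print(final_health_score(80, 0.9))  # 82.0 -- high confidence nudges the score up
print(final_health_score(80, 0.3))  # 70.0 -- low confidence pulls it down
```

Keeping `conf_weight` low reflects the rebalancing described below under challenges: confidence should temper the score, not drive it.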
Drug + safety layer (frontend + logic)
On top of the model, we built a web interface that integrates:
- drug-side effect knowledge
- interaction checks
- contextual warnings (like alcohol or pregnancy)
Challenges we ran into
One challenge was that the model kept outputting 100% confidence for many patients, which is unrealistic in healthcare. We tackled this by rethinking how we measure and represent confidence, settling on an entropy-based confidence score that measures how spread out the model's predictions are. Another challenge was false-alarm control: it's easy to build a system that flags everything as risky, but that defeats the purpose. Finally, the confidence score was initially weighted too heavily in the final health score, so we rebalanced it so that confidence no longer determines the health score on its own.
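The entropy-based confidence idea can be sketched as one minus the normalized Shannon entropy of the predicted class probabilities: a near-uniform distribution (the model can't distinguish outcomes) gives confidence near 0, while a one-hot distribution gives 1. Normalizing by `log(n)` is our illustrative choice here, not necessarily the exact formula used.

```python
import math

def entropy_confidence(probs):
    """1 minus the normalized Shannon entropy of a probability
    distribution. Uniform -> 0.0 (no confidence); one-hot -> 1.0."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return 1 - h / math.log(len(probs))

print(entropy_confidence([1.0, 0.0, 0.0]))    # 1.0 -- fully certain
print(entropy_confidence([0.97, 0.02, 0.01])) # high, but no longer a flat 100%
print(entropy_confidence([0.34, 0.33, 0.33])) # near 0 -- the model can't tell
```

Unlike taking the raw max probability, this penalizes predictions where the remaining probability mass is spread evenly, which is exactly the "overconfident" failure mode described above.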
What we learned
We learned that in healthcare AI, accuracy isn't enough. A model can be 90%+ accurate and still be unusable if it's overconfident, produces too many alerts, or can't explain itself. We also learned how important it is to separate what the model predicts from how you present that prediction.
What's next for HScan.AI
Next, we want to:
- Integrate real-world pharmacovigilance data (like openFDA) to improve drug-specific predictions
- Add bias monitoring to continuously evaluate fairness across age, gender, and other groups
- Move toward a more clinician-ready interface with clearer explanations