Inspiration
Cardiovascular disease is still the leading cause of death worldwide, yet most risk detection happens only after serious symptoms appear. During our research, we noticed that many existing tools are either difficult to understand, expensive to deploy, or operate like black boxes that don’t explain why a patient is at risk.
We wanted to change that. CardioLens AI was inspired by a simple question: What if early heart disease risk detection were transparent, accessible, and usable even without advanced medical infrastructure?
What it does
CardioLens AI is an interpretable, privacy-first AI system that helps identify cardiovascular risk early using everyday clinical data.
It allows users to:
Assess individual cardiovascular risk in real time
Understand why a risk score is high or low through explainable AI
Analyze uploaded medical reports using NLP
Explore population-level trends and risk patterns
Generate simplified, patient-friendly reports
By combining prediction, explanation, and usability, CardioLens AI acts as a decision-support tool, not a black-box diagnosis system.
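As a hedged illustration of how a prediction can be explained rather than just emitted: with a linear model, each feature's push on the risk score can be read directly from its coefficient. The feature names, coefficients, and patient values below are invented for this sketch; they are not CardioLens AI's actual model.

```python
import numpy as np

# Illustrative coefficients for a fitted logistic-regression risk model
# (feature names and values are made up for this sketch).
features = ["age", "systolic_bp", "cholesterol", "resting_hr"]
coef = np.array([0.04, 0.03, 0.002, 0.01])    # per-unit log-odds weights
intercept = -8.0
means = np.array([50.0, 120.0, 200.0, 70.0])  # training-set feature means

def explain(x):
    """Return the risk probability plus each feature's contribution to the
    log-odds, measured relative to an average patient."""
    x = np.asarray(x, dtype=float)
    logit = intercept + coef @ x
    prob = 1.0 / (1.0 + np.exp(-logit))
    contrib = coef * (x - means)  # positive values push risk up
    return prob, dict(zip(features, contrib))

prob, reasons = explain([67, 150, 260, 88])   # a hypothetical patient
top = max(reasons, key=lambda k: reasons[k])  # largest risk driver
```

For this made-up patient, the elevated blood pressure contributes the most to the score, which is exactly the kind of "why" a decision-support tool can surface alongside the number itself.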
How we built it
We built CardioLens AI using real, de-identified biomedical data and a carefully designed machine learning pipeline.
Our system uses:
Logistic Regression for interpretability
Random Forest for capturing complex risk patterns
SMOTE to handle class imbalance responsibly
Fairness analysis across age and gender groups
Calibration curves to ensure meaningful risk probabilities
On top of the AI engine, we developed an interactive Streamlit web application with modules for assessment, analytics, report analysis, and visualization, all running locally to preserve privacy.
Challenges we ran into
One of our biggest challenges was balancing model performance with interpretability. More complex models improved accuracy but risked becoming opaque, which isn’t acceptable in healthcare.
Another challenge was ensuring fairness. We had to carefully evaluate whether the model behaved consistently across different age and gender groups and avoid unintentional bias.
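One common way to run such a check is to compare a per-group metric such as sensitivity (the equal-opportunity criterion). The sketch below uses simulated labels and predictions; the group attribute, accuracy rate, and gap threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic evaluation set: true labels, model predictions, and a binary
# group attribute (e.g. sex) -- all illustrative.
n = 4000
group = rng.integers(0, 2, size=n)
y_true = rng.binomial(1, 0.15, size=n)
# A made-up model that is right ~85% of the time, independent of group
y_pred = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)

def recall_by_group(y_true, y_pred, group):
    """Per-group true-positive rate (sensitivity)."""
    out = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        out[int(g)] = float((y_pred[mask] == 1).mean())
    return out

rates = recall_by_group(y_true, y_pred, group)
gap = abs(rates[0] - rates[1])  # equal-opportunity gap; flag if large
```

A large gap means the model misses high-risk patients in one group more often than the other, which is the kind of silent bias this check is meant to catch before deployment.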
Finally, designing an interface that worked for both clinicians and non-technical users required multiple iterations to keep things simple without losing medical depth.
Accomplishments that we're proud of
Built an end-to-end healthcare AI system, not just a model
Integrated explainable AI so every prediction can be understood
Added fairness and calibration analysis for ethical reliability
Designed a privacy-first, local-execution architecture
Created a realistic demo with medical report analysis and downloadable outputs
Most importantly, we built something that feels clinically meaningful, not just technically impressive.
What we learned
This project taught us that healthcare AI is not just about maximizing accuracy. Trust, transparency, and usability matter just as much.
We learned how small design decisions, like how results are explained or how data is handled, can dramatically affect whether a system is safe and useful in real-world healthcare settings.
We also gained hands-on experience in building responsible AI systems from data preprocessing all the way to deployment.
What's next for CardioLens AI
Next, we plan to:
Validate CardioLens AI on larger and more diverse datasets
Improve NLP capabilities for more complex medical documents
Add longitudinal risk tracking for disease progression
Integrate wearable and lifestyle data
Explore clinical collaborations for real-world evaluation
Our long-term vision is to make early, explainable cardiovascular risk screening accessible to anyone, regardless of location or resources.
Built With
- ai
- colab
- explainable-ai
- imbalanced-learn
- jupyter
- matplotlib
- natural-language-processing
- notebook
- numpy
- pandas
- python
- reportlab
- scikit-learn
- seaborn
- shap
- smote
- streamlit