Inspiration

Alzheimer’s disease often develops silently for years before symptoms become clinically visible. Early intervention can significantly slow progression, yet existing diagnostic tools tend to be expensive, invasive, or opaque. This motivated me to build an interpretable AI system that supports early risk identification from accessible clinical data while remaining trustworthy enough for real-world clinical use.

What it does

This project predicts early Alzheimer’s disease risk and progression using machine learning models trained on clinical and cognitive assessment data. Unlike black-box approaches, the system uses SHAP-based explanations to clearly show how individual features contribute to predictions, enabling transparent and interpretable decision support.
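
To make this concrete, here is a minimal sketch of the kind of per-feature SHAP explanation the system produces. Everything below (the model, the synthetic data, the feature count) is an illustrative placeholder, not the project's actual setup:

```python
# Minimal SHAP sketch on synthetic data; the real pipeline uses
# clinical features, but the explanation mechanics are the same.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the clinical dataset.
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to TreeSHAP for tree models
sv = explainer(X)                      # per-sample, per-feature contributions
shap.plots.beeswarm(sv[:, :, 1])       # how each feature pushes risk up or down
```

Each SHAP value is that feature's additive contribution to one individual prediction, which is what makes the output auditable rather than a black box.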

How I built it

I developed a full machine learning pipeline in Python using Google Colab. The workflow includes data preprocessing, median imputation, robust feature scaling, model training with Logistic Regression and Random Forest, and performance evaluation using accuracy, ROC-AUC, and confusion matrices. SHAP explainability is integrated to interpret feature-level contributions.
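
In sketch form, assuming scikit-learn (the dataset, feature count, and hyperparameters below are placeholders, not the project's actual settings):

```python
# Sketch of the workflow described above: median imputation, robust
# scaling, two models, and evaluation with accuracy / ROC-AUC /
# confusion matrices. Synthetic data stands in for the clinical table.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("random_forest", RandomForestClassifier(random_state=42))]:
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # median imputation
        ("scale", RobustScaler()),                     # outlier-robust scaling
        ("model", clf),
    ])
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    proba = pipe.predict_proba(X_te)[:, 1]
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f}")
    print(confusion_matrix(y_te, pred))
```

Keeping imputation and scaling inside the Pipeline means they are fit on training folds only, which avoids leaking test statistics into preprocessing.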

Challenges I ran into

The clinical data came with missing values, outliers, and class imbalance. Keeping the models numerically stable without sacrificing interpretability required careful preprocessing and model selection, and handling infinite values and making the notebook fully reproducible were further hurdles.
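
Two of those fixes, sketched under the assumption of pandas and scikit-learn (the `df` frame and the `MMSE` column are hypothetical stand-ins, not the project's real schema):

```python
# Hypothetical example: map +/-inf to NaN so the pipeline's median
# imputer can absorb them, and reweight classes to counter imbalance.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({"MMSE": [28.0, np.inf, 22.0, 25.0]})  # toy stand-in
df = df.replace([np.inf, -np.inf], np.nan)  # inf -> NaN, imputed downstream

# class_weight='balanced' reweights samples inversely to class frequency,
# so the minority (positive) class is not drowned out during training.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
```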

Accomplishments that I'm proud of

I built a reproducible, interpretable AI pipeline that balances predictive performance with clinical transparency. The integrated SHAP explanations make each prediction auditable at the feature level, a prerequisite for real-world healthcare decision support.

What I learned

This project strengthened my understanding of explainable AI, clinical data preprocessing, and the importance of interpretability in healthcare-focused machine learning systems.

What's next

Future improvements include validating the model on external datasets, adding longitudinal progression modeling, and integrating a clinician-friendly dashboard for real-time interpretation.

Built With

python, google-colab, shap

Updates

Demo Video Added

A short demo video has been added showing:

  • Notebook execution
  • Model training and evaluation
  • SHAP explainability visualizations
  • How predictions can support early clinical decision-making

This demonstrates how interpretable AI can be applied responsibly in healthcare.

Project Launched: Interpretable AI for Early Alzheimer’s Prediction

I’ve built and released an end-to-end interpretable machine learning pipeline for early Alzheimer’s risk and progression prediction using clinical data.

Key highlights:

  • Robust preprocessing with imputation and scaling
  • Interpretable baseline and ensemble ML models
  • SHAP-based explainability for transparent predictions
  • Fully reproducible Google Colab notebook
  • Demo walkthrough showing real-time model outputs

This project focuses on trust, transparency, and real-world clinical relevance.
Feedback and suggestions are very welcome!
