AI-Powered Disease Prediction & Explainability

Inspiration

Healthcare AI models often make predictions, but without proper explainability, users struggle to understand why a model reached a particular decision. Our project bridges this gap by combining machine learning (ML) with explainable AI (XAI) to enhance disease prediction and provide insights into influential medical features.

What it does

  • Extracts medical features from PDF medical reports.
  • Uses Logistic Regression to predict disease likelihood.
  • Employs the LIME framework to identify the top 5 influential features.
  • Converts LIME outputs into human-readable explanations using an LLM.
  • Provides a user-friendly web application to streamline the process.
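The extraction step above can be sketched as pulling numeric values out of report text with regular expressions; this is a minimal illustration, assuming the PDF has already been converted to plain text, and the feature names and patterns are hypothetical, not the project's actual field list:

```python
import re

# Illustrative feature names; a real report template defines its own fields.
FEATURE_PATTERNS = {
    "glucose": r"glucose[:\s]+([\d.]+)",
    "bmi": r"bmi[:\s]+([\d.]+)",
    "blood_pressure": r"blood pressure[:\s]+([\d.]+)",
}

def extract_features(report_text: str) -> dict:
    """Pull numeric feature values out of raw report text."""
    features = {}
    text = report_text.lower()
    for name, pattern in FEATURE_PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            features[name] = float(match.group(1))
    return features

sample = "Glucose: 148  BMI: 33.6  Blood pressure: 72"
print(extract_features(sample))
# → {'glucose': 148.0, 'bmi': 33.6, 'blood_pressure': 72.0}
```

The resulting dictionary maps directly onto the model's feature vector, which keeps the PDF-parsing and prediction stages decoupled.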

How we built it

  • Feature Extraction: NLP-based PDF text extraction (PyMuPDF, PDFMiner).
  • Machine Learning Model: Logistic Regression (Scikit-learn).
  • Explainability Framework: LIME.
  • Frontend: React.js for an intuitive interface.
  • Backend: Flask (Python) for processing.
  • LLM Integration: OpenAI’s GPT-based model for generating explanations.
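The prediction step can be sketched with scikit-learn as follows; the data here is synthetic and the setup is an assumption, not the project's trained model. In the real pipeline, the fitted model and training matrix would then be handed to `lime.lime_tabular.LimeTabularExplainer` to score per-feature influence:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: rows = patients, columns = extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Synthetic labels: disease risk driven mostly by the first feature.
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Probability of the positive (disease) class for one patient.
proba = model.predict_proba(X[:1])[0, 1]
print(f"disease probability: {proba:.2f}")
```

Logistic regression's coefficients are themselves interpretable, which makes it a sensible base model to pair with LIME's local explanations.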

Challenges we ran into

  • Extracting structured medical data from unstructured PDFs.
  • Ensuring the ML model provided accurate and interpretable results.
  • Tuning LIME so its feature-importance scores were stable and meaningful.
  • Converting LIME’s output into clear, human-readable explanations.
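The last challenge, turning LIME's raw output into something an LLM can narrate, amounts to formatting the (feature, weight) pairs into a prompt. A minimal sketch, where the labels and weights are hypothetical examples of what `explain_instance(...).as_list()` returns:

```python
def lime_to_prompt(predicted_label: str, explanation: list) -> str:
    """Turn LIME (feature, weight) pairs into a prompt for the LLM."""
    lines = [
        f"The model predicted '{predicted_label}'. Explain in plain, "
        "patient-friendly language how these features influenced it:"
    ]
    for feature, weight in explanation:
        direction = "increased" if weight > 0 else "decreased"
        lines.append(f"- {feature} {direction} the risk (weight {weight:+.2f})")
    return "\n".join(lines)

# Hypothetical top features from a LIME explanation.
top_features = [("glucose > 140", 0.31), ("bmi > 30", 0.22), ("age <= 35", -0.12)]
print(lime_to_prompt("diabetes likely", top_features))
```

Sending a structured prompt like this, rather than LIME's raw tuples, is what lets the LLM produce explanations a non-technical user can follow.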

Accomplishments that we're proud of

  • Successfully built a pipeline from feature extraction to explainability.
  • Created a transparent AI model that improves trust in medical predictions.
  • Developed an interactive UI for seamless user experience.
  • Integrated LLM-based explanations to bridge the gap between AI and users.

What we learned

  • The importance of explainability in AI-driven healthcare.
  • How LIME can provide transparency for ML predictions.
  • Efficient ways to process medical PDFs for structured analysis.
  • How to build an end-to-end ML + XAI pipeline for real-world applications.

What's next for AI-Powered Disease Prediction & Explainability

  • Improve feature extraction by integrating OCR for handwritten reports.
  • Enhance ML models with deep learning for better accuracy.
  • Expand LLM capabilities to provide more detailed medical insights.
  • Integrate real-time doctor consultations based on AI predictions.

Our project not only provides accurate predictions but also ensures that users understand the reasoning behind each decision. By integrating ML with explainability tools like LIME and LLMs, we create a trustworthy and user-friendly medical AI system.
