Inspiration

I’ve always been curious about how real-world AI products are built—not just the model, but the entire process from data to deployment. Through this hackathon, I wanted to challenge myself to learn end-to-end machine learning deployment. Kidney disease classification felt like a meaningful and impactful application to explore while learning how to make a complete ML project production-ready.

What it does

Our project feeds raw kidney scan data through a VGG16-based convolutional neural network to classify whether a patient has chronic kidney disease. The end goal is a deployable full-stack solution that serves quick predictions through a clean UI.

How we built it

We built the project using:

  1. Python, TensorFlow, and Keras for the deep learning model (VGG16)
  2. Pandas, NumPy, and Matplotlib for data preprocessing and visualization
  3. YAML for config-driven modular pipelines
  4. DVC (planned) for versioning data and models
  5. Docker/AWS (planned) for final deployment
  6. GitHub for collaboration and version control
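The VGG16 classifier at the heart of the stack can be sketched in Keras roughly as follows. This is a minimal illustration, not the team's exact code: the frozen backbone, the two-class softmax head, and the 224×224 input size are assumptions (in practice you would load `weights="imagenet"`; `weights=None` keeps the sketch offline).

```python
# Hedged sketch of a VGG16 transfer-learning classifier in TensorFlow/Keras.
# Layer sizes and the two-class head are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras.applications import VGG16

def build_model(input_shape=(224, 224, 3), num_classes=2):
    # weights="imagenet" in a real run; weights=None avoids the download here
    base = VGG16(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional backbone
    x = tf.keras.layers.Flatten()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
print(model.output_shape)  # (None, 2)
```

Freezing the backbone and training only the new head is the usual first step when the dataset is small, which also helps with the class-imbalance issues noted below.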

Challenges we ran into

  1. Git versioning issues and large-file push errors on GitHub
  2. Syntax bugs during early pipeline structuring
  3. Limited time to implement the full Docker/AWS pipeline during the hackathon
  4. Model training that required tuning and cleanup due to dataset imbalance

Accomplishments that we're proud of

  1. Successfully implemented a modular and scalable ML pipeline
  2. Clean separation of config, code, and data to allow easy experimentation
  3. Got the initial VGG16 model working with decent accuracy
  4. Planned for production-grade deployment using modern tools

What we learned

  1. How to build modular ML pipelines using OOP, YAML configs, and custom logging
  2. Basics of DVC and MLOps practices
  3. Hands-on experience debugging Git large file errors
  4. Improved our skills in deep learning and project development
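The config-driven, OOP-plus-logging pattern from the first point can be sketched like this. The config keys, the `DataIngestion` class, and the URL are illustrative assumptions, not the project's actual API (the YAML is inlined here so the sketch is self-contained):

```python
# Minimal sketch of a config-driven pipeline stage with custom logging.
# Config keys, class name, and URL are hypothetical.
import logging
import yaml  # PyYAML

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s [%(levelname)s] %(message)s")
logger = logging.getLogger("kidney-pipeline")

CONFIG_YAML = """
data_ingestion:
  source_url: https://example.com/kidney-ct.zip
  unzip_dir: artifacts/data_ingestion
"""

class DataIngestion:
    """One pipeline stage; behavior is driven entirely by its config dict."""
    def __init__(self, config: dict):
        self.config = config

    def run(self) -> str:
        logger.info("Downloading from %s", self.config["source_url"])
        # ... download + unzip would go here ...
        return self.config["unzip_dir"]

config = yaml.safe_load(CONFIG_YAML)["data_ingestion"]
stage = DataIngestion(config)
print(stage.run())  # artifacts/data_ingestion
```

Keeping paths and hyperparameters in YAML means an experiment can be changed by editing config, not code, which is what makes the pipeline easy to iterate on.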

What's next for Kidney-Classification-Project

  1. Integrate DVC for full version tracking of datasets and models
  2. Build a Streamlit or Gradio frontend for real-time predictions
  3. Dockerize the application for smoother deployment
  4. Deploy the solution using AWS EC2 or S3
  5. Improve the model using hyperparameter tuning and experimenting with different architectures
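For the Dockerization step, a starting point might look like the sketch below. All file names (`requirements.txt`, `app.py`) and the port are assumptions about the eventual layout, not part of the current project:

```dockerfile
# Hypothetical Dockerfile sketch for the planned deployment step.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Copying `requirements.txt` before the rest of the code lets Docker cache the dependency layer, so rebuilds after code changes stay fast.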
