Breast cancer is a global health concern that affects millions of lives each year. Early detection and accurate diagnosis are critical to improving patient outcomes, and advancements in medical technology are playing a pivotal role in the fight against this disease. In this post, we'll delve into the world of breast cancer segmentation models and how they are transforming medical imaging and patient care.

Breast cancer segmentation is a complex task that involves identifying and delineating cancerous regions within medical images, such as mammograms and ultrasounds. Accurate segmentation is essential for diagnosis, treatment planning, and monitoring disease progression. However, manual segmentation is time-consuming and can be prone to human error. Deep learning models are providing a solution to this challenge.

Segmentation helps pathologists in making precise diagnoses by providing clear images of cancer cells, which is crucial for determining the stage of cancer. This process involves the identification and isolation of cancerous cells from non-cancerous cells and the surrounding tissue in images obtained from breast tissue biopsies. The goal is to accurately define the boundaries of the malignant cells to assess the progression and potential aggressiveness of the cancer.

What it does⚙️

These containers provide tools to segment tumor cells from the surrounding tissue, helping doctors locate tumor regions. We provide a web interface for users who want to segment cell images, as well as a FastAPI container for developers who want to integrate our API into their own applications.

Our innovation aims to:

  1. Help doctors and others in the field detect breast cancer more easily.

  2. Help developers who are interested in integrating our API into their own applications.

How we built it🛠️

This project is separated into four parts:

  1. Model training
    This part uses PyTorch to train the segmentation model, then saves it as a .pt file for later use.

  2. Backend
    This part uses FastAPI, which loads the model from the previous part and receives .png images from the frontend to predict the tumor area. After prediction finishes, it sends the result back to the frontend.

  3. Frontend
    This part uses React and Bootstrap to build the website's interface. We created a browse button to upload a file and send it to the backend. After the backend finishes the segmentation, the frontend receives the result and displays it on screen.

  4. Docker
    In this part, we built containers for the backend and frontend and pushed them to Docker Hub. We also created a docker-compose.yml file to run both containers together.
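The backend's request/response flow described above can be sketched as follows. This is a minimal illustration using only the Python standard library: the real model call (a PyTorch forward pass on the loaded .pt model) is stubbed out, and the function and field names are hypothetical, not our actual API schema.

```python
import base64

def run_model_stub(png_bytes: bytes) -> bytes:
    # Placeholder for the real PyTorch forward pass; echoes the input
    # bytes back so the data flow can be demonstrated end to end.
    return png_bytes

def segment_png(png_bytes: bytes) -> dict:
    """Illustrate the backend contract: receive PNG bytes, run the
    (stubbed) segmentation model, and return a JSON-style payload
    with the mask image encoded as base64."""
    mask_bytes = run_model_stub(png_bytes)
    return {
        "model": "resnet50_unet",  # one of the two provided models
        "mask_png_base64": base64.b64encode(mask_bytes).decode("ascii"),
    }
```

Encoding the mask as base64 lets the frontend receive it inside an ordinary JSON response; the actual field names are visible in the container's /docs page.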

How to install the images 🐳


  1. Use the following command to pull the FastAPI container image:

docker pull mintyani/hackathon-server:latest

  2. Run the image with this command:

docker run -p 8000:8000 mintyani/hackathon-server:latest

  3. Now, you can access our ML API documentation by going to http://localhost:8000/docs

From these docs, you can post an image to see the segmentation result. We provide two models: resnet50_unet and vgg16_unet.
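As an alternative to the interactive docs page, the endpoint can be tried from a short script using only the Python standard library. The helper below builds the multipart/form-data body for a file upload; the endpoint path and form field name in the commented usage are assumptions, so check the /docs page for the actual ones.

```python
import uuid

def build_multipart(field: str, filename: str, data: bytes):
    """Build a multipart/form-data body and its Content-Type header
    for uploading a single PNG file, using only the standard library."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: image/png\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

# Sending the request (not run here -- requires the container to be up;
# "/predict" and "file" are hypothetical names):
# import urllib.request
# body, ctype = build_multipart("file", "scan.png", open("scan.png", "rb").read())
# req = urllib.request.Request("http://localhost:8000/predict", data=body,
#                              headers={"Content-Type": ctype}, method="POST")
# print(urllib.request.urlopen(req).read())
```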

Web Interface

  1. Use the following command to pull the React container image:

docker pull parindapannoon/hackathon-frontend:latest

  2. Run the image with this command:

docker run -p 3000:3000 parindapannoon/hackathon-frontend:latest

  3. Now, you can access our web interface by going to http://localhost:3000

Full website

  1. Download the docker-compose.yml file from our GitHub

  2. Run this command:

docker-compose up

  3. Now, you can access our website by going to http://localhost:3000
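A docker-compose.yml for this setup might look roughly like the following. This is a sketch based on the images and ports used above, not necessarily the exact file from our GitHub:

```yaml
services:
  backend:
    image: mintyani/hackathon-server:latest
    ports:
      - "8000:8000"
  frontend:
    image: parindapannoon/hackathon-frontend:latest
    ports:
      - "3000:3000"
    depends_on:
      - backend
```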

Challenges we ran into⌛

  1. Lack of disk space: Docker Desktop sometimes crashed because storage was full.

  2. The model takes a long time to train, so Colab kept crashing.

  3. The model takes some time to predict, especially the first time, when the image has to be pulled from Docker Hub.

Accomplishments that we're proud of👏

  1. We have developed a website that can segment the areas of a tumor from user input.

  2. We have created Docker containers and pushed them to a public registry.

  3. We have created a docker-compose file to pull and run those public containers.

  4. We have created a website whose frontend communicates with the backend.

What we learned✏️

  1. Learning how to train the segmentation models

  2. Learning to use FastAPI

  3. Learning to deploy the model to production

  4. Learning how to use docker, docker-compose and how to build a container

  5. Learning to create a frontend with React and Bootstrap

  6. Learning to send data using API between React and FastAPI

What's next for Segment life saver📈

Planned improvements:

  1. Add more models to the system

  2. Improve the model to be more precise

  3. Extend the segmentation to semantic segmentation that can segment not only tumor cells but also the other cells around them.

  4. Add a load balancer to the system to ensure availability and reliability.
