
Introduction

In this project, we intend to perform image segmentation with prostate Magnetic Resonance Imaging (MRI) data.

Prostate cancer is the second most frequently diagnosed cancer in men and the fifth leading cause of cancer death among men worldwide. [1] Several techniques are used for early detection of prostate cancer, including blood tests, biopsies, and imaging tests. MRI scans create detailed images of the body's soft tissues using radio waves and strong magnets, and can give doctors a very clear picture of the prostate and nearby areas. [2]

In prostate MRI, the prostate usually consists of two non-overlapping adjacent regions: the peripheral zone (PZ) and the transition zone (TZ). An example of a prostate MRI with labelled zones is shown in Figure 1. Identifying prostate zones is important for diagnosis and therapy. However, this identification requires substantial expertise in reading MRI scans, so automatic segmentation of prostate zones is instrumental for prostate lesion detection.

The problem of prostate zone segmentation is challenging because of the lack of a clear prostate boundary, the heterogeneity of prostate tissue, and the wide inter-individual variety of prostate shapes. [3] In this project, we will implement some existing CNN and RNN models for image segmentation using prostate MRI data. Using a survey of deep-learning-based image segmentation [4] as a guide, we will implement selected models and compare their performance.

Challenges so far

One challenge is that the IoU of our current FCN8s model is only 0.33, and visualization shows that the model predicts "background" for every pixel. Our next step is to try different learning rates, optimizers, and numbers of epochs, and to consider an IoU-based loss function for the FCN model. Another challenge is implementing the more complicated models, including the dilated convolutional model and the RNN-based model; so far we are still reading the literature and trying to understand their structure. As a backup, we implemented the DeconvNet model and will compare it with the U-Net model. We may also look for other CNN-based models that are easier to implement.
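One option we are considering for the IoU-based loss is a differentiable "soft IoU": replace hard intersection and union counts with sums over predicted class probabilities. A minimal NumPy sketch of the idea (our own naming, framework-agnostic; a real training loop would use the autograd version in whatever framework the model lives in):

```python
import numpy as np

def soft_iou_loss(probs, onehot, eps=1e-6):
    """Soft-IoU loss: 1 minus the mean per-class soft IoU.

    probs  : (N, C) softmax probabilities, one row per pixel
    onehot : (N, C) one-hot ground-truth labels
    """
    inter = (probs * onehot).sum(axis=0)                   # soft intersection per class
    union = (probs + onehot - probs * onehot).sum(axis=0)  # soft union per class
    return 1.0 - np.mean(inter / (union + eps))
```

With perfect predictions the loss approaches 0, and a prediction that never assigns probability to the correct class gives a loss of 1, so classes the model ignores (like our missing PZ/TZ pixels) are penalized directly instead of being drowned out by the large background class.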

Results so far

| Model | Pixel Accuracy (PA) | Mean Pixel Accuracy (MPA) | Intersection over Union (IoU) |
| --- | --- | --- | --- |
| FCN (no aug) | 0.961394 | 0.320465 | 0.333333 |
| FCN (flip) | 0.961394 | 0.320465 | 0.333333 |
| FCN (rotation) | 0.961394 | 0.320465 | 0.333333 |
| FCN (both) | 0.961394 | 0.320465 | 0.333333 |
| DeConvNet (no aug) | 0.974025 | 0.619032 | 0.509451 |
| DeConvNet (flip) | 0.979059 | 0.705706 | 0.588672 |
| DeConvNet (rotation) | 0.977549 | 0.637209 | 0.554754 |
| DeConvNet (both) | 0.980162 | 0.698144 | 0.601530 |
| U-Net (no aug) | 0.976238 | 0.718744 | 0.597541 |
| U-Net (flip) | 0.978152 | 0.741071 | 0.616579 |
| U-Net (rotation) | 0.979722 | 0.719528 | 0.613457 |
| U-Net (both) | 0.973854 | 0.740341 | 0.601107 |
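For reference, all three metrics in the table can be computed from a pixel-level confusion matrix. A minimal sketch (our own naming, assuming integer label maps for prediction and ground truth):

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Pixel accuracy (PA), mean pixel accuracy (MPA), and mean IoU."""
    # confusion[i, j] = number of pixels with ground truth i predicted as j
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(gt.ravel(), pred.ravel()):
        confusion[t, p] += 1
    tp = np.diag(confusion).astype(float)   # correctly labelled pixels per class
    gt_count = confusion.sum(axis=1)        # pixels per ground-truth class
    pred_count = confusion.sum(axis=0)      # pixels per predicted class
    pa = tp.sum() / confusion.sum()
    mpa = np.mean(tp / np.maximum(gt_count, 1))
    iou = np.mean(tp / np.maximum(gt_count + pred_count - tp, 1))
    return pa, mpa, iou
```

This also explains the FCN failure mode: a model that predicts background for every pixel in a 3-class problem scores an IoU of roughly the background IoU averaged with two zeros, i.e. close to 1/3, while PA stays high because background dominates the image.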

Plan

We will try an IoU-based loss function within this week, and will try to implement the dilated FCN or an RNN-based model before the weekend.
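The key operation in the dilated FCN is atrous (dilated) convolution: the kernel taps are spread out with gaps, so the receptive field grows without adding parameters or downsampling. A minimal 1-D NumPy illustration of the idea (our own toy code, not the model implementation):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with dilated (atrous) kernel taps.

    A kernel of size k with dilation d covers a receptive field of
    (k - 1) * d + 1 input samples without adding parameters.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : dilation]  # every d-th sample under the kernel
        out[i] = np.dot(taps, kernel)
    return out
```

With a size-3 kernel, dilation 2 widens the receptive field from 3 to 5 samples; stacking layers with increasing dilation grows it exponentially, which is what makes dilated FCNs attractive for dense prediction.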
