Inspiration

A brain tumour is a cancerous or non-cancerous mass of abnormal cells growing in the brain, and it is one of the most dangerous diseases, requiring early and accurate detection. Most current detection and diagnosis workflows depend on neurospecialists and radiologists evaluating images manually, which is time-consuming and prone to human error. The main purpose of this project is to build a robust CNN model that can classify whether a subject has a tumour from brain MRI scan images, with accuracy acceptable for medical-grade applications.

What it does

This project uses four different MRI images per patient, called FLAIR, T1, T2 and T1CE, together with the labelled segmentation image. The multi-institutional dataset, acquired from 19 different contributors, contains these multimodal MRI scans of each patient, namely T1, T1 contrast-enhanced (T1ce), T2-weighted (T2), and Fluid Attenuated Inversion Recovery (FLAIR), from which the tumour sub-regions are segmented. The data is preprocessed to remove discrepancies; for example, the scans are skull-stripped.
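The four modalities above are typically stacked into one multi-channel volume and normalised before training. Here is a minimal sketch of that preprocessing step, assuming the volumes are already co-registered and skull-stripped (the function name and the per-modality z-score scheme are our illustration, not a fixed part of the pipeline):

```python
import numpy as np

def preprocess_case(flair, t1, t2, t1ce):
    """Stack the four MRI modalities into one multi-channel volume and
    z-score normalise each modality over its non-zero (brain) voxels.
    Assumes inputs are co-registered and skull-stripped (background == 0)."""
    channels = []
    for vol in (flair, t1, t2, t1ce):
        vol = vol.astype(np.float32)
        brain = vol[vol > 0]                  # skull-stripped: background is 0
        if brain.size:                        # guard against an empty mask
            vol = (vol - brain.mean()) / (brain.std() + 1e-8)
        channels.append(vol)
    return np.stack(channels, axis=0)         # shape: (4, D, H, W)

# Stand-in random volumes; real data would come from the BraTS NIfTI files.
rng = np.random.default_rng(0)
vols = [rng.random((8, 8, 8)).astype(np.float32) for _ in range(4)]
x = preprocess_case(*vols)
print(x.shape)  # (4, 8, 8, 8)
```

Normalising per modality matters because the four sequences have very different intensity scales.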

How we built it

To automate the brain tumour segmentation process, we combine two deep learning algorithms, a 3D CNN and a U-Net, since deep learning has proven effective for semantic segmentation of medical images. Both models are trained separately on the BraTS brain tumour dataset, and their predicted outputs are then merged to generate the final segmentation, which achieves a high Dice score. The Dice score measures how well the predicted segmented regions overlap with the ground-truth labels. The task is to develop an automated brain tumour segmentation method that successfully delineates tumours into intra-tumour classes with improved efficiency and accuracy compared to existing methods.
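The Dice score used to evaluate the merged segmentation can be computed per class from two binary masks. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|), from 0 (no overlap) to 1 (perfect)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, target), 3))  # 2*2/(3+3) = 0.667
```

For multi-class tumour segmentation, the score is computed separately for each sub-region (e.g. whole tumour, tumour core, enhancing tumour) and reported per class.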

Accomplishments that we're proud of

We trained two different segmentation models, a 3D CNN and a U-Net, separately on the BraTS dataset and successfully ensembled their predictions into a single segmentation map. The merged output achieved a high Dice score, improving on what either model produced alone.

What we learned

Automated segmentation of brain tumours from multimodal MR images is pivotal for the analysis and monitoring of disease progression. As gliomas are malignant and heterogeneous, efficient and accurate segmentation techniques are needed to successfully delineate tumours into intra-tumour classes. Deep learning algorithms outperform the more conventional, context-based computer vision approaches on semantic segmentation tasks. Extensively used for biomedical image segmentation, convolutional neural networks have significantly improved the state-of-the-art accuracy on brain tumour segmentation. In this project, we propose an ensemble of two segmentation networks, a 3D CNN and a U-Net, combined in a straightforward technique that results in more accurate predictions. Both models were trained separately on the BraTS-19 challenge dataset and evaluated to yield segmentation maps that differed considerably from each other in terms of segmented tumour sub-regions, and these maps were ensembled variably to achieve the final prediction.
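One simple way to ensemble two segmentation networks, sketched below, is to average their voxel-wise class-probability maps and take the argmax; the exact combination scheme and weighting are an assumption for illustration, not necessarily the variant we settled on:

```python
import numpy as np

def ensemble_predictions(probs_cnn, probs_unet, w=0.5):
    """Combine two models' voxel-wise class-probability maps by weighted
    averaging, then take the argmax to obtain the final label map.
    `w` weights the 3D CNN output; (1 - w) weights the U-Net output."""
    combined = w * probs_cnn + (1.0 - w) * probs_unet
    return combined.argmax(axis=0)   # label map of shape (D, H, W)

# Toy example: 2 classes over a 1x1x2 volume, shape (classes, D, H, W).
cnn  = np.array([[[[0.9, 0.2]]], [[[0.1, 0.8]]]])
unet = np.array([[[[0.6, 0.4]]], [[[0.4, 0.6]]]])
labels = ensemble_predictions(cnn, unet)
print(labels)  # [[[0 1]]]
```

Varying `w` (or using different weights per tumour sub-region) is one way the ensemble can be tuned so that each model contributes where it is strongest.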

Built With

  • cnn
  • unet