Recently, a close friend of one of our teammates was diagnosed with stage 2 Melanoma and was told that had he acted any later, it would have progressed to stage 3, where his chance of survival would have dropped to roughly 60%. When asked why he didn't consult a doctor earlier, he said that he had given himself the benefit of the doubt and chosen to believe it was just a normal mole; he didn't want to make a trip to the doctor's just for a "mole".
After doing some research on Melanoma, we found that in most cases, Melanoma is virtually indistinguishable from a natural blemish or mole. Furthermore, Melanoma is one of the most common cancers in the world, affecting more than 100,000 Americans every year. It is so often caught late because many people assume Melanoma tumors are just moles or blemishes and choose to ignore them. For this reason, we decided to build an intuitive app that can distinguish melanomatous tumors from benign moles (and it has 95.77% accuracy!).
By making this technology readily available on your phone, we hope people will be less inclined to ignore the symptoms of Melanoma.
What it does
MelaNoMo' is an app that uses advanced deep learning and computer vision techniques to distinguish malignant melanoma from benign moles, blemishes, and other skin defects. Because Melanoma is nearly impossible to identify by eye in most cases, the standard method for a proper diagnosis is a shave biopsy, which has a reported accuracy rate as high as 95% (source). Our app reported a test accuracy of 95.77%, which surpasses that of the shave biopsy, and returns results in a fraction of the time.
How we built it
We built our deep learning neural network using PyTorch. Our neural network consists of 30 layers, with nearly 2,000,000 parameters in total. The input layer is a standard 3x3 convolution layer, followed by 6 blocks that each consist of a depthwise separable 3x3 convolution layer, a batch normalization layer, a pointwise 1x1 convolution layer, and another batch normalization layer. The result is then fed through a global average pooling layer and finally through a linear layer that outputs one of two classes: "melanoma" or "not melanoma". We trained our model for 100 epochs, which took around 8 hours but left us with 95.77% accuracy on the test set. For our frontend, we used Java to create an Android app. The frontend and model are connected by a Flask server over HTTP requests.
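The architecture above can be sketched in PyTorch. The channel widths, strides, and input resolution below are our own illustrative assumptions (the write-up only specifies the layer types and a budget of nearly 2,000,000 parameters), so treat this as a sketch of the block structure rather than the exact model:

```python
import torch
import torch.nn as nn


class DepthwiseSeparableBlock(nn.Module):
    """One block: depthwise 3x3 conv -> BN -> pointwise 1x1 conv -> BN."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 2):
        super().__init__()
        # groups=in_ch makes this a depthwise convolution:
        # each input channel gets its own 3x3 spatial filter.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        # The pointwise 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


class MelanomaNet(nn.Module):
    """3x3 conv stem -> 6 depthwise-separable blocks -> global avg pool -> linear."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv2d(3, 32, kernel_size=3, padding=1, bias=False)
        # Assumed channel progression; not specified in the write-up.
        widths = [64, 128, 256, 512, 1024, 1024]
        blocks, in_ch = [], 32
        for out_ch in widths:
            blocks.append(DepthwiseSeparableBlock(in_ch, out_ch))
            in_ch = out_ch
        self.blocks = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(in_ch, num_classes)  # "melanoma" vs "not melanoma"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.blocks(self.stem(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)
```

With these assumed widths the sketch lands in the same rough parameter range (a bit under 2 million), which is what keeps the model small enough to train in a hackathon timeframe.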
Challenges we ran into
Because of the time constraints of this hackathon, one of our biggest challenges was developing a model that could actually distinguish cases that are impossible to tell apart by eye, and training it to high accuracy. Within the first 24 hours we built our first model; however, it kept running into errors such as GPU memory limits. Even after we fixed that model, it trained very slowly (70% accuracy after nearly an hour). To fix this, we did some research into depthwise separable convolution layers and implemented them in our model. With this change, we were able to train a large model within the time constraints we were given.
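The speed-up from depthwise separable convolutions comes from factoring one 3x3 convolution into a per-channel spatial filter plus a 1x1 channel mixer, which cuts the weight count dramatically. A quick back-of-the-envelope comparison (the channel counts here are illustrative, not taken from our model):

```python
def standard_conv_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    # A standard k x k convolution learns a k x k filter for every
    # (input channel, output channel) pair.
    return in_ch * out_ch * k * k


def depthwise_separable_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1x1 convolution that mixes the channels.
    return in_ch * k * k + in_ch * out_ch


std = standard_conv_params(256, 512)        # 1,179,648 weights
sep = depthwise_separable_params(256, 512)  #   133,376 weights
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

For a 3x3 layer the factorization saves close to 9x in weights (and a similar factor in multiply-adds), which is why swapping it in let us train a much larger model in the same wall-clock time.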
Accomplishments that we're proud of
We are quite proud of developing a model with higher reported accuracy than the most common modern method of diagnosing Melanoma. It can also return a diagnosis almost instantly, unlike the weeks needed to receive results from a shave biopsy.
What we learned
We learned a lot about the medical and healthcare fields, and how deep learning can be applied there, sometimes even outperforming the most common diagnostic methods. We've only scratched the surface, but in the future we plan to learn more about how deep learning can be applied to these fields, and how we can make a difference.
What's next for MelaNoMo'
For the next step, we plan on releasing MelaNoMo' on the app store so that it is available to everybody.