When melanoma is detected early, the 5-year survival rate is 99%. Yet more than 2 people in the U.S. die from skin cancer every hour[1]. How can early detection be so effective, yet skin cancer still kill so many people every year? Detection generally involves examining an image for the typical features of skin cancer, or comparing a series of images taken over time. But those features vary across the different types of skin cancer, and successive images are not always available, which makes it quite difficult for a human to diagnose accurately. Poor diagnostic precision adds an estimated $673 million per year to the cost of managing the disease[2]. This sounds like a perfect problem for machine learning!

What it does

The webapp was designed to be incredibly simple. You open the webpage, upload your image of the skin lesion (the image should be at least 64x64 px) and click "Get Prediction". Our machine learning model then predicts the type of skin lesion from seven categories: Melanoma; Melanocytic Nevus; Basal Cell Carcinoma; Actinic Keratosis / Bowen's Disease (Intraepithelial Carcinoma); Benign Keratosis (Solar Lentigo / Seborrheic Keratosis / Lichen Planus-Like Keratosis); Dermatofibroma; and Vascular Lesion. The webapp will also tell you how similar images were diagnosed, so you can decide whether you would like to follow up in a similar way! The diagnosis categories are: reflectance confocal microscopy; histopathology; lesion did not change during digital dermatoscopic follow-up over two years with at least three images; and consensus of at least three expert dermatologists from a single image.
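The upload step above can be sketched as a small preprocessing helper (a hedged illustration: the 64x64 minimum comes from the description above, but the function name, the use of Pillow, and resizing to exactly 64x64 are assumptions about the pipeline):

```python
from io import BytesIO
from PIL import Image

MIN_SIZE = 64          # smallest acceptable width/height, per the upload rule
TARGET_SIZE = (64, 64)  # assumed model input size

def prepare_upload(file_bytes):
    """Validate that an uploaded lesion image is at least 64x64 px,
    then resize it to the model's input size."""
    img = Image.open(BytesIO(file_bytes)).convert("RGB")
    if img.width < MIN_SIZE or img.height < MIN_SIZE:
        raise ValueError("image must be at least 64x64 px")
    return img.resize(TARGET_SIZE)
```

The resized image would then be converted to an array and passed to the model for prediction.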

How we built it

The model is a convolutional neural network (CNN) built using TensorFlow in Python. The ISIC 2018 Task 3 training set of over 13,000 images was used to train the models. Notably, before any training was done, the images were randomly manipulated (i.e. rotated, stretched, sheared, etc.) to make the model more robust, as most images were taken at the same hospital and thus look very similar. The two models (for skin lesion type and diagnosis method) were built differently; more information can be found in the "Project.ipynb" file in the GitHub repo! The webapp was designed in Figma and built in React, with a Flask backend that lets the webapp send user-uploaded images to the server, make a prediction with TensorFlow in Python, and send the result back to the frontend.
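The random manipulations can be sketched in plain NumPy (an illustration only; the actual project uses TensorFlow's image utilities, and the specific transforms and probabilities here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Randomly flip and rotate one square image so the model does not
    overfit to the consistent framing of a single hospital's photos."""
    if rng.random() < 0.5:
        image = np.fliplr(image)                    # horizontal flip
    image = np.rot90(image, k=int(rng.integers(0, 4)))  # 0/90/180/270 degrees
    return image
```

Applying a fresh random transform each epoch effectively enlarges the training set without collecting new images.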

Challenges we ran into

It was our first time using Flask, so we found it quite difficult to get the frontend to communicate with the backend. We managed to get this working after some help from the technical managers at the bootcamp! Our React skills were a bit rusty, so we spent a lot of time working out bugs and polishing the webapp, such as adding a cross symbol to delete images and hiding the empty image box when no image had been uploaded.
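The frontend-to-backend handoff we struggled with boils down to a Flask route that accepts the uploaded file (a minimal sketch: the route name, field name and response shape are assumptions, and the real backend runs the TensorFlow model instead of the placeholder):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    # The React frontend posts the uploaded image as multipart form data.
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    data = request.files["image"].read()
    # A real handler would preprocess `data` and call the model here;
    # we return a placeholder so the request/response shape is visible.
    return jsonify({"lesion_type": "placeholder", "bytes": len(data)})
```

The frontend then reads the JSON response and displays the predicted category.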

With the model, it was quite difficult to train an accurate classifier on this training set. A large majority of the images fall under the category "Melanocytic Nevus", so our model initially predicted only that category; similarly, the diagnosis labels were overwhelmingly "histopathology" or "lesion did not change during...". As a result, the skin lesion type model reached only 78% accuracy on the validation set, and the diagnosis model 88%. Although these numbers are relatively low for the medical industry, we are confident that with a better training set a machine learning model could perform much better.
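One common mitigation for this kind of class imbalance is to weight the loss by inverse class frequency, so rare categories count for more during training; Keras accepts such weights via the `class_weight` argument to `model.fit`. A sketch of the idea (not necessarily what our final models used):

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights: rare classes get larger weights so the
    loss cannot be minimized by always predicting the majority class."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for empty classes
    weights = len(labels) / (n_classes * counts)
    return {i: float(w) for i, w in enumerate(weights)}
```

The returned dict can be passed as `model.fit(..., class_weight=class_weights(y_train, 7))`.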

Accomplishments that we're proud of

We are very happy with how the webapp turned out and how the Figma design came together. The look is very simplistic and not very in your face, which was the goal when designing a webapp around the detection of a serious illness. We are also proud that we managed to get the frontend to communicate with the backend and that we were able to learn some Flask at the same time!

What we learned

We learnt how to use Flask to have the frontend of our webapp communicate with the backend. We were also able to develop our React skills further and produce a webapp that we were proud of.

What's next for Skin Lesion Detection

The next step will be to acquire a better training set that has more images for the other categories. This will allow for a better model to be created and for more accurate predictions in the future!


[1] The Skin Cancer Foundation. "Skin Cancer Facts & Statistics." The Skin Cancer Foundation, 13 Jan. 2021.

[2] Bhattacharya, Abhishek, et al. "Precision Diagnosis of Melanoma and Other Skin Lesions from Digital Images." AMIA Joint Summits on Translational Science Proceedings, vol. 2017, 26 Jul. 2017, pp. 220-226.
