Inspiration

Any form of cancer is dangerous, if not deadly. To see how serious the situation is, let us look at some statistics from the Skin Cancer Foundation:

  • One in five Americans will develop skin cancer by the age of 70.
  • Actinic keratosis is the most common precancer; it affects more than 58 million Americans.
  • The annual cost of treating skin cancers in the U.S. is estimated at $8.1 billion: about $4.8 billion for non-melanoma skin cancers and $3.3 billion for melanoma.
  • Skin cancer represents approximately 2 to 4 percent of all cancers in Asians.
  • Skin cancer represents 4 to 5 percent of all cancers in Hispanics.
  • Skin cancer represents 1 to 2 percent of all cancers in Blacks.

Analyzing cancer isn't an easy task; it requires intensive examination. More than 50% of lesions are confirmed through histopathology (histo); for the rest of the cases, the ground truth is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The shortage of experts (dermatologists) has always been a bottleneck. Can deep learning help here?

What it does

The skin cancer analyzer helps detect skin cancer at an early stage. The app captures frames directly from the phone's main camera, analyzes the lesion if one is present, and reports the three most probable classes out of seven types of skin cancer.
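
To make the top-3 ranking concrete, here is a minimal Python sketch of what that step looks like on the exported model. The file name, class order, and input shape are assumptions for illustration, not the app's actual (Android) code:

```python
import numpy as np
import tensorflow as tf

# Assumed HAM10000-style class order; the app's actual label order may differ.
CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

interpreter = tf.lite.Interpreter(model_path="skin_lesion.tflite")  # placeholder file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def top3(frame):
    """frame: float32 array matching the model input, e.g. (1, 224, 224, 3)."""
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]   # 7 softmax scores
    best = np.argsort(probs)[::-1][:3]                # indices of the top-3 classes
    return [(CLASSES[i], float(probs[i])) for i in best]
```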

How I built it

The dataset was taken from Kaggle Datasets. tf.keras in TensorFlow 2.0 was used to train the models, all of which were trained on Google Colab. Once the best model was obtained, it was converted to a TFLite model using the Keras model converter utility. The TFLite model was then shipped inside an Android app. The best part is that the TFLite model in the app uses the new GPU functionality provided by the Delegate API; the in-app inference time when using the GPU is ~30-45 ms.
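
For reference, the TF 2.0 conversion step looks roughly like this. The model and file names are placeholders, and the exact converter settings used here may have differed:

```python
import tensorflow as tf

# Load the best tf.keras model trained on Colab (placeholder file name).
model = tf.keras.models.load_model("best_model.h5")

# Convert to TFLite. The model is kept in float32 so the GPU delegate,
# which runs floating-point models, can execute it on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("skin_lesion.tflite", "wb") as f:
    f.write(tflite_model)
```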

Challenges I ran into

The first and biggest challenge was the huge class imbalance, and that too in a very small dataset. There were ~8K images in total across 7 classes, of which a single class, nv, made up ~63% of all samples. Both online and offline augmentation were considered, and it turned out that offline augmentation was needed for this dataset (see the sketch below).
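
As a rough illustration of the offline route, the sketch below pre-generates augmented copies of a minority class on disk before training. The transform ranges and paths are assumptions, not the exact recipe used here:

```python
import os
import tensorflow as tf

# Offline augmentation: write transformed copies of minority-class images
# to disk so the training set itself becomes more balanced.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
)

def augment_class(images, n_copies, out_dir):
    """images: 4D float array of one minority class; writes n_copies JPEGs."""
    os.makedirs(out_dir, exist_ok=True)
    flow = datagen.flow(images, batch_size=1,
                        save_to_dir=out_dir, save_format="jpeg")
    for _ in range(n_copies):
        next(flow)  # each step saves one augmented image to out_dir
```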

The second challenge was building a model that can run on Android. On mobile, designing lightweight architectures becomes crucial. I had the following to choose from: MobileNet, MobileNetV2, MnasNet, and ShuffleNet. I chose MobileNetV2 for this application, as it is much more lightweight than the others and runs fast with decent accuracy.
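
A minimal sketch of such a MobileNetV2-based classifier in tf.keras; the 224x224 input, dropout rate, and classification head are assumptions about the setup rather than the exact architecture trained here:

```python
import tensorflow as tf

# ImageNet-pretrained MobileNetV2 backbone with a small 7-class head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
    pooling="avg",
)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),  # one unit per lesion class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```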

Accomplishments that I'm proud of

I have successfully trained a model and turned it into an Android app that runs on both CPU and GPU. I am not an Android dev, so this was a new thing for me to learn.

What I learned

I learned a lot, especially about Android. I had designed architectures for mobile earlier as well, with TF Mobile, but TFLite is a much better framework by comparison. I have high hopes for the future of TFLite.

What's next for Skin Cancer Detection

This app is a first version that shows we can provide better healthcare with deep learning models on mobile. That said, there are implications to using such an app, and we have to be very clear about when to use it and when not to. We need to experiment on larger data, probably with a better architecture, for improved efficiency.

Built With

tensorflow, keras, tflite, android, google-colab
