By 2060, an estimated 47% of Americans will have nonwhite skin. Yet a CDC study conducted from 2009 to 2013 found that 47% of dermatologists felt their training was inadequate for diagnosing skin disease in skin of color. This gap may help explain why patients with skin of color in the US present with more advanced disease and have lower survival rates than their fair-skinned counterparts.
Under-representation of darker skin tones is not only an educational issue but also a patient safety issue. A UCSF dermatologist witnessed a delay in the diagnosis and treatment of a patient whose rash was ultimately identified as toxic epidermal necrolysis, because the 'characteristic' redness that dermatologists look for to make the diagnosis can be subtle in skin of color. In other cases, visual diagnosis was debated or delayed until a more invasive biopsy revealed a common disorder presenting in a non-classic way because of the patient's darker skin.
Recently, machine learning (ML) has been used to create programs capable of distinguishing between images of benign and malignant moles with accuracy similar to that of board-certified dermatologists. Although ML algorithms can augment human decision making and have the potential to enhance the delivery of quality healthcare, they must be trained, tested, and fine-tuned on a sufficient number of data points. Most existing ML programs are biased because they were trained largely on images of light skin.
To avoid this bias, we trained our deep learning algorithm on an equal number of images from each skin tone category. Integrated into a mobile app, the algorithm serves as a real-time clinical decision support tool to help diagnose 'uncharacteristic' presentations of skin lesions.
What it does
Our deep learning algorithm was trained on images from the HAM10000 dataset and imported into Android Studio so that it can run as a mobile app. After the user captures an image with the device camera, the photo is fed as input to the algorithm, which suggests a diagnosis. The user can then confirm or deny the app's suggestion, and the algorithm learns from this feedback to improve future predictions.
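The suggest-then-confirm loop above can be sketched in a few lines of Python. This is an illustrative sketch, not code from our actual app: the class labels are HAM10000's seven diagnosis codes, but the function and class names (`suggest_diagnosis`, `FeedbackLog`) are hypothetical, and a real deployment would run the model itself rather than take raw scores as input.

```python
import math

# HAM10000's seven diagnosis classes.
CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

def softmax(logits):
    """Convert raw model scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def suggest_diagnosis(logits):
    """Return the top class and its confidence for display in the app."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

class FeedbackLog:
    """Collects the user's confirm/deny responses so the model can later
    be fine-tuned on the corrected examples."""
    def __init__(self):
        self.entries = []

    def record(self, image_id, suggested, confirmed):
        self.entries.append(
            {"image": image_id, "suggested": suggested, "confirmed": confirmed}
        )
```

Keeping the feedback step separate from inference means the app can batch corrections and retrain periodically instead of updating the model on-device after every photo.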
How I built it
We used the HAM10000 dataset, a representative collection of dermatoscopic images of important skin pathologies, including melanoma and basal cell carcinoma. We then ran an algorithm to categorize these images by skin tone. Using an equal number of images from each skin tone category, to avoid introducing bias against darker skin tones, we trained the deep learning algorithm to classify the images into the appropriate diagnoses. Training was done in PyTorch.
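The two preprocessing steps above, categorizing images by skin tone and then balancing the classes, can be sketched as follows. One common way to bucket skin tone is the Individual Typology Angle (ITA) computed from an image's average CIELAB L* and b* values; the six-category thresholds below follow the standard ITA classification, but the helper names and the assumption that per-image L*/b* values are already available are ours, not part of HAM10000.

```python
import math
import random
from collections import defaultdict

def ita_degrees(L, b):
    """Individual Typology Angle from CIELAB L* and b* values:
    ITA = arctan((L* - 50) / b*) in degrees."""
    return math.degrees(math.atan2(L - 50.0, b))

def ita_category(ita):
    """Bucket an ITA value into one of the six standard skin tone groups."""
    if ita > 55:
        return "very_light"
    if ita > 41:
        return "light"
    if ita > 28:
        return "intermediate"
    if ita > 10:
        return "tan"
    if ita > -30:
        return "brown"
    return "dark"

def balance_by_tone(samples, seed=0):
    """samples: list of (image_id, tone_category) pairs.
    Keep an equal number of images from each tone category (the size of
    the smallest group) so no group dominates training."""
    groups = defaultdict(list)
    for image_id, tone in samples:
        groups[tone].append(image_id)
    n = min(len(ids) for ids in groups.values())
    rng = random.Random(seed)
    balanced = []
    for tone, ids in groups.items():
        balanced.extend((i, tone) for i in rng.sample(ids, n))
    return balanced
```

Downsampling to the smallest group is the simplest way to enforce equal counts; an alternative is to oversample the under-represented tones, which keeps more light-skin images at the cost of repeating darker-skin ones.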
Challenges I ran into
Our team had very little coding and app development experience. A major challenge was integrating the machine learning algorithm into Android Studio to emulate our mobile app.
Accomplishments that I'm proud of
Learning about deep learning algorithms, their applications, and their impact on the medical field, given that our team had little knowledge in this area before MedHacks.
What I learned
There is a lot of data out there to be used! We found that the HAM10000 dataset had been used in many other skin diagnosis applications, but not in the way we wanted to use it: to aid in the diagnosis of skin lesions in darker skin in particular.
What's next for DermAssist
With patient consent, we hope to feed the images captured by users of our mobile app into a database, creating an open-source image collection that is well balanced across skin tones and representative of our global society. These images can then be used for medical training and for building more advanced tools.