Inspiration

Try it here: https://skinguardai.com/

Back in 2018, Nicole McGuinness appeared in an episode of HGTV's Beachfront Bargain Hunt. But when the episode aired, it caught the attention of a doctor, Dr. Voight, who lived thousands of miles away. While he was watching the show, he noticed something abnormal on McGuinness' neck.

Given his experience in the field, he immediately suspected that Nicole had a tumor growing on her neck. He managed to reach out to her through the TV channel, and this saved her life: she was unaware that a skin cancer tumor was growing on her neck. At the time, Nicole was recovering from brain cancer.

Both Dariia and Alex (the project founders) have a few moles that the dermatologists they’ve seen recommended they keep an eye on. While both Dariia and Alex are young and for now only have to monitor their moles, Dariia’s mother and Alex’s mother and father have to get their moles checked every 6 months, since they have a few that dermatologists labelled as potentially dangerous. Alex’s mother even had 2 of her moles removed because, according to the dermatologists’ assessment, they were starting to transform into melanoma.

That’s when it dawned on us: since the incidence of potentially dangerous moles is so high in our families, we decided to look at how skin cancer affects the American population. According to the American Cancer Society, around 5.4 million people are diagnosed with skin cancer every year, and roughly 1 in 5 Americans will develop skin cancer at some point in their lifetime. Melanoma is one of the deadliest cancers and, if not caught early, can lead to catastrophic results.

This is how SkinGuardAI came into play. SkinGuardAI allows people to take pictures of their moles and receive real-time results on whether those moles are benign (safe) or malignant (dangerous) to their health. Moreover, after receiving the result, users can talk to a GPT-4-powered dermatologist that is there to answer any questions they might have regarding skin cancer and skin care in general.

What it does

SkinGuardAI allows users to upload or take a picture of their mole. The picture is then fed to our computer vision models (trained on just over 26k pictures), and users receive a result on whether the mole was classified as benign (safe) or malignant (dangerous). After receiving the test result, users have the opportunity to talk to an AI-powered dermatologist in real time (powered by a finetuned GPT-4 model) and receive answers to any skin care or skin cancer questions they might have.


How we built it

We expanded the initial dataset using augmentations and used the MobileNetV1 model for feature extraction. Input images were resized to 224×224, and the extractor produced a 1001-dimensional feature vector per image. On top of the feature extractor we experimented with 10 different classification heads ("Random Forest", "Neural Net", "AdaBoost", "Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process", "Decision Tree", "Naive Bayes", "QDA"). Our ablation studies showed that an MLP with 3 hidden layers of 200 units, trained for 100 epochs, worked best here.
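The pipeline above can be sketched as follows. This is an illustrative sketch, not our actual training code: random vectors stand in for the real MobileNetV1 embeddings, and scikit-learn’s `MLPClassifier` approximates the winning MLP head (3 hidden layers of 200 units):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for MobileNetV1 embeddings: in the real pipeline, each 224x224
# image is passed through MobileNetV1 to produce a 1001-dim feature vector.
X = rng.normal(size=(200, 1001))
y = rng.integers(0, 2, size=200)  # 0 = benign, 1 = malignant (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Winning head from our ablation study: an MLP with 3 hidden layers of 200 units.
head = MLPClassifier(hidden_layer_sizes=(200, 200, 200), max_iter=100,
                     random_state=0)
head.fit(X_tr, y_tr)
preds = head.predict(X_te)  # 0/1 verdict per mole image
```

In production, the same frozen feature extractor is reused at inference time, so only the small classification head had to be retrained during our experiments.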

Our goal is to maximize recall: if a person is sick, we want the highest possible confidence in detecting it. At the moment we reach a recall of 87%, but there is still plenty of room for improvement by expanding the dataset, fine-tuning the models, and experimenting with different architectures. We estimate we could reach a recall of around 96%.
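Recall here is the fraction of truly malignant moles that the model flags, which is the metric we care about most, since a missed malignant mole is far costlier than a false alarm. A quick illustration with toy labels:

```python
from sklearn.metrics import recall_score

# 1 = malignant, 0 = benign (toy labels, not real data).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]

# 3 of the 4 truly malignant moles were caught -> recall = 3/4 = 0.75.
recall = recall_score(y_true, y_pred)
print(recall)
```

Note that the one false positive (a benign mole flagged as malignant) does not lower recall; it would only affect precision.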

We’ve also integrated GPT-4 to let users consult on next steps based on the detected result. It helps them get information about their current state, potential risks, and what to do next. GPT-4 was prompted with: ‘You are a skin care expert who provides advice with skin care and skin cancer.’
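A minimal sketch of how the consultation chat is seeded with that system prompt and the classifier’s verdict. The `build_messages` helper and the message wording are illustrative assumptions; the resulting list would be sent to GPT-4 via the OpenAI chat completions API:

```python
SYSTEM_PROMPT = ("You are a skin care expert who provides advice "
                 "with skin care and skin cancer.")

def build_messages(result: str, question: str) -> list:
    """Seed the chat with the system prompt and the classifier's verdict."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"My mole was classified as {result}. {question}"},
    ]

messages = build_messages("malignant", "What should I do next?")
# `messages` would then be passed to the OpenAI chat completions endpoint.
```

Seeding the conversation with the verdict means the user does not have to re-explain their test result before asking follow-up questions.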

Challenges we ran into

We wanted to create something that would make a big impact.

It took some time to find proper data, explore and validate it, and apply preprocessing techniques. We then struggled to find a feature extractor that was robust enough and could demonstrate high class separability under PCA. We resolved this with MobileNet.

After we had trained the best model and evaluated the results, it was a challenge to convert it, deploy it, and make it work in the browser. We resolved this using TensorFlow.js.

Accomplishments that we're proud of

The biggest accomplishment we achieved was sending the website to our parents so they could try it on pictures of their moles. We did have a little scare when Dariia’s mother received a ‘Detected’ result, but it turned out that the picture contained her finger, which affected the model’s performance. She tried again with the finger out of frame, and everything was fine…phew!

Moreover, we’re proud of the development and training process that we created and executed on. We started working on this idea roughly 4 hours after the competition started, so we understood that being efficient and dividing the work according to each team member’s strengths would be crucial to having a complete submission. Dariia picked the computer-vision-related tasks, given her research and previous experience in the field, while Alex built the website that allows users to upload or take a picture, and finetuned GPT-4 so users can get relevant information.

As mentioned above, the project blended GPT-4 and our own computer vision model. We trained the computer vision model on openly available datasets, amounting to just over 26k pictures of benign and malignant moles, and we also had the chance to fine-tune GPT-4 on tips and advice on how to better take care of the moles on your body in order to avoid the associated risks.

Lastly, we’re proud to have a website that’s available online and that is not actually hard-coded (despite the fact that we’ll be needing a 16-hour sleep tonight, to make up for the lack of it during the hackathon).

What we learned

Fine-tuning GPT-4 was new for both of us, and applying our computer vision skills to a large issue has been an amazing educational experience.

What's next for SkinGuardAI

We have created short-term, medium-term, and long-term visions for our project.

Short term, we decided to start tackling skin cancer, helping people prevent it or detect it early, with a relatively niche idea. Computer vision can be used to identify a wide range of skin issues, and unlike the early signs of other types of cancer, the early signs of skin cancer are visible, so anomalies can be detected early. Therefore, our first step in this journey is to launch a mobile app on the App Store that allows people to carry out regular computer-vision- and GPT-4-powered checks of their skin. While this is being developed, we’ll be reaching out to a wide range of dermatologists and people we identify as our target users (i.e. people who have their moles checked regularly through dermatologist appointments, etc.) in order to create a clear GTM strategy, identify opportunities for partnerships and expansion, and drive growth to our platform as we work toward PMF.

Medium term, we have analyzed companies in the digital health space and their strategies after reaching a certain point in their growth. We noticed that companies like Hims, Calm (specifically the launch of Calm Health), Cedar, and many others started offering tele-health and treatment/supplement prescription services to their clients. While we believe this might extend into our long-term vision, we would like to set ourselves on the right path to accelerate growth. Our primary focus will be to transform SkinGuardAI into a tele-health service where users benefit from regular AI-powered skin tests. Those tests can then be monitored by our partner dermatologists, and either users or dermatologists can request a call if a potential health risk is detected. Our secondary focus will be to explore whether computer vision and GPT-4 can help dermatologists make more accurate predictions, as well as detect, classify, and offer treatment options for other skin problems that are widespread among the American population (i.e. psoriasis, seborrheic dermatitis, etc.). This would help SkinGuardAI offer high-quality services worldwide, across a broader set of skin care issues.

Long term, as mentioned above, we would like to double down on expanding our computer vision and GPT-4-powered models and tele-health services to more skin problems that affect people worldwide, not only in the US. On top of this, we’ll experiment with offering prescription services to our users and with creating medical-grade software that can aid doctors in detecting skin cancer in its early stages.
