Inspiration:
Skin cancer is the most prevalent cancer in the world. Quickly identifying skin cancer enables earlier intervention, which studies show dramatically increases survival rates for affected patients. Skinspection aims to accomplish this with little aid from a dermatologist or other expert by drawing on the wealth of labeled images available in existing databases.
What it does:
Skinspection uses a database of previously taken photos of skin conditions, each tagged with the specific condition it shows. The app uses Microsoft Azure's Cognitive Services (Custom Vision) to compare a user-submitted photo against the 10,000 pictures in the database. This comparison allows Skinspection to offer a best guess at the skin condition shown in the user's picture.
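Conceptually, the final step maps the classifier's per-class probabilities onto human-readable condition names. A minimal sketch of that step, assuming the seven diagnostic class codes used by the HAM10000 dataset; the probability values below are made up for illustration:

```python
# Map a classifier's output probabilities to a best-guess skin condition.
# The seven class codes come from the HAM10000 dataset; the probabilities
# below are invented example values, not real model output.

HAM10000_CLASSES = {
    "akiec": "Actinic keratoses / intraepithelial carcinoma",
    "bcc": "Basal cell carcinoma",
    "bkl": "Benign keratosis-like lesions",
    "df": "Dermatofibroma",
    "mel": "Melanoma",
    "nv": "Melanocytic nevi",
    "vasc": "Vascular lesions",
}

def best_guess(probabilities: dict[str, float]) -> tuple[str, float]:
    """Return the most likely condition name and its probability."""
    code = max(probabilities, key=probabilities.get)
    return HAM10000_CLASSES[code], probabilities[code]

# Hypothetical classifier output for one photo:
probs = {"akiec": 0.02, "bcc": 0.05, "bkl": 0.08, "df": 0.01,
         "mel": 0.70, "nv": 0.10, "vasc": 0.04}
condition, confidence = best_guess(probs)
print(condition, confidence)  # Melanoma 0.7
```

In the app this mapping would run on-device after the TensorFlow model produces its probability vector.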
How we built it:
We first used Custom Vision to train a model by uploading pictures from the HAM10000 dataset and systematically tagging them with their skin cancer classifications. We ran multiple training iterations and tests within the Azure service, then exported our Custom Vision model as a TensorFlow model. Finally, we attempted to integrate this model into an Android Studio project to create the app itself.
Challenges we ran into:
The biggest challenge we encountered was integrating the TensorFlow model into Android Studio. We found sample code online for running TensorFlow models and got it working inside the app layout we had already built. However, when we tried to swap in our own model, we found that the sample's graph.pb in the assets folder could not simply be replaced by the model.pb file that our model was exported as. Unfortunately, we were unable to overcome this roadblock during the hackathon, but with further assistance and research into TensorFlow's mechanisms, we believe it can be overcome.
Accomplishments that we're proud of:
We are proud of training our model on half of the HAM10000 dataset (5,000 images), which should greatly improve the accuracy of our product. Additionally, we made the Android Studio component take a picture and upload it to the app. Now, we just need to connect these two pieces through the TensorFlow model.
What we learned:
We learned that computer vision and artificial intelligence can help us tackle a prevalent issue in the medical world today.