Inspiration

Once it forms, melanoma can quickly spread to other parts of the body, making it by far the most lethal form of skin cancer. Therefore, early detection is essential. Sridhar and I are both busy college students, and we understand why people may forgo making an appointment to see a doctor when they notice a skin growth. An application that can facilitate early detection (but by NO MEANS replace a doctor) would help catch cancer before it spreads, thus saving lives.

What it does

The application takes a picture either from the camera roll or directly from the camera. It then analyzes the picture and displays the probability that the growth is benign or malignant (roughly as in the sketch below).
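
At a high level, the classification step inside the app looks like the following. This is a minimal sketch rather than our exact code: the class name SkinLesionClassifier is a placeholder for whatever Xcode generates from the trained .mlmodel file, and it assumes Apple's Vision framework is used to drive the Core ML model.

```swift
import UIKit
import Vision
import CoreML

// Minimal sketch: classify a UIImage and report label probabilities.
// "SkinLesionClassifier" is a hypothetical name for the class Xcode
// generates from the .mlmodel file added to the project.
func classify(_ image: UIImage, completion: @escaping ([String: Double]) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? SkinLesionClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion([:])
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Each observation pairs a label ("benign" / "malignant")
        // with the model's confidence for that label.
        let observations = (request.results as? [VNClassificationObservation]) ?? []
        let probabilities = Dictionary(uniqueKeysWithValues:
            observations.map { ($0.identifier, Double($0.confidence)) })
        completion(probabilities)
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    try? handler.perform([request])
}
```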

How we built it

First, we downloaded and sorted approximately 4,000 pictures of skin growths into two categories: benign and malignant. Then we used Apple's Create ML to train a Core ML image-classification model for use in an iPhone application. Lastly, we adapted source code that takes a picture, feeds it into a machine learning model, and displays the probability distribution the model generates. That code is free for us to use and modify, and it comes from the book Machine Learning by Tutorials from raywenderlich.com.
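
For reference, training an image classifier with Create ML takes only a few lines in a macOS playground. The sketch below is illustrative rather than our exact script: the paths are hypothetical, and it assumes the pictures are sorted into "benign" and "malignant" subfolders, which Create ML reads as the class labels.

```swift
import CreateML
import Foundation

// Hypothetical dataset layout:
//   train/benign/*.jpg, train/malignant/*.jpg  (and the same for test/)
let trainingDir = URL(fileURLWithPath: "/path/to/dataset/train")
let testingDir  = URL(fileURLWithPath: "/path/to/dataset/test")

// Create ML infers the two labels from the subfolder names.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir))

// Evaluate on images the model has never seen.
let metrics = classifier.evaluation(on: .labeledDirectories(at: testingDir))
print("Accuracy: \((1.0 - metrics.classificationError) * 100)%")

// Export a .mlmodel file to drop into the Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/SkinLesionClassifier.mlmodel"))
```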

Challenges we ran into

Sorting all of the pictures precisely was more work than we had initially planned for: it took hours to download them all and sort them into their respective groups. The pictures came from https://www.isic-archive.com/#!/topWithHeader/wideContentTop/main, the website of the International Skin Imaging Collaboration (ISIC).

The largest challenge we faced, however, was training the model. Neither of us is particularly experienced in training machine learning models, so we had to experiment. Our first model was only around 60% accurate. To try to increase the accuracy, we raised the number of training iterations from 10 to 25 and augmented the training set with duplicate photos that were cropped, rotated, blurred, flipped, given increased exposure, and given increased noise. With all of those augmentations enabled, however, the training time skyrocketed, and we were forced to cancel the run. In the end, we trained the model with 25 iterations and only the cropped and rotated duplicates, roughly as sketched below. This yielded 76% accuracy on data the model had never seen before, and the training this time took only about an hour.
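
To make the final setup concrete, it corresponds roughly to the Create ML parameters below. This is a hedged sketch, not our exact script, and the path is hypothetical.

```swift
import CreateML
import Foundation

// Sketch of the final training configuration: 25 iterations, with
// Create ML generating cropped and rotated copies of each photo.
var parameters = MLImageClassifier.ModelParameters()
parameters.maxIterations = 25
parameters.augmentationOptions = [.crop, .rotation]
// Our first augmentation attempt also enabled .blur, .flip, .exposure,
// and .noise, which made the training time skyrocket.

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: URL(fileURLWithPath: "/path/to/dataset/train")),
    parameters: parameters)
```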

Accomplishments that we're proud of

We trained our first major machine learning model to be 76% accurate, a large jump from its initial accuracy of 60%.

What we learned

We learned how to use Create ML to train a Core ML model, as well as some of the difficulties that come with training a machine learning model.

What's next for Aware

To take this to the next level, we first need to create a better model, which will require more data. Second, our model was trained on laboratory pictures with very good lighting and clarity; real users will not take pictures this ideal, so before deployment we will need to train on pictures taken by real users.

Built With

Core ML, Swift