Inspiration

This is what happens when two formerly pre-med students come up with a hackathon project. After seeing the demonstration of the AutoML Vision Learning API in the workshop, we thought about how it could be used in clinical applications. Rashes are among the most visually distinctive clinical conditions, so we decided to see if we could train the API to correctly distinguish between three common rashes - eczema, contact dermatitis, and drug rashes - as well as identify clear, non-problematic skin.

What it does

The program implements Google's AutoML Vision Learning API, trained on a dataset of multiple images of each of the three rashes, plus a few pictures of body parts with clear, normal skin, so that the model could learn the baseline skin condition to compare against the dermatological conditions it was testing for. Given an input picture, the API returns confidence scores for the top 5 labels in the dataset, indicating how consistent the picture is with each diagnosis. The label with the highest confidence score corresponds to the type of rash the picture most resembles, which could be used as a diagnostic aid.
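
In code, that flow is short: read the image bytes, send them to the trained model, and sort the returned labels by confidence. Here is a minimal sketch, assuming placeholder project and model IDs and the older automl_v1beta1 client that Google's sample code used at the time:

```python
# Minimal sketch of querying the trained model; the project and model IDs
# are placeholders, not our real deployment values.
from google.cloud import automl_v1beta1

def classify_rash(image_bytes, project_id="my-project", model_id="my-model-id"):
    client = automl_v1beta1.PredictionServiceClient()
    name = f"projects/{project_id}/locations/us-central1/models/{model_id}"
    payload = {"image": {"image_bytes": image_bytes}}
    response = client.predict(name, payload)
    # Each result carries a label (e.g. "eczema") and a confidence score;
    # sorting puts the most likely diagnosis first.
    results = sorted(response.payload,
                     key=lambda r: r.classification.score, reverse=True)
    return [(r.display_name, r.classification.score) for r in results]
```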

How we built it

We provided the AutoML Vision Learning API with a total of 370 images of the three different rashes we wanted the program to identify, as well as a few pictures of normal, clear skin for a baseline. Once the model was trained, Google provided Python source code for querying it, which we included in our project repository and then imported within the main.py file.
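
For reference, AutoML Vision ingests training data as a simple CSV mapping Cloud Storage image paths to labels; a sketch of what our final labeling scheme looked like (the bucket, file, and label names here are illustrative):

```
gs://rash-images/eczema_01.jpg,eczema
gs://rash-images/contact_derm_07.jpg,contact_dermatitis
gs://rash-images/drug_rash_12.jpg,drug_rash
gs://rash-images/forearm_clear.jpg,clear_skin
```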

Challenges we ran into

One of the first challenges was getting the AutoML API to learn the labels correctly, so that its picture-based diagnoses would be more accurate. Our first dataset applied the generic label 'rash' to all of our pictures, since we wanted the API to differentiate between rashes and normal skin. Without any pictures of normal, clear skin, however, the API learned that every image it was fed was a rash, which resulted in several false positives during testing. Once we added a few pictures of normal skin, the API was easily able to differentiate between clear skin and skin with rashes.

Accomplishments that we're proud of

Testing with various images, both found online and taken of ourselves, showed that the API was mostly accurate in identifying the types of rashes in the pictures it was provided. Some test pictures returned over 90% confidence in the correct diagnosis.

What we learned

Karen gained more experience implementing the back end of our web app using Python with the Flask framework, while Sydney and Ed got more exposure to the use of Python in web development, having had only C++ experience prior to this hackathon. We also gained experience using the Google Cloud Shell terminal to edit our files and commit changes to the project GitHub, and learned how to train the AutoML Vision Learning API to identify desired images and classifications, using labels and example images in our dataset.
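
The Flask side of the app stayed small; a rough sketch of the upload route, assuming the hypothetical classify_rash helper from the earlier sketch lives in a rash_model module:

```python
# Rough sketch of the Flask upload route; module and route names are
# illustrative, not our exact code.
from flask import Flask, request, jsonify
from rash_model import classify_rash  # hypothetical module holding the predict helper

app = Flask(__name__)

@app.route("/diagnose", methods=["POST"])
def diagnose():
    image = request.files["image"]        # picture uploaded through the web form
    scores = classify_rash(image.read())  # [(label, confidence), ...], best first
    best_label, confidence = scores[0]
    return jsonify({"diagnosis": best_label, "confidence": confidence})
```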

What's next for Table 2 - Revenge of the Itch!

Google Cloud Healthcare API - Google is working on something that will most likely far outstrip the work we've done here. We wanted to try to use it, but the Healthcare API is still in testing, and we would have needed to apply for permission to use it.

Google Cloud Vision API - Google's pre-trained image-recognition models, which can recognize general images. We found that Cloud Vision is sophisticated enough to recognize when images are medical in nature (giving them a 'medical' confidence score). We could have added more functionality to our app by also running input images through Cloud Vision, but this falls outside the intended use of the application, and the overhead of making a second machine learning request did not justify the extra functionality, especially when contending with the inconsistent wifi provided at the hackathon.
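
For a sense of what that second check might have looked like, Cloud Vision's label detection returns general-purpose labels with confidence scores; a minimal sketch of the idea we decided against (the 0.5 threshold is an arbitrary assumption):

```python
# Sketch of the Cloud Vision check we considered but did not ship:
# ask the pre-trained API for general labels and look for a medical one.
from google.cloud import vision

def looks_medical(image_bytes):
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    # Each label annotation has a description (e.g. "skin", "medical")
    # and a confidence score between 0 and 1.
    return any(label.description.lower() == "medical" and label.score > 0.5
               for label in response.label_annotations)
```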

Built With

Python, Flask, Google AutoML Vision Learning API, Google Cloud Shell