We want to improve and scale health-care delivery by leveraging advances in cognitive services. Specifically, we would like to use the available public datasets and outstanding AI/ML tools to create robust, scalable tools for research and education in the healthcare sector. We focused on clinical images of the retina (fundus photos) for this competition because they can be used to diagnose high blood pressure, diabetes, increased pressure in the brain, and infections such as endocarditis.
What it does
Our toolset classifies images of the retina to detect Diabetic Retinopathy, and uses object detection to locate the Optic Disc within an image. We then use a deliberately simple heuristic based on the Optic Disc's bounding box to determine whether an image shows the left eye or the right eye.
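The laterality heuristic can be sketched in a few lines. Our actual implementation is in C#; the Python below is an illustrative reimplementation, and it assumes the common fundus-photo convention that the optic disc appears on the same side of the image as the eye it belongs to (the disc sits nasal to the macula). If a camera produces mirrored images, the mapping would need to be flipped.

```python
def eye_laterality(box_left, box_width, image_width):
    """Guess eye laterality from the optic disc bounding box.

    box_left, box_width: the disc's detected bounding box in pixels,
    with x measured from the left edge of the image.

    Assumes the standard (non-mirrored) fundus convention where the
    disc appears on the same side as the eye; flip the mapping if
    your camera mirrors the image.
    """
    disc_center_x = box_left + box_width / 2.0
    return "right" if disc_center_x > image_width / 2.0 else "left"
```

Because the decision uses only the horizontal center of the detected box, it is robust to small errors in the box's exact size.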
How we built it
We used Azure Cognitive Services Custom Vision (customvision.ai) as our AI/ML compute engine for the classification and detection tasks. We built compact models so that we can run them on smartphones and desktops. We wrote a small C# utility to sort the images (about 4,000) so that they could be tagged easily, and uploaded them into a Custom Vision project. Although we had budgeted six hours for model building, we generated a compact general classification model for the five categories of Diabetic Retinopathy in an hour, and it turned out to be quite good! We determined the coordinates of the bounding box of the Optic Disc in 50 images and used those images to create our second Custom Vision project, an object-detection project.

We then wrote C#-based prediction engines using the SDK to call the prediction endpoints, classifying test images and detecting the Optic Disc in them. We validated the output of the C# programs by running the same tasks directly on the customvision.ai portal: the classification probabilities matched, and the detected Optic Disc regions matched, when we compared the C# output with the portal output.
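The classification step boils down to picking the highest-probability tag from the prediction endpoint's response. Our prediction engines are written in C# with the SDK; the Python sketch below shows the same top-class selection against a hand-written sample response in the JSON shape the Custom Vision prediction REST API returns (the tag names and probability values here are made up for illustration).

```python
import json

# A made-up response in the shape returned by the Custom Vision
# prediction endpoint: a "predictions" list of tagName/probability
# pairs. Values are illustrative, not real model output.
sample = '''{
  "predictions": [
    {"tagName": "No DR",         "probability": 0.07},
    {"tagName": "Mild",          "probability": 0.81},
    {"tagName": "Moderate",      "probability": 0.10},
    {"tagName": "Severe",        "probability": 0.01},
    {"tagName": "Proliferative", "probability": 0.01}
  ]
}'''

def top_prediction(response_json):
    """Return the (tagName, probability) pair with the highest probability."""
    preds = json.loads(response_json)["predictions"]
    best = max(preds, key=lambda p: p["probability"])
    return best["tagName"], best["probability"]
```

Comparing this top tag and its probability against the portal's output for the same image is exactly the validation check described above.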
Challenges we ran into
We explored several other tools before we picked customvision.ai. That exploration took a while, but we learned the customvision.ai interface very quickly, and then things moved rapidly. Priya came up with a good approach for calculating the bounding boxes for the Optic Disc training images from the pixel locations.
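Turning pixel locations into training regions is mostly a unit conversion: the Custom Vision training API expects object-detection regions as normalized (left, top, width, height) fractions in [0, 1] rather than pixels. A minimal Python sketch of that conversion (our tooling is C#, and this helper is our own illustration, not SDK code):

```python
def normalize_region(x_px, y_px, w_px, h_px, img_w, img_h):
    """Convert a pixel-space bounding box to the normalized
    (left, top, width, height) fractions in [0, 1] that the
    Custom Vision training API expects for detection regions.

    x_px, y_px: top-left corner of the box in pixels.
    w_px, h_px: box width and height in pixels.
    img_w, img_h: full image dimensions in pixels.
    """
    return (x_px / img_w, y_px / img_h, w_px / img_w, h_px / img_h)
```

Each of the 50 annotated Optic Disc boxes can be pushed through this conversion before being attached to its image at upload time.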
Accomplishments that we're proud of
Our team member Priscilla said: "As a physician in the Ophthalmology subspecialty, a tool that can help predict pathology efficiently and accurately in our images would be invaluable in healthcare. I was able to witness our team's program accurately identify the optic nerve and the laterality of the eye. Although these are just beginning steps toward identifying crucial features in an image, I can see this program's applicability and its potential to expand toward identifying a wider range of features and ultimately predicting more complex pathology." Although the application of ML to diagnose Diabetic Retinopathy (DR) is not new, prior research has focused on building complex models that cannot be used as inference engines on smartphones and desktops. Our trained DR classification model and Optic Disc detection model are designed to run on those devices, which gives these techniques a much larger reach than models with large compute requirements.
What we learned
It becomes easier to concentrate on the actual problem at hand if the complexities of model building are reduced. For example, we went beyond our initial goal of classification, and leveraged object detection to solve another important problem with the same dataset.
What's next for Eyedoc
We are building a small Universal Windows Platform (UWP) C# app that presents images stored on the hard drive as a gallery and lets the user explore the details of any selected image. The selected image can be classified using the prediction endpoint in the cloud. We are also planning to build an Android app, and to provide our tools to the research and education community. Priscilla offers the following perspective: "Fundus photos are readily available and easily obtainable with cellphones as well as the high-tech imaging devices in Ophthalmology offices. I see great potential for an application like this to aid in identifying pathology early in patients, in following response to treatments, and in research. Furthermore, its ability to integrate easily with a simple platform such as a cell phone or desktop makes this program accessible, which is crucial when developing a new healthcare application. I look forward to seeing where this project goes, and I plan to apply for grants at our University to expand it."