Inspiration

Diabetes is a devastating health condition: roughly 1 in 10 people in America have it, and a large share of cases go undiagnosed, in part because there is no objective tool for automated detection and survival prediction. Diabetic retinopathy (DR) is an eye condition that can cause vision loss and blindness in people who have diabetes, threatening both quality of daily life and long-term survival. It is crucial that DR be diagnosed both quickly and accurately; however, inconsistency between clinicians makes the current treatment of DR even poorer. Seeing the detrimental effects of diabetic retinopathy and the lack of consistency in clinician diagnosis, our team decided to create EyesAIght: a machine learning web app that uses a variety of state-of-the-art models to revolutionize the ophthalmologic field. With our scalable web app built in Flask, our models provide key insight into the predicted severity of DR within seconds, with objective, consistent results. Not only can our web app help facilitate clinical decision-making on a level playing field among ophthalmologists, but it can also give patients a stronger sense of certainty that their condition has been assessed accurately.

What it does

EyesAIght has 3 main features: Diabetic Retinopathy (DR) severity prediction, blindness time prediction, and a report summarizer. Each feature is presented as a record type: doctors add a record by selecting one of the 3 features, fill out the information that record requires, and retrieve an output within seconds. All records are conveniently displayed on a single page. Additionally, our app implements a login/logout authentication system, which lets each user easily access their own data.

For the DR severity prediction feature of our web app, we used a Convolutional Neural Network (CNN) to predict the stage of DR from an image of the patient’s retina. Our model classifies the retina on a scale of 0-4, with 0 being no DR and 4 being the most severe. Detecting DR is currently a very time-consuming task that requires a trained clinician to evaluate the retina, and diagnoses can be extremely inconsistent between ophthalmologists; our severity predictor ensures an objective and quick assessment of DR.

The next section of our web app predicts the chance of a patient going blind from DR over the course of 70 months. A doctor enters demographic and treatment information about the specific patient, and our Cox Proportional Hazards model, which is commonly used for survival analysis, then generates a graph of the percent chance of going blind over that period. The graph contains two curves: one for a patient treated with specific laser types, and another for an untreated patient. This helps the doctor and patient easily decide whether treatment is needed and how soon, without getting the retina scanned.

Lastly, we have a report summarizer that lets the doctor easily view a summary of the patient’s condition write-up; patients can likewise view their doctor’s report in a more concise, organized format. Using a BERT extractive NLP model, our app summarizes a medical text report down to the number of sentences the user wants. Displaying concise information is helpful for both doctors and patients: concisely organized reports are easier to read and understand, and doctors can miss key information when a medical text report is disorganized, so the summarizer helps ensure that nothing is lost in communication between doctor and patient.

How we built it

Our website was built with HTML, CSS, and JS on the front end and Flask on the back end. Additionally, we used Firebase for authentication and as the database to store records. For the models, we used a variety of machine learning libraries, including scikit-learn, scikit-survival, TensorFlow, Keras, NumPy, and pandas. All of our data was found on Kaggle.
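One common way to wire Firebase authentication into a Flask backend is to verify the client’s ID token with the firebase-admin SDK. The sketch below shows that approach; the service-account path and the Authorization header format are illustrative assumptions, not necessarily our exact setup.

```python
# Sketch of checking Firebase Authentication from Flask (one standard approach;
# the service-account path and header format are placeholders).
import firebase_admin
from firebase_admin import auth, credentials
from flask import Flask, request, abort

app = Flask(__name__)
firebase_admin.initialize_app(credentials.Certificate("serviceAccountKey.json"))

def current_user_id() -> str:
    """Verify the Firebase ID token sent by the logged-in client and return its uid."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    try:
        return auth.verify_id_token(token)["uid"]  # raises if invalid or expired
    except Exception:
        abort(401)  # not logged in
```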

To start on our DR severity predictor, we used a CNN to classify a patient’s DR condition on a scale from 0-4. Using a pre-trained ResNet, we are able to effectively classify a retina image into one of the 5 categories. We also included several preprocessing steps, such as cropping, filtering, and resizing the images to fit the model. For the backend integration, our Flask code saves the uploaded image locally, converts it to RGB values and resizes it, and then feeds it into the model; the resulting prediction is conveniently displayed.
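As a rough illustration of the model setup (the image size, frozen base, and single-layer head here are simplified stand-ins rather than our exact training configuration), a ResNet-based 5-class grader in Keras looks roughly like this:

```python
# Sketch of a ResNet50-based DR grader (classes 0-4); hyperparameters are illustrative
# and training on the Kaggle dataset is omitted.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

IMG_SIZE = 224  # retina images are cropped/filtered/resized to a square first

def build_model(num_classes: int = 5) -> tf.keras.Model:
    base = ResNet50(weights="imagenet", include_top=False,
                    input_shape=(IMG_SIZE, IMG_SIZE, 3), pooling="avg")
    base.trainable = False  # start by training only the classification head
    outputs = layers.Dense(num_classes, activation="softmax")(base.output)
    model = models.Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def predict_severity(model: tf.keras.Model, image_path: str) -> int:
    """Load the uploaded retina image as RGB, resize it, and return the 0-4 grade."""
    img = tf.keras.utils.load_img(image_path, target_size=(IMG_SIZE, IMG_SIZE))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    return int(np.argmax(model.predict(x), axis=1)[0])
```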

Our second feature is the blindness time predictor, which uses a censored survival-analysis model, Cox Proportional Hazards, to predict the chance of blindness over a period of 70 months. Once the doctor enters the necessary demographic and treatment information for the specific patient, our model generates a graph with two curves showing the difference between a treated and an untreated patient. Our Flask backend renders this image with Matplotlib’s FigureCanvas, which allows the plot to be created entirely on the backend.
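A simplified sketch of that step is shown below. The covariates and training rows are toy placeholders (the real model is fit on the Kaggle dataset), but it shows the general pattern: a Cox Proportional Hazards fit with scikit-survival, and the resulting curves rendered to a PNG with Matplotlib’s FigureCanvas inside a Flask route.

```python
# Sketch of the blindness-time curve: CoxPH via scikit-survival, plotted off-screen
# with FigureCanvas and returned as a PNG. Covariates and rows are toy placeholders.
import io
import numpy as np
from flask import Flask, Response
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

app = Flask(__name__)

# Columns: [age, treated_with_laser]; real covariates come from the Kaggle dataset.
X_train = np.array([
    [55, 1], [62, 1], [70, 1], [47, 1],
    [73, 0], [50, 0], [66, 0], [58, 0],
], dtype=float)
y_train = Surv.from_arrays(
    event=[True, False, True, False, True, True, False, True],
    time=[40, 70, 55, 66, 18, 30, 64, 25],  # months until blindness or censoring
)
cox = CoxPHSurvivalAnalysis(alpha=0.01).fit(X_train, y_train)  # small ridge for stability

@app.route("/blindness_curve/<int:age>")
def blindness_curve(age: int) -> Response:
    treated, untreated = cox.predict_survival_function(np.array([[age, 1.0], [age, 0.0]]))
    fig = Figure()
    ax = fig.subplots()
    # Chance of blindness = 1 - survival probability, shown as a percentage.
    for fn, label in [(treated, "with laser treatment"), (untreated, "untreated")]:
        ax.step(fn.x, 100 * (1 - fn.y), where="post", label=label)
    ax.set_xlim(0, 70)
    ax.set_xlabel("Months")
    ax.set_ylabel("Chance of blindness (%)")
    ax.legend()
    buf = io.BytesIO()
    FigureCanvas(fig).print_png(buf)  # render off-screen; no GUI backend needed
    return Response(buf.getvalue(), mimetype="image/png")
```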

Lastly, for the report summarizer, we used a BERT extractive NLP model that summarizes the inputted text. Although this is a pretrained model, it was contextualized specifically for diabetic retinopathy reports.
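As a sketch, assuming the bert-extractive-summarizer package is the wrapper around BERT (and that its num_sentences option is available in the installed release), the summarization call looks roughly like this; the example report text is invented for illustration:

```python
# Sketch of the report summarizer, assuming the bert-extractive-summarizer package
# (pip install bert-extractive-summarizer); the example report is invented.
from summarizer import Summarizer

model = Summarizer()  # loads a pretrained BERT under the hood

def summarize_report(report_text: str, num_sentences: int = 3) -> str:
    """Return the num_sentences most representative sentences of the report."""
    return model(report_text, num_sentences=num_sentences)

example = ("Fundus examination shows scattered microaneurysms and dot hemorrhages "
           "in both eyes. No neovascularization is present. Visual acuity is 20/30 "
           "in the right eye and 20/40 in the left. Recommend follow-up imaging in "
           "six months and improved glycemic control.")
print(summarize_report(example, num_sentences=2))
```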

All 3 features are available on the add report page, and when a doctor creates a report, the results are sent to the backend and stored in Firebase. These records are then all visible on the view reports page, which shows both the doctor’s inputs and the model’s outputs.
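Roughly, that flow can be sketched as two Flask routes backed by Cloud Firestore; the field names, endpoints, and template below are illustrative placeholders rather than our exact schema.

```python
# Illustrative sketch of the add/view record flow; field names, endpoints, and the
# template are placeholders rather than our exact schema.
import firebase_admin
from firebase_admin import credentials, firestore
from flask import Flask, request, render_template, redirect, url_for

app = Flask(__name__)
firebase_admin.initialize_app(credentials.Certificate("serviceAccountKey.json"))
db = firestore.client()

@app.route("/add_record", methods=["POST"])
def add_record():
    record = {
        "patient": request.form["patient"],
        "type": request.form["record_type"],     # severity / blindness / summary
        "input": request.form["input_data"],
        "output": request.form["model_output"],  # result returned by the chosen model
    }
    db.collection("records").add(record)         # persist the record to Firestore
    return redirect(url_for("view_records"))

@app.route("/view_records")
def view_records():
    # Show every stored record, with both the doctor's inputs and the model outputs.
    records = [doc.to_dict() for doc in db.collection("records").stream()]
    return render_template("view_records.html", records=records)
```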

Challenges we ran into

One of the main issues we had was data preprocessing: the data we found contained a large amount of inconsistency and missing values, which made it hard to integrate into the models. Additionally, using Firebase with Flask for the first time was a challenging but rewarding experience. Finally, generating the blindness time predictor’s image through the Flask backend was complicated.

Accomplishments we are proud of

We are proud of creating a fully functional app within the allotted time frame. We weren’t entirely sure if we would be able to finish the entire project because we were using new models and a new database (Firebase). Completing the project on time was an extremely rewarding experience for us.

What we learned

We learned a lot about machine learning and statistical analysis with censored models, as well as the integration of Flask and Firebase.

What’s next

In the future, we hope to deploy our project for anyone to use. We would also like to find more comprehensive datasets drawn from several sources to reduce bias. However, with the time and resources we had, we are extremely proud of what we created!

Built With

css, firebase, flask, html, javascript, keras, numpy, pandas, python, scikit-learn, scikit-survival, tensorflow

Updates


There were so many features that it was difficult to show them all in the video! For example, on the all records page, we have a patient filter option that lets you choose a patient and see only their results, as well as a patient summary card showing the results of the most recent record in each of the 3 categories.
