Inspiration

The COVID-19 pandemic has vastly changed the way we approach our day-to-day tasks, and doctors are no exception. Due to lockdowns and various restrictions, they were forced to move to a virtual environment. In Nova Scotia, where our team members are from, it was particularly difficult to schedule appointments due to longer-than-anticipated turnaround times, so non-urgent inquiries were pushed back and at times forgotten.

Dr. Screen was built as a supplementary tool to streamline family practitioners' work. Its goal is to reduce the time a practitioner needs to spend on less serious inquiries such as routine diagnoses and prescription renewals. By streamlining simpler inquiries that would otherwise require an in-person visit, a doctor can help a greater number of patients each day.

What it does

The main feature of our app is an integrated machine learning model that, given a list of symptoms, outputs a confidence probability for each disease. We trained the model on the 20 most common symptoms and the 40 most common diseases in our dataset. A patient provides a list of symptoms, the model processes it, and the most likely diseases are presented to the doctor. This gives the doctor an early picture of what the patient is experiencing, reducing uncertainty and supporting better-informed decisions.
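Before the model can process a symptom list, the free-form names a patient selects have to be turned into a fixed-length input vector. A minimal sketch of that encoding step, assuming a top-20 symptom vocabulary (the symptom names below are made up for illustration; the real app used the top symptoms from its dataset):

```python
# Hypothetical symptom vocabulary; the real model was trained on the
# top 20 symptoms from a Kaggle dataset, which we do not reproduce here.
KNOWN_SYMPTOMS = [
    "fever", "cough", "fatigue", "headache", "sore_throat",
    "nausea", "vomiting", "diarrhea", "chest_pain", "shortness_of_breath",
    "dizziness", "rash", "joint_pain", "muscle_ache", "chills",
    "runny_nose", "loss_of_taste", "abdominal_pain", "back_pain", "sneezing",
]

def encode_symptoms(reported: list[str]) -> list[int]:
    """Return a 0/1 vector aligned with KNOWN_SYMPTOMS; unknown names are ignored."""
    reported_set = {s.lower().strip() for s in reported}
    return [1 if s in reported_set else 0 for s in KNOWN_SYMPTOMS]

# A patient reporting two known symptoms and one the model doesn't know about.
vec = encode_symptoms(["Fever", "cough", "mystery_symptom"])
```

Encoding unknown symptoms as silently ignored (rather than raising an error) keeps the form forgiving, at the cost of the model never seeing information outside its vocabulary.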

The other main feature of our app is an appointment scheduler that lets patients book appointments in the doctor's available time slots. While scheduling, the patient is prompted to enter the symptoms they are experiencing and can leave an extra note for the doctor. On submission of the appointment form, we run the patient's symptoms through our TensorFlow machine learning model to obtain the five possible diagnoses with the highest confidence percentages. We then present the patient's symptoms, the possible diagnoses, and their confidence scores to the doctor, who can review this information prior to the appointment. The doctor thus heads into an appointment already having an idea of why the patient is there and what the likely diagnoses are, saving time that would otherwise be spent performing the full diagnosis during the appointment itself.
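The "top five diagnoses" step above is just a ranking over the model's output probabilities. A sketch of that selection, assuming a 40-class probability vector and placeholder disease labels (the real app used named diseases from its dataset):

```python
import numpy as np

# Placeholder labels; the real model covered 40 named diseases.
DISEASES = [f"disease_{i}" for i in range(40)]

def top_k_diagnoses(probs, k=5):
    """Return the k (label, confidence) pairs with the highest probability."""
    probs = np.asarray(probs, dtype=float)
    idx = np.argsort(probs)[::-1][:k]  # indices sorted by descending confidence
    return [(DISEASES[i], float(probs[i])) for i in idx]

# Example: a probability vector with one clear winner and two runners-up.
probs = np.zeros(40)
probs[7], probs[2], probs[5] = 0.6, 0.3, 0.05
top = top_k_diagnoses(probs)
```

Returning (label, confidence) pairs rather than bare labels is what lets the doctor-facing view show how certain the model is about each candidate diagnosis.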

Another key feature of our app is the ability for a patient to request prescription refills without scheduling an appointment with their doctor. To keep this safe, the doctor maintains each patient's prescriptions along with a history of past refills. The patient can request a refill at the press of a button, which notifies the doctor. Upon being notified, the doctor can approve or decline the request after reviewing the patient's last refill, which records details such as the dose given and the duration of the refill. This lets the doctor make an informed, safe decision when granting prescription refills.

Lastly, our app allows doctors to schedule follow-ups with their patients to stay up to date on their condition. Follow-ups can be recurring or one-time, and give the doctor a sense of the patient's progress after an appointment. The doctor can also ask the patient to upload pictures to the app for a visual check when the patient's condition calls for it. If the patient is not improving, the doctor can notify them to book another appointment.

How we built it

We built Dr. Screen using React, Express, Node.js, and Firebase. We chose Firebase for our database because of prior team experience, its easy-to-integrate user authentication, and the ability to deploy individual functions for specific purposes such as image uploads to the cloud. Our machine learning model for predicting possible illnesses from patient symptoms was built with TensorFlow. We also used a number of npm packages, such as Material UI for React, for specific features of the application.

For our model we used an MLP neural network trained on a Kaggle dataset linking symptoms to diseases. We treat this as a multiclass classification task, so the model outputs a probability for each disease. By simply selecting the highest-probability disease, we achieved 87.4% accuracy on a held-out test set.
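The shape of that computation can be sketched in plain NumPy: a hidden layer followed by a softmax output that turns 20 symptom inputs into 40 disease probabilities. The weights below are random and the layer sizes illustrative; the actual model was built and trained with TensorFlow, which handles all of this internally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 20 symptom inputs -> 64 hidden units -> 40 disease classes.
# Random weights stand in for the trained TensorFlow parameters.
W1, b1 = rng.normal(size=(20, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 40)) * 0.1, np.zeros(40)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return softmax(h @ W2 + b2)       # one probability per disease

x = np.zeros(20)
x[[0, 3]] = 1.0  # a patient reporting two symptoms
probs = forward(x)
```

Because the softmax output sums to one across all 40 classes, the values double directly as the confidence percentages shown to the doctor, and ranking them yields the top-five diagnoses.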

Challenges we ran into

The team faced various challenges over the course of the hack. The biggest was integrating the TensorFlow machine learning model into the web app, which none of our team members had done before; formatting the model's input and output was a related challenge. We also had to coordinate multiple team members working in the same codebase, where resolving merge conflicts was sometimes difficult. We solved this with the common Git workflow of a develop branch plus pull requests, which gave us a clear history of changes to the codebase.

Accomplishments that we're proud of

The team is proud to have built a complete full-stack web application whose components communicate cleanly. We are also proud of the machine learning model we were able to build, train, and deploy, and of the accuracy we obtained from it. We believe this model could seriously aid the work done by practitioners.

What we learned

Our tech stack was hand-picked based on each team member's experience. This meant not all of us were familiar with every part of the stack, React being the main one. However, thanks to our team's web development experience in other stacks, we were able to offer ideas, rather than direct fixes, to move past issues. We also had to learn a lot on the fly to connect our TensorFlow machine learning model to the Node.js backend, writing substantial logic to format the model's input and to turn its output array into something we could display to users on the front end. Finally, we learned how to use GitHub efficiently to improve our developer productivity: features such as branches and pull requests greatly increased our ability to collaborate as a team.

What's next for Dr. Screen

The next steps for Dr. Screen include a verification mechanism for doctors joining the platform. Currently we manually authenticate doctors who want to use the platform to verify that they are licensed to practice; ideally, we would automate this through a document submission and approval process. The team is also confident we can add further features to streamline appointments that don't need to happen in person. One such feature is a patient request for blood work: like prescription refills, this is something a patient currently has to visit the doctor for just to get approval, and it could be handled through Dr. Screen.
