Inspiration
Looking through the challenges provided, we quickly caught onto the idea of patient safety - and after the workshop PRHI presented, where they mentioned that one piece of software many hospitals still use was made in 1970 - we decided to try to provide a better experience for doctors, especially GPs, who have been overrun since the COVID-19 pandemic. Considering the five categories set out by PRHI, we were initially drawn to medication issues, planning out a small reminder/checklist system for doctors or nurses, but we realized that this market was saturated - and relatively unhelpful. Instead, we focused on misdiagnoses, aiming to prevent patients from being harmed by going without proper help or being prescribed an unnecessary treatment.
What it does
While AI software is highly advanced, we ultimately chose not to have AI make the final decision - that should always rest with professionals in the field. As such, we realized we could use AI as a "filter" of sorts, checking diagnoses against the patient's symptoms and giving a confidence value to see if a second or third opinion would be valuable. This also opens up another aspect to the app - it can track how dangerous a possible misdiagnosis could be. If, for example, a patient was suffering from a cough, tight chest, and fever, a doctor could diagnose them with the common cold. However, the presence of a tight chest and fever could indicate something much more dangerous, which our app would surface as a "severity" score. Finally, if the confidence is below a certain threshold or the possible severity is above a certain threshold, the case is sent off to other professionals in the network for a quick peer review.
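The routing rule described above can be sketched in a few lines. This is a minimal illustration only: the threshold values and field names are assumptions, not the app's actual configuration.

```typescript
// Hypothetical shape of the AI's review of a diagnosis.
interface DiagnosisReview {
  confidence: number; // 0-1: how well the diagnosis matches the symptoms
  severity: number;   // 0-10: potential harm if the diagnosis is wrong
}

// Assumed thresholds for illustration purposes only.
const CONFIDENCE_FLOOR = 0.7;
const SEVERITY_CEILING = 6;

function needsPeerReview(review: DiagnosisReview): boolean {
  // Low confidence OR high potential severity routes the case
  // to other professionals in the network for a quick peer review.
  return review.confidence < CONFIDENCE_FLOOR || review.severity > SEVERITY_CEILING;
}

// A confident cold diagnosis with chest tightness and fever present
// might still score high on severity and get flagged:
console.log(needsPeerReview({ confidence: 0.9, severity: 8 })); // true
```

Either condition alone is enough to flag the case, so a confidently wrong but dangerous diagnosis is not silently passed through.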
How we built it
The web app is built on SvelteKit. We used Svelte to construct the full front end of the application, setting up a user login system with Firebase, the UI to input a patient and diagnosis, and the ability to upload files that the system scans with Tesseract. The back end predominantly works off OpenAI's API, using function calls to produce JSON that the front end can pick up and read. We also used PostgreSQL and pgvector to perform vector similarity search, and drew on other disease APIs to retrieve information about diseases and their corresponding symptoms.
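OpenAI function calls return their arguments as a JSON string, which the front end has to parse back into a typed object. A minimal sketch of that hand-off, assuming an illustrative schema (the field names here are not the project's actual ones):

```typescript
// Assumed shape of the structured output the back end asks the model for.
interface ReviewResult {
  confidence: number;
  severity: number;
  clarifyingQuestions: string[];
}

// Parse the function-call arguments string into the object the UI reads.
function parseReview(functionCallArguments: string): ReviewResult {
  const raw = JSON.parse(functionCallArguments);
  return {
    confidence: Number(raw.confidence),
    severity: Number(raw.severity),
    // Tolerate a missing field rather than crash the UI.
    clarifyingQuestions: Array.isArray(raw.clarifyingQuestions)
      ? raw.clarifyingQuestions
      : [],
  };
}

const sample =
  '{"confidence": 0.62, "severity": 7, "clarifyingQuestions": ["Any shortness of breath?"]}';
console.log(parseReview(sample).severity); // 7
```

Using function calling rather than free-form chat output is what makes the response machine-readable end to end.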
Challenges we ran into
First was the idea itself - how would we make it into something that could feasibly be helpful? After all, an AI chatbot can only do so much in judging an actual doctor's diagnosis, and studies show they generally only match a human doctor's average correctness. So, what benefit would this program even have? This is why we decided to have our system work as a review system only - decisions and judgments are still made only by human doctors. This allows our app to provide as much input as possible without ever presenting something intended to be taken as concrete data. Instead, every output our app gives is a recommendation - confidence values, examples of clarifying questions, and so on. Another challenge was integrating the front end and back end, as the majority of our group had primarily worked in React before, and adapting to a completely new framework caused some intermittent issues. Working with OpenAI was also a bit of a challenge: while we had worked with the API before, we had never used the function call system, which took a bit of deciphering.
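The function call system that took some deciphering boils down to declaring a schema the model must fill in. A hedged sketch of such a declaration, in the general shape OpenAI's function-calling API expects; the name and parameters here are illustrative, not the project's actual definition:

```typescript
// Illustrative function definition passed to the OpenAI API so the model
// returns structured arguments instead of free-form chat text.
const reviewDiagnosisTool = {
  type: "function",
  function: {
    name: "review_diagnosis",
    description:
      "Score a doctor's diagnosis against the recorded patient symptoms.",
    parameters: {
      type: "object",
      properties: {
        confidence: {
          type: "number",
          description: "0-1 agreement between symptoms and diagnosis",
        },
        severity: {
          type: "number",
          description: "0-10 potential harm if the diagnosis is wrong",
        },
        clarifyingQuestions: {
          type: "array",
          items: { type: "string" },
          description: "Questions a doctor could ask to firm up the diagnosis",
        },
      },
      required: ["confidence", "severity"],
    },
  },
};

console.log(reviewDiagnosisTool.function.name);
```

The payoff is that the model's reply arrives as arguments matching this schema, which the back end can hand straight to the front end as JSON.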
Accomplishments that we're proud of
We are particularly proud of the UI, which we took special care to make as neat and understandable as possible. Since this app is relatively novel, we couldn't rely on the familiarity aspect of UX/UI design, so instead we focused on other aspects we considered important: keeping the layout simple without too much negative space, and minimizing the number of "pointless clicks."
What we learned
As we mentioned, the UI took a good bit of effort, and the consideration we put into the design brought each of us out of our comfort zones a bit. The extra emphasis on UI design meant we all had to focus on it, even if only for a small part of each of our portions. Prompt engineering was also a big part of the learning process: since we specifically didn't want to make a chatbot, the prompts had to be much more sophisticated to produce the complex, structured output we were hoping for.
What's next for Dr. Squared
We hope Dr. Squared highlights the potential of AI beyond chatbots, as it feels like many fields turn to chatbots to shoehorn in AI where it doesn't belong. AI has a lot of potential in analysis - and while it generally shouldn't be relied on as a definitive source, this project showcases a way it can be used analytically without taking away the agency and expertise of actual humans.
As far as future features go, Dr. Squared envisions a more collaborative diagnostic landscape. We aim to implement a community system, enabling verified doctors to review and contribute to cases, fostering a collective medical intelligence. We're also exploring the integration of computer vision to extend diagnostic capabilities to radiological scans, adding a layer of visual data analysis. Crucially, we're committed to empowering patients by involving them directly in the diagnostic process, allowing them to verify symptoms and become active participants in their healthcare journey. These advancements are not just features; they're a step toward a more interconnected and transparent medical ecosystem where technology and human expertise work together for patient safety and accurate diagnosis.