Inspiration

The average Canadian ER visit is 4.5 hours. A chunk of that is not treatment. It is a nurse asking when your symptoms started, what medications you take, whether you have allergies. The same questions, every patient.

We are two students at Waterloo. We did not come into this with a healthcare background. We came in frustrated, the way most Canadians are frustrated with the healthcare system, and we started asking what part of the problem was actually solvable. Intake kept coming up. It is repetitive, it is structured, and it happens before the doctor even enters the room.

What it does

Before you see a doctor, you open Dr. Claudius and have a conversation with the AI. It asks about your symptoms, how long you have had them, what makes them better or worse, your medications, your history. One or two questions at a time. If you mention chest pain, the next question is about left arm numbness, not "rate your pain from 1 to 10."

When you are done, the doctor gets a structured brief. Triage level at the top, color coded. Red flags in their own box so nothing critical gets buried. A symptom timeline. A section with the patient's actual words in it, because how someone describes their pain matters. And a suggested focus for the physician.

The brief ends with: "For physician review only, not a diagnosis." That is not a disclaimer we stuck on at the end. It is what the tool is for.
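The brief described above could be modeled roughly like this. This is a sketch, not our actual schema; every field name here is our illustration:

```typescript
// Hypothetical shape of the doctor brief -- field names are illustrative.
type TriageLevel = "red" | "yellow" | "green";

interface DoctorBrief {
  triage: TriageLevel;                          // color coded at the top
  redFlags: string[];                           // boxed so nothing critical gets buried
  timeline: { onset: string; event: string }[]; // symptom timeline
  patientWords: string[];                       // the patient's actual phrasing, verbatim
  suggestedFocus: string;                       // suggested focus for the physician
  disclaimer: "For physician review only, not a diagnosis.";
}

// Map a triage level to a Tailwind background class for the header bar.
function triageClass(level: TriageLevel): string {
  return { red: "bg-red-600", yellow: "bg-yellow-500", green: "bg-green-600" }[level];
}
```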

How we built it

React, Vite, Tailwind, Claude API. We split the work across three parts: patient chat UI, AI logic and prompting, doctor brief panel. Two nights. One ended around 4am.

The system prompt took most of the actual work. Getting the model to ask useful follow-ups instead of marching through a checklist was harder than expected. The conversation needed to feel like talking to a person, not filling out a form. The brief generation prompt went through multiple rounds too. A doctor is not going to parse a wall of text before walking into a room, so the layout and hierarchy of the brief mattered a lot.
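To give a feel for the shape of that work, here is a sketch of the kind of system prompt and request we mean. The wording is illustrative, not our actual prompt, and the model name is an assumption; it assumes the Anthropic Messages API, where the system prompt is a top-level parameter rather than a message:

```typescript
// Illustrative intake prompt -- the real one went through many more rounds.
const INTAKE_SYSTEM_PROMPT = [
  "You are a pre-visit intake assistant. You do not diagnose.",
  "Ask one or two questions at a time, never a checklist.",
  "Choose each follow-up based on the last answer: if the patient mentions",
  "chest pain, ask next about radiation, numbness, and shortness of breath.",
  "When you have enough detail, say so and stop asking questions.",
].join("\n");

type ChatTurn = { role: "user" | "assistant"; content: string };

// Build the request body sent to the Messages API after each patient reply.
function buildIntakeRequest(history: ChatTurn[]) {
  return {
    model: "claude-sonnet-4-5", // assumed model name
    max_tokens: 512,
    system: INTAKE_SYSTEM_PROMPT,
    messages: history,
  };
}
```

The point of re-sending the full history each turn is that the model picks its next question from everything said so far, which is what makes the conversation adaptive instead of a checklist.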

Challenges we ran into

Prompt engineering was the main one, and we underestimated it going in. The gap between a chatbot that collects answers and one that asks the right next question is bigger than it looks. We also have no clinical validation, and we want to be honest about that. The decisions we made on what to include in the brief, how to flag red flags, what triage criteria to use, those came from our own research, not from practicing clinicians. That is a real limitation of what we built.

We also ran out of API credits an hour before the submission deadline.

Accomplishments that we're proud of

The doctor brief looks like something a real clinic could actually use. Getting the red flag detection to work properly was the thing we were most unsure about going in. When a patient mentions symptoms that could indicate something serious, the brief flags it at the top in a way that is hard to miss. That took a lot of prompt work to get right and we are glad it does what we intended.
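Making the flags hard to miss is partly a rendering decision: whatever order the model emits sections in, red flags get pinned to the top of the panel. A minimal sketch of that ordering step, with section names of our own invention:

```typescript
type Section = { id: string; body: string };

// Pin the red-flags section first; keep the remaining sections in model order.
function orderSections(sections: Section[]): Section[] {
  const flags = sections.filter((s) => s.id === "redFlags");
  const rest = sections.filter((s) => s.id !== "redFlags");
  return [...flags, ...rest];
}
```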

The chat also feels like a conversation rather than a form. That was harder to achieve than we expected and it is the part we are most proud of technically.

What we learned

How much a real intake depends on judgment that is hard to put into a prompt. And how quickly something that works in testing breaks when a patient answers in a way you did not anticipate. We also learned that designing for doctors and designing for patients at the same time pulls you in two different directions. The patient needs warmth. The doctor needs speed and clarity. Getting both right in one app is its own design problem.

What's next for Dr. Claudius

The brief right now is a panel you look at. In a real clinic it would need to connect to a scheduling system so it lands somewhere the doctor actually checks before walking in. Voice input would help patients who find typing difficult. And before making any claims about time saved, we need to sit with a real clinic and test it properly.

We showed an early version to someone in clinic administration. She asked if they could use it. We are still figuring out what that would take.

Built With

React, Vite, Tailwind, Claude API
