Inspiration

An Alzheimer's diagnosis can take months or years as patients bounce between specialists. Critical observations get lost along the way, especially since specialists often receive just a total score and vague notes, so they end up starting from scratch. RapidRef fixes this by giving specialists the context they need upfront in a comprehensive referral report.

What it does

RapidRef helps primary care physicians administer cognitive assessments like the MoCA test. The platform walks clinicians through each question, tracks scores, and captures clinical observations. Using AI, it structures notes and pulls out patterns that usually get lost on paper: whether a patient needed memory cues, hesitated on specific tasks, took a long time to answer, or showed visible signs of difficulty. The PCP gets a centralized place to see test results, all their notes, and suggested recommendations for next steps. Everything compiles into a PDF report for specialists. The idea is simple: patients start at step 3 instead of repeating step 1 with every new doctor.

How we built it

Frontend: React.js, TypeScript, Tailwind CSS, ShadCN
Backend: Node.js, Express.js, Gemini API
Infrastructure: Firebase (Authentication, Firestore), Vercel, Railway

Challenges we ran into

The main challenge was keeping the assessment clinically valid while digitizing it. The MoCA has specific scoring rules and administration protocols, so any deviation can throw off the results. We spent a lot of time reading the official scoring documentation and iterating until the digital version matched the paper test exactly. Memory recall was trickier than expected: the scoring looks simple (1 point per word), but capturing how words were recalled (freely vs. with cues) required building a specialized question type to preserve the clinical nuance.
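As a rough illustration of that specialized question type, the sketch below models each recalled word together with how it was recalled. It assumes the MoCA convention that only uncued (free) recall earns points, while cued responses are still recorded for the specialist's report; all names here (`RecallMethod`, `scoreRecall`, etc.) are hypothetical, not RapidRef's actual schema.

```typescript
// Illustrative recall question type: score stays simple (1 point per
// freely recalled word), but the recall method is preserved so the
// referral report can note which words needed cues.
type RecallMethod = "free" | "category-cue" | "multiple-choice" | "not-recalled";

interface RecallResponse {
  word: string;
  method: RecallMethod;
}

// Assumes the MoCA convention: only free recall is scored, but
// cue-assisted recall is clinically meaningful and gets reported.
function scoreRecall(responses: RecallResponse[]): { score: number; cuedWords: string[] } {
  const score = responses.filter((r) => r.method === "free").length;
  const cuedWords = responses
    .filter((r) => r.method === "category-cue" || r.method === "multiple-choice")
    .map((r) => r.word);
  return { score, cuedWords };
}
```

Keeping the method alongside each word is what lets the AI summary later distinguish "recalled 3/5 freely" from "recalled 3/5 only with cues", which read very differently to a specialist.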

Accomplishments that we're proud of

RapidRef is a first step toward digitizing the Alzheimer's diagnostic workflow. It digitizes the full MoCA assessment with scoring validated against the official protocols, and generates structured PDF reports with AI-powered pattern analysis and recommendations in seconds. It has not been tested in a clinical setting, but the architecture is designed for real-world adoption. It shows that clinical tools can speed things up without replacing clinician judgment or adding complex technology requirements. In the end, patients spend less time in diagnostic limbo and get to treatment faster.

What we learned

We struggled early on with how to model test data in a way that was both valid and flexible. The solution was a generic TestDefinition schema that separates test structure from content. This allows the same engine to run MoCA, Mini-Cog, or SLUMS without changing code. For AI insights, we learned that prompt engineering matters a lot. Vague prompts give vague outputs. Specifying exactly what specialists care about made the difference between generic summaries and actually useful observations.
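To make the structure-vs-content separation concrete, here is a minimal sketch of what a generic test-definition schema could look like. The interface and field names (`TestDefinition`, `cutoffScore`, `maxPoints`, etc.) are illustrative assumptions, not RapidRef's actual code; the point is that the test is pure data, so one engine can walk any assessment.

```typescript
// Hypothetical generic schema: the engine only knows about sections,
// questions, and point values. Adding Mini-Cog or SLUMS means adding
// a new definition object, not new code.
type QuestionType = "recall" | "naming" | "orientation" | "fluency" | "drawing";

interface QuestionDef {
  id: string;
  type: QuestionType;   // drives which UI component renders the question
  prompt: string;
  maxPoints: number;
}

interface SectionDef {
  title: string;
  questions: QuestionDef[];
}

interface TestDefinition {
  name: string;         // e.g. "MoCA"
  cutoffScore: number;  // scores at or below this suggest follow-up
  sections: SectionDef[];
}

// Engine-side logic works on any definition.
function maxScore(def: TestDefinition): number {
  return def.sections
    .flatMap((s) => s.questions)
    .reduce((sum, q) => sum + q.maxPoints, 0);
}

// Abridged example definition (not the full MoCA).
const mocaSample: TestDefinition = {
  name: "MoCA",
  cutoffScore: 25,
  sections: [
    {
      title: "Delayed Recall",
      questions: [{ id: "recall-1", type: "recall", prompt: "Recall the five words", maxPoints: 5 }],
    },
    {
      title: "Orientation",
      questions: [{ id: "orient-1", type: "orientation", prompt: "Date, month, year, day, place, city", maxPoints: 6 }],
    },
  ],
};
```

In this design, the `type` field is the only coupling between data and code: each question type maps to a renderer and a scorer, and everything else (prompts, ordering, point values) lives in the definition.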

What's next for RapidRef

We want to add support for more cognitive assessments (Mini-Cog, SLUMS, MMSE) and track patient data across visits over time. With enough data, machine learning could help predict diagnosis timelines, giving practitioners a better sense of how to prioritize follow-up care. We also want to support uploading paper tests to extract insights from drawing tasks using multi-modal analysis and to store test snapshots on the platform for future reference.
