Inspiration
We never had formal education on how to navigate healthcare systems, insurance options, or medical billing. Like many people, we learned during stressful moments: deciding where to go for care, figuring out what insurance would cover, and then dealing with confusing bills afterward.
That gap inspired Should I See A Doctor?. The goal is to help people make safer, smarter, and more affordable healthcare decisions, not to diagnose them outright.
What it does
Should I See A Doctor? is an educational care-navigation app that combines triage guidance, cost transparency, insurance literacy, and bill review in one workflow.
- Symptom Check: Users type or speak symptoms and receive safety-first triage (MILD, MODERATE, URGENT, SEVERE) with a recommended treatment route.
- Cost-Aware Options: Displays all care options (Self Care, Telehealth, Urgent Care, Emergency Room) with location-aware pricing context.
- Insurance Guidance: Generates profile-based recommendations with actionable next steps and official resource links.
- Bill Analyzer: Users upload receipt images or enter charges manually; the app estimates local averages and flags potentially overpriced line items.
- Personalization: Profile data (first name, city/state, insurance context) tailors guidance and dashboard experience.
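As an illustration of the safety-first triage flow, the four levels can map to default care routes. This is a hypothetical sketch (the type and function names are ours, not the app's actual code):

```typescript
// Hypothetical sketch: mapping triage levels to default care routes.
type TriageLevel = "MILD" | "MODERATE" | "URGENT" | "SEVERE";
type CareRoute = "Self Care" | "Telehealth" | "Urgent Care" | "Emergency Room";

const DEFAULT_ROUTE: Record<TriageLevel, CareRoute> = {
  MILD: "Self Care",
  MODERATE: "Telehealth",
  URGENT: "Urgent Care",
  SEVERE: "Emergency Room",
};

// Return the recommended starting point for a given triage level; the real
// app layers cost and location context on top of a mapping like this.
function routeFor(level: TriageLevel): CareRoute {
  return DEFAULT_ROUTE[level];
}
```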
How we built it
We built the app as a full-stack web product:
- Frontend: Next.js App Router + TypeScript
- UI/UX: Tailwind CSS + reusable components + Framer Motion
- Authentication/Data: Firebase Auth + Firestore (with local fallback mode)
- AI Integration: OpenRouter API routes for:
  - symptom triage + treatment price context
  - receipt image parsing
  - bill average-price analysis
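The bill average-price analysis boils down to comparing each charge against an estimated local average. A minimal sketch, with an illustrative 25% tolerance band (the interface and threshold are assumptions, not the app's actual values):

```typescript
// Hypothetical sketch of the overpriced-line-item check.
interface LineItem {
  description: string;
  charged: number;      // amount billed, in dollars
  localAverage: number; // estimated local average, in dollars
}

// Flag any item charged more than `tolerance` above its local average.
function flagOverpriced(items: LineItem[], tolerance = 0.25): LineItem[] {
  return items.filter((it) => it.charged > it.localAverage * (1 + tolerance));
}
```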
We also built robust JSON parsing/normalization and deterministic fallbacks so the app remains usable even if AI responses fail.
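The parsing/fallback idea can be sketched like this: strip markdown fences the model often adds, parse, validate the fields, and otherwise return a conservative deterministic default. The shape and field names here are illustrative, not the app's exact schema:

```typescript
// Hypothetical sketch: tolerant parsing of an LLM triage reply with a
// deterministic fallback, so the UI never depends on a well-formed response.
interface TriageResult {
  level: "MILD" | "MODERATE" | "URGENT" | "SEVERE";
  advice: string;
}

// Conservative default used whenever the model reply cannot be trusted.
const FALLBACK: TriageResult = {
  level: "URGENT",
  advice: "We could not assess your symptoms. If in doubt, seek care promptly.",
};

const LEVELS = new Set(["MILD", "MODERATE", "URGENT", "SEVERE"]);

function parseTriage(raw: string): TriageResult {
  // Models often wrap JSON in markdown code fences; strip them first.
  const cleaned = raw.replace(/`{3}(?:json)?/g, "").trim();
  try {
    const data = JSON.parse(cleaned);
    const level = String(data.level ?? "").toUpperCase();
    if (!LEVELS.has(level) || typeof data.advice !== "string") return FALLBACK;
    return { level: level as TriageResult["level"], advice: data.advice };
  } catch {
    return FALLBACK;
  }
}
```

Failing closed to the safer URGENT default (rather than MILD) keeps a parsing bug from understating risk.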
Challenges we ran into
Deciding how complex the triage and bill-analysis flows should be for an MVP was one of the hardest parts. We originally planned to use specialized healthcare datasets, but many useful APIs are access-protected or difficult to integrate quickly, so we used free-to-access LLMs to validate product viability at hackathon speed.
Other challenges:
- Handling model/provider reliability and rate limits
- Enforcing structured LLM outputs for product-safe flows
- Keeping guidance useful while clearly non-diagnostic
- Designing resilient UX when AI calls fail
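Handling provider rate limits and designing for failed AI calls both reduce to the same wrapper: retry with backoff, then fall back deterministically. A minimal sketch, assuming a generic `call` thunk (the helper name and defaults are ours):

```typescript
// Hypothetical sketch: retry a flaky model call with exponential backoff,
// then return a deterministic fallback once attempts are exhausted.
async function withRetries<T>(
  call: () => Promise<T>,
  fallback: T,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await call();
    } catch {
      // Exponential backoff: 500ms, 1s, 2s, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  return fallback;
}
```

Because every AI route returns *something* valid, the UX downstream never has to special-case a dead provider.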
Accomplishments that we're proud of
- Built and shipped an end-to-end, deployed product under hackathon constraints
- Combined four real pain points into one flow: triage, cost awareness, insurance guidance, and bill transparency
- Added fallback behavior so the app still works when AI is unavailable
- Incorporated location-aware context for more practical recommendations
- Delivered a polished, interactive UX with voice input and actionable next steps
What we learned
- Reliability and fallback design are just as important as model quality
- Structured outputs are essential for safe, dependable AI product behavior
- User context (city/state/profile) significantly improves recommendation usefulness
- Health-adjacent tools need clear safety framing and trust-forward UX
- Narrow, resilient MVPs beat broad but fragile prototypes
What's next for Should I See A Doctor?
- Integrate richer datasets for stronger recommendation and pricing accuracy
- Improve insurance specificity (eligibility nuance, subsidy guidance, plan comparisons)
- Expand receipt parsing with stronger OCR/PDF pipelines
- Reduce long-term dependence on large LLMs using targeted NLP/classification + curated data
- Potentially add a lightweight assistant (small model + RAG) for explanation and navigation support
- Add stronger evaluation with safety review and measurable impact metrics
Built With
- firebase
- firestore
- next.js
- openrouter
- tailwind
- typescript