CareLink — Bridging the Gap Between Patients and the NHS

Inspiration

The UK's NHS is stretched thin. A GP appointment can take two weeks to secure, and when it finally arrives, the doctor has minutes to understand a problem the patient has been living with for months. Meanwhile, patients discharged after a procedure are handed a leaflet and sent home, often anxious and uncertain about what "normal" recovery looks like.

This gap felt personal. Watching a family member struggle to articulate a week of fragmented symptoms in a three-minute consultation — and separately, a friend discharged post-surgery with no idea whether their pain on day four was expected — made the problem viscerally real. There had to be a way to help patients arrive more prepared and to feel supported after they leave.

CareLink is that attempt.


What We Built

CareLink is a three-tool healthcare companion living entirely in the browser:

1. Smart GP Scheduler — A heatmap visualisation of clinic demand by time of day and day of week, so patients can make informed choices about when to come in. Reducing peak crowding is a public health problem that can be nudged with better information.

2. Symptom Journal — Patients log symptoms over time. At appointment time, the journal compiles everything into a structured clinical brief — with duration, frequency, severity, and associated factors — ready to hand to the doctor. No more "well, it started a few weeks ago, maybe a month..."

3. Post-Discharge Bot — A conversational AI assistant trained on recovery protocols. Available 24/7, it answers questions like "is it normal to still have a fever on day 3?" without burdening the GP line. Powered by Groq for low-latency inference.
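To make the Symptom Journal's "structured clinical brief" concrete, here is a minimal sketch of how a raw log could be collapsed into one scannable line per symptom. The entry and brief shapes are illustrative assumptions, not CareLink's exact schema:

```typescript
interface LogEntry {
  symptom: string;
  severity: number; // 1–10 self-rating
  date: string;     // ISO date of the occurrence
}

interface BriefLine {
  symptom: string;
  occurrences: number;
  firstSeen: string;
  lastSeen: string;
  meanSeverity: number;
}

// Collapse a raw log into one summary line per symptom,
// capturing duration (first/last seen), frequency, and severity.
function compileBrief(log: LogEntry[]): BriefLine[] {
  const bySymptom = new Map<string, LogEntry[]>();
  for (const entry of log) {
    const group = bySymptom.get(entry.symptom) ?? [];
    group.push(entry);
    bySymptom.set(entry.symptom, group);
  }
  return [...bySymptom.entries()].map(([symptom, entries]) => {
    const dates = entries.map((e) => e.date).sort();
    return {
      symptom,
      occurrences: entries.length,
      firstSeen: dates[0],
      lastSeen: dates[dates.length - 1],
      meanSeverity:
        entries.reduce((sum, e) => sum + e.severity, 0) / entries.length,
    };
  });
}
```

The real pipeline also groups by body system and flags outliers (see Challenges below); this shows only the core aggregation step.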


How We Built It

The stack was chosen deliberately for speed and zero operational overhead during the hackathon:

  • Next.js 16 with React 19 gave us server components and API routes in a single codebase. The discharge bot's LLM calls live in /api routes — keeping the Groq API key off the client.
  • Groq as the inference backend. Its speed ($\approx 500$ tokens/second on Llama 3) made the chat feel instant, which matters enormously for anxious patients at 2am.
  • Tailwind CSS for styling — utility classes let us iterate on the UI quickly without building a bespoke design system.
  • localStorage for persistence — deliberately avoiding a database. For an MVP, the patient's own device is the right place for their health data. No auth, no backend, no GDPR surface area.
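In that spirit, a minimal typed persistence helper might look like the following. Names are illustrative, and the in-memory fallback is an assumption added so the same call sites work where `localStorage` is unavailable (e.g. during server rendering):

```typescript
// Typed wrapper over localStorage with an in-memory fallback.
// On the client this persists; on the server it degrades gracefully.
const memory = new Map<string, string>();

function save<T>(key: string, value: T): void {
  const json = JSON.stringify(value);
  if (typeof localStorage !== "undefined") localStorage.setItem(key, json);
  else memory.set(key, json);
}

function load<T>(key: string, fallback: T): T {
  const json =
    typeof localStorage !== "undefined"
      ? localStorage.getItem(key)
      : memory.get(key) ?? null;
  return json == null ? fallback : (JSON.parse(json) as T);
}
```

Keeping every read behind a `fallback` default also means a first-time visitor with an empty store never hits an undefined state.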

The data flow through the application can be expressed as:

$$\text{User action} \rightarrow \text{React state} \xrightarrow{\text{API route}} \text{Groq LLM} \rightarrow \text{structured response} \rightarrow \texttt{localStorage}$$


Challenges

Structuring LLM output for clinical use. Getting the discharge bot to produce responses that are helpful without overstepping into diagnosis required careful prompt engineering. We settled on a framing of "recovery information guide" rather than "medical advisor", with explicit instructions to direct anything ambiguous to a clinician. The system prompt went through about a dozen iterations.
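The production prompt isn't reproduced here, but a hedged sketch of the "recovery information guide" framing, wrapped into a Groq/OpenAI-style messages array, might look like:

```typescript
// Illustrative framing only — not the actual production prompt.
const DISCHARGE_BOT_SYSTEM_PROMPT = [
  "You are a recovery information guide, not a medical advisor.",
  "Answer questions about what is typical during post-procedure recovery.",
  "Never diagnose. If a question is ambiguous, urgent, or outside",
  "documented recovery protocols, direct the patient to contact a clinician.",
].join(" ");

// Prepend the guardrail to every request so the model
// never sees a patient question without the safety framing.
function buildMessages(userQuestion: string) {
  return [
    { role: "system", content: DISCHARGE_BOT_SYSTEM_PROMPT },
    { role: "user", content: userQuestion },
  ];
}
```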

Making the symptom brief actually useful. A raw log of entries is not a clinical brief. We had to design a summarisation pipeline that grouped symptoms by body system, calculated recurrence rates, and flagged outliers — all client-side. The recurrence scoring formula:

$$\text{recurrence score} = \frac{\text{occurrences}}{\text{days tracked}} \times \overline{\text{severity}}$$

where $\overline{\text{severity}}$ is the mean severity rating across all logged entries for that symptom. Getting the presentation right — something a GP could scan in 20 seconds — took more iterations than the algorithm itself.
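The formula translates directly to code. A sketch in TypeScript (the entry shape is an assumption; CareLink's actual field names may differ):

```typescript
interface SymptomEntry {
  symptom: string;
  severity: number; // 1–10 self-rating
  loggedAt: string; // ISO date, one entry per occurrence
}

// recurrence score = (occurrences / days tracked) × mean severity
function recurrenceScore(entries: SymptomEntry[], daysTracked: number): number {
  if (entries.length === 0 || daysTracked <= 0) return 0;
  const meanSeverity =
    entries.reduce((sum, e) => sum + e.severity, 0) / entries.length;
  return (entries.length / daysTracked) * meanSeverity;
}
```

A symptom logged three times over six days at severities 4, 6, and 8 scores (3/6) × 6 = 3; frequent, severe symptoms rise to the top of the brief.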

Ensuring the heatmap reflects real demand patterns. The scheduler heatmap uses a weighted smoothing pass over historical slot data so that a single unusually busy Tuesday doesn't permanently skew the model. The smoothed demand $\hat{d}_{t}$ at time slot $t$ is:

$$\hat{d}_{t} = \alpha \cdot d_{t} + (1 - \alpha) \cdot \hat{d}_{t-1}, \quad \alpha \in (0, 1)$$

We defaulted to $\alpha = 0.3$, which weights recent data meaningfully without overreacting to single-day spikes.
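The smoothing pass is a standard exponentially weighted moving average. A sketch over a chronological series of daily demand counts (function and parameter names are illustrative):

```typescript
// Exponentially weighted smoothing over historical slot demand.
// alpha = 0.3: recent days carry weight, but a single busy
// Tuesday cannot permanently skew the estimate.
function smoothDemand(daily: number[], alpha = 0.3): number {
  if (daily.length === 0) return 0;
  let smoothed = daily[0]; // seed with the earliest observation
  for (const d of daily.slice(1)) {
    smoothed = alpha * d + (1 - alpha) * smoothed;
  }
  return smoothed;
}
```

Running this once per (day-of-week, time-of-day) slot yields the cell values the heatmap renders.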

Glassmorphism in low-light conditions. The backdrop-filter: blur() approach looks beautiful on modern hardware, but degrades on older GPUs. We added a solid fallback and tested across devices — a reminder that healthcare tools are often used on older, cheaper hardware.


What We Learned

  • Constraint breeds clarity. No database forced us to think hard about what data actually needed to persist, and what could be ephemeral.
  • LLM latency is a UX problem first. Groq's speed wasn't a vanity metric — a 3-second response in a medical context feels like abandonment. Sub-second feels like support.
  • The hardest part of a healthcare product is tone. Every piece of copy — button labels, empty states, error messages — carries weight when the user is worried about their health.
  • localStorage is underrated for MVPs. Removing the auth and database layer cut our build time roughly in half and eliminated an entire class of security considerations. The trade-off (no cross-device sync) is acceptable at this stage.

What's Next

CareLink is an MVP, but the core thesis is sound. The natural next steps are:

  • Clinician-side dashboards to receive and triage incoming symptom briefs before the appointment.
  • NHS login integration for authenticated, portable health data.
  • Procedure-specific knowledge bases for the discharge bot — so a patient recovering from a hip replacement gets different guidance than one recovering from an appendectomy.
  • Live heatmap data connected to real appointment system APIs, replacing the smoothed historical model with actual real-time slot availability.

The codebase is deliberately architected so each of the three tools can evolve independently — or be embedded into existing patient portals as standalone widgets. The MVP proves the concept; the architecture is ready for what comes next.

Built With

  • browser
  • localstorage
  • nextjs
  • reactjs
  • tailwindcss
  • vercel