Inspiration:

By 2050, the global population of adults over 65 will double. And the impact of that becomes obvious the moment technology enters the picture.

Think about the last time you helped a parent or grandparent do something online. Maybe it was refilling a prescription. Maybe it was booking a ride to a doctor's appointment.

Now imagine you're not in the room. You're in another city. Your window into how they're actually doing is a ten-minute phone call where the answer is always, "I'm fine."

Modern software was not designed for aging. And the assistants that exist today - the timers, the song players, the weather readers - were not designed for the real complexity of independent living. They can answer a question. They cannot complete a task.

Furthermore, cognitive decline often begins years before a family notices anything wrong. By the time patterns become obvious, the window for early follow-up has already closed.

We built Sage to close that gap for the older adult who deserves technology that works with them, and for the caregiver who deserves something more concrete than intuition.


What it does:

Sage is an easy-to-use AI support agent with two interfaces: one for the older adult who needs help, and one for the caregiver who needs clarity.

1. Voice-First Task Automation

There's no app to open, no menu to navigate, no flow to memorize. The user speaks - naturally, messily, incompletely - and Sage handles the rest.

"I need one of those weekly pill organizers, and I forgot which one I got last time."

Behind the scenes, a supervisor agent built on LangGraph interprets the intent, routes to the appropriate sub-agent, and hands off to Stagehand for live browser execution. Sage navigates to Amazon, retrieves the right item, and confirms the outcome in a calm, human voice powered by a custom ElevenLabs voice clone. The user never touched a single menu.
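The routing step above can be sketched as a minimal supervisor loop. This is an illustrative Python sketch of the pattern only, not our LangGraph code: the sub-agent names and keyword rules here are hypothetical stand-ins for the LLM-based intent routing the real system uses.

```python
# Minimal sketch of a supervisor routing an utterance to a sub-agent.
# Hypothetical keyword rules stand in for LLM intent classification.

def order_agent(utterance: str) -> str:
    # Would hand off to Stagehand for a live Amazon flow.
    return f"ordering: {utterance!r}"

def scheduling_agent(utterance: str) -> str:
    # Would book rides or appointments via browser automation.
    return f"scheduling: {utterance!r}"

def fallback_agent(utterance: str) -> str:
    # Unrecognized intent: ask a gentle clarifying question instead.
    return f"clarifying: {utterance!r}"

ROUTES = {
    "order":    ("order", "buy", "refill", "organizer"),
    "schedule": ("ride", "appointment", "doctor", "thursday"),
}
AGENTS = {"order": order_agent, "schedule": scheduling_agent}

def supervise(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return AGENTS[intent](utterance)
    return fallback_agent(utterance)
```

The point of the pattern is that the user's messy sentence never has to match a menu: the supervisor decides, and a specialist agent executes.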

2. Patient Context and Memory

Sage is not stateless. It knows who the patient's doctor is, where they live, which pharmacy they use, and what they usually order. This context lives in Snowflake and is accessible across every session, so the assistant never starts from zero. When a user says "I need to get to Dr. Mehta on Thursday," Sage already knows who Dr. Mehta is.
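The context lookup works roughly like this. In the real system the record lives in Snowflake; here a plain dict stands in, and the field names and doctor details are hypothetical examples.

```python
# Sketch of the patient-context lookup. A dict stands in for the
# Snowflake-backed record; field names and values are hypothetical.

PATIENT_CONTEXT = {
    "doctors": {"dr. mehta": "Cardiology Associates (example entry)"},
    "pharmacy": "Main Street Pharmacy",
    "usual_orders": ["weekly pill organizer"],
}

def resolve_entities(utterance: str) -> dict:
    """Attach known context so downstream agents never start from zero."""
    found = {}
    text = utterance.lower()
    for name, details in PATIENT_CONTEXT["doctors"].items():
        if name in text:
            found["doctor"] = {"name": name, "details": details}
    return found
```

So when the user says "I need to get to Dr. Mehta on Thursday," the scheduling agent already receives who Dr. Mehta is along with the request.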

3. Consent-Based Cognitive Signal Analysis

After each session, the full transcript is analyzed for measurable speech and language markers: repetition, hesitation, lexical diversity, word-finding patterns, circumlocution, topic drift, and self-correction. This is not a diagnosis. It is a structured signal - a session score, a severity label, and evidence phrases pulled directly from the conversation - that helps caregivers notice patterns before they become undeniable.

The scoring starts from deterministic transcript analysis. OpenAI then turns those metrics into a readable, human-language summary. We are not asking a model to invent a medical conclusion. We are making real patterns visible.
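A few of those deterministic markers can be sketched directly. The real module measures more markers with tuned thresholds; the hesitation word list and the specific metrics below are simplified examples.

```python
# Sketch of deterministic transcript scoring: measurable markers
# computed before any model sees the data. Word lists and metric
# choices here are simplified examples, not the production set.

import re

HESITATIONS = {"um", "uh", "er", "hmm"}

def transcript_metrics(transcript: str) -> dict:
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return {"lexical_diversity": 0.0, "hesitation_rate": 0.0,
                "repeated_bigrams": 0}
    # Lexical diversity: type-token ratio (unique words / total words).
    diversity = len(set(tokens)) / len(tokens)
    # Hesitation rate: filler words per token.
    hesitation = sum(t in HESITATIONS for t in tokens) / len(tokens)
    # Repetition: a two-word phrase immediately said again.
    bigrams = list(zip(tokens, tokens[1:]))
    repeats = sum(1 for a, b in zip(bigrams, bigrams[2:]) if a == b)
    return {"lexical_diversity": round(diversity, 3),
            "hesitation_rate": round(hesitation, 3),
            "repeated_bigrams": repeats}
```

These numbers, not a model's opinion, are what the session score is built from; the model only narrates them.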

4. Caregiver Intelligence Dashboard

The caregiver experience is a completely separate interface: different emotional register, different information density. It shows the latest session score, short and long-range trend windows, annotated transcripts with highlighted evidence phrases, and session history with summaries.

5. Ask Sage: Natural Language Queries Powered by Snowflake Cortex Analyst

Instead of digging through rows of session data, caregivers can ask plain English questions:

  • "Has her word-finding gotten worse this month?"
  • "What changed in today's session compared to the last three?"
  • "How many sessions this week looked more concerning than her baseline?"

Snowflake Cortex Analyst translates those questions into data queries and returns answers that feel like a conversation, not a BI report.
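The contract looks roughly like the sketch below: a question in, a prose answer out, computed over session rows. Cortex Analyst performs the question-to-SQL translation in the real system; this stand-in hard-codes one question shape and a hypothetical "score" field purely to illustrate the flow.

```python
# Illustrative stand-in for the Ask Sage flow. Cortex Analyst does
# the real question-to-query translation; the matching rule and the
# "score" field here are hypothetical.

def ask_sage(question: str, sessions: list[dict]) -> str:
    latest = sessions[-1]
    previous = sessions[:-1][-3:]  # the three sessions before today
    if "compared to the last three" in question.lower():
        baseline = sum(s["score"] for s in previous) / len(previous)
        delta = latest["score"] - baseline
        return (f"Today's score ({latest['score']}) is {delta:+.1f} "
                f"versus the previous-three average ({baseline:.1f}).")
    return "I can answer questions about recent session scores."
```

The caregiver never sees SQL or a chart legend, just a sentence that answers what they asked.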


How we built it:

Sage has two main surfaces backed by a shared data layer.

1. Older Adult Experience: Electron Desktop Overlay

The patient-facing product is an Electron overlay with a warm, ambient orb interface. It is designed to feel like a calm presence, not a piece of software.

  • Voice pipeline: OpenAI gpt-4o-mini-transcribe for speech-to-text; ElevenLabs Flash v2.5 with a custom Sage voice clone for text-to-speech at ~75ms latency
  • Agent orchestration: LangGraph supervisor that routes multi-step, stateful tasks across specialized sub-agents
  • Browser automation: Stagehand (Browserbase) for live web execution — Amazon order flows, scheduling, pharmacy lookups
  • Routing and recovery: gpt-4.1-mini for fast intent routing; gpt-4.1 where browser reliability required a stronger model
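One conversational turn through that pipeline can be sketched as a simple composition. The stub functions below are hypothetical placeholders for the streaming OpenAI transcription, LangGraph supervisor, and ElevenLabs synthesis steps.

```python
# Sketch of one voice turn: speech in, spoken confirmation out.
# Each stub is a hypothetical placeholder for a production service.

def transcribe(audio: bytes) -> str:
    # gpt-4o-mini-transcribe in production; canned text here.
    return "i need a ride on thursday"

def run_agents(text: str) -> str:
    # LangGraph supervisor + sub-agents in production.
    return f"Okay, I'll set that up: {text}"

def speak(text: str) -> bytes:
    # ElevenLabs Flash v2.5 with the Sage voice clone in production.
    return text.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One user turn: transcribe, act, then confirm out loud."""
    return speak(run_agents(transcribe(audio)))
```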

2. Caregiver Dashboard: Next.js Web App

A separate frontend, built in Next.js, with a deliberately different visual tone from the patient experience.

  • Cognitive analysis: A deterministic transcript-scoring module that measures markers directly, before any model touches the data
  • Caregiver summaries: gpt-5-mini turns structured metrics into readable session summaries and follow-up prompts
  • Ask Sage: A plain-English query interface wired to Snowflake Cortex Analyst

3. Snowflake as the Data and Intelligence Layer

Every session, transcript, cognitive score, evidence phrase, and patient context record lives in Snowflake. Snowpark handles trend aggregation and derived metrics. Cortex Analyst sits on top of that structure and makes it conversational. Snowflake is not a storage decision. It is the product's intelligence backbone.
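The trend aggregation Snowpark performs can be sketched in plain Python. This is the idea only: the production version runs over Snowflake tables, and the window sizes here are illustrative defaults.

```python
# Sketch of short- vs long-range trend windows over session scores.
# Snowpark computes these over Snowflake tables in production; the
# window sizes are illustrative defaults.

def trend_windows(scores: list[float], short: int = 3,
                  long: int = 10) -> dict:
    """Rolling averages over recent sessions, plus their difference."""
    def window_mean(n: int) -> float:
        tail = scores[-n:]
        return sum(tail) / len(tail)
    short_avg = window_mean(short)
    long_avg = window_mean(long)
    return {"short_avg": round(short_avg, 2),
            "long_avg": round(long_avg, 2),
            # Positive drift: recent sessions score above baseline.
            "drift": round(short_avg - long_avg, 2)}
```

The dashboard's "short and long-range trend windows" are exactly this comparison, surfaced visually.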


Challenges we ran into

Getting the voice right. One of the hardest problems was making Sage respond the way a calm, trustworthy helper actually sounds. When an older adult repeats themselves or sounds uncertain, the wrong response is to push the task forward. We spent significant time tuning the empathy layer so Sage stabilizes the interaction first.

Framing cognitive analysis honestly. The line between "this might be worth paying attention to" and "this is a diagnosis" is easy to cross accidentally. We rewrote that framing several times. Every caregiver-facing surface has a clear signal that this is pattern visibility, not medical conclusion.

Browser automation live on stage is genuinely hard. Websites change without warning, networks hiccup, popups appear out of nowhere, and flows that worked perfectly in testing can behave completely differently in a new environment. Getting Stagehand to navigate reliably across real websites required a lot of careful engineering, but we made it work.

Wiring Snowflake into a fast-moving hackathon build. Getting Cortex Analyst to return clean, readable answers, not raw SQL output, required careful schema design and significant prompt iteration on the analyst setup. It took longer than expected and was worth it.


Accomplishments that we're proud of:

Two interfaces, one coherent product. The older adult experience and the caregiver experience feel nothing alike - deliberately. Building two emotionally distinct surfaces that share a unified data layer, under hackathon pressure, was harder than either surface alone.

A cognitive analysis pipeline that starts from evidence, not vibes. The deterministic scoring module runs first. The model layer summarizes what the data already found. That distinction matters: we are not hallucinating a health signal. We are surfacing one that was already there.

Ask Sage actually works. Getting Snowflake Cortex Analyst to return answers that feel like a helpful response rather than a query dump was a real technical win.

The voice is actually warm. We put real thought into the ElevenLabs voice clone. The custom Sage voice is calm, unhurried, and human. For this user, that is not an aesthetic decision. It is a trust decision.


What we learned:

Spending real time on the problem before touching the code is worth every hour. We read research on cognitive decline, caregiver strain, and HCI for older adults before we wrote a single function. That time shaped every product decision we made.

Building for emotional context is a constraint just like a technical constraint. The patient experience had to feel invisible. The caregiver experience had to feel honest without feeling clinical. Holding both of those in mind at once is its own kind of design challenge.


What's next for Sage:

Expanding the browser automation to cover more meaningful care-adjacent tasks - pharmacy refills, healthcare portal navigation, insurance lookups - with the same reliability bar we set for the demo.

Longitudinal baseline tracking, so the cognitive signal layer compares a person to their own history, not a generic reference point.

Piloting with real families in our communities, not as a product launch, but as the feedback loop that makes the system honest.
