Inspiration
We started from a personal place — watching our grandparents age alone, surrounded by technology that was never designed for them. One moment stayed with us: a grandmother saying “I just want to talk to someone” to a voice assistant that replied with a weather forecast.
Older adults don’t need smarter gadgets. They need technology that listens, understands context, and recognizes the difference between “I feel a little sad” and “I fell and I can’t get up.” We built GrandCompanion because our elders deserve an AI that treats them like people, not prompts.
What It Does
GrandCompanion is an AI companion designed specifically for elderly users. It provides empathetic conversation, context-aware assistance, and safety monitoring, with a strong emphasis on accessibility, trust, and continuity.
When a senior types “I miss my son,” GrandCompanion doesn’t respond with generic comfort. It understands the emotional and social intent, surfaces a simple one-tap option to contact their son, and remembers that relationship for future conversations.
When someone says “I fell down and my chest hurts,” the system immediately recognizes a high-risk situation, shifts into safety mode, and can guide crisis intervention or initiate contact with emergency services or trusted caregivers.
Key Capabilities
- Context-aware conversation that remembers topics, emotional state, relationships, and preferences across turns
- Three-tier safety classification (SAFE / MEDIUM / HIGH) backed by eight specialized safety tools
- Dynamic, generative UI widgets that appear based on conversational needs (contact shortcuts, calming guidance, reminders)
How We Built It
GrandCompanion is implemented as a multi-agent system coordinated by an orchestrator. Each agent has a single responsibility, and no agent acts without full context.
At the core is a three-layer context system:
- Conversation Context – recent dialogue turns with speaker attribution
- Active Topic Context – what is currently being discussed, including unresolved or high-priority topics
- User Context – preferences, relationships, emotional baseline, and safety-relevant information
A dedicated Memory Agent maintains this context as the single source of truth. Every other agent (conversation, safety, UI, orchestration) receives a complete context snapshot before acting.
Safety classification is enforced globally: every user input is evaluated for risk before any conversational or UI response is generated.
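The global gate can be sketched as a classification step that runs before any other agent responds. The keyword heuristic below is a toy stand-in for the real LLM classifier; only the tier names come from the writeup:

```python
from enum import Enum

class RiskTier(Enum):
    SAFE = "SAFE"
    MEDIUM = "MEDIUM"
    HIGH = "HIGH"

# Toy stand-in for the LLM-based classifier; the real system derives tiers
# from structured model output, not keyword matching.
def classify_risk(text: str) -> RiskTier:
    t = text.lower()
    if any(k in t for k in ("can't breathe", "chest hurts", "i fell")):
        return RiskTier.HIGH
    if any(k in t for k in ("dizzy", "sad", "lonely")):
        return RiskTier.MEDIUM
    return RiskTier.SAFE

def handle_input(text: str) -> str:
    """Orchestrator entry point: safety is evaluated before conversation or UI."""
    tier = classify_risk(text)
    if tier is RiskTier.HIGH:
        return "safety_mode"   # escalate: guide crisis response, contact caregivers
    if tier is RiskTier.MEDIUM:
        return "check_in"      # empathetic follow-up before normal flow
    return "converse"          # normal conversational flow
```

Because `handle_input` is the only entry point, no conversational or UI agent can run on unclassified input.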
Challenges We Ran Into
Multi-Agent Integration Complexity
Connecting agents using the A2A protocol and Google’s Agent Development Kit was one of our biggest challenges. Setting up JSON-RPC communication, routing, and discovery across multiple agents required deep debugging and iteration. This backend complexity consumed time we had originally planned for UI polish.
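For context, A2A messages ride on standard JSON-RPC 2.0 envelopes; the envelope shape below follows the JSON-RPC spec, but the method name and params are illustrative, not the exact A2A schema:

```python
import itertools
import json

_ids = itertools.count(1)

def jsonrpc_request(method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request envelope (the transport A2A rides on)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,   # A2A defines its own method names; this one is made up
        "params": params,
    })

# Hypothetical message from the orchestrator to the safety agent.
req = jsonrpc_request("agent/route", {"target": "safety", "text": "I feel dizzy"})
```

Most of our debugging time went into exactly this layer: making sure every agent agreed on routing and message shape.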
Context Fragmentation
Early versions suffered from agents operating in isolation. For example, a user could mention a fall and later say “I’m feeling dizzy,” and the system would fail to connect the two. Building a centralized, layered context system solved this, but required several architectural rewrites.
LLM Hallucinated Help Instead of Acting
Initially, the assistant responded with comforting language when users asked for concrete actions (e.g., “I need to call my son”). We fixed this by introducing explicit intent detection, confidence scoring, and strict rules separating emotional support from task execution.
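The fix can be sketched as intent detection with a confidence score, where a task intent only executes above a threshold and everything else falls back to emotional support. The phrase table, names, and threshold here are illustrative, not our production detector:

```python
# Toy phrase table standing in for the LLM-based intent detector.
ACTION_PHRASES = {
    "call my son": ("contact_family", 0.9),
    "remind me": ("set_reminder", 0.85),
}

def detect_intent(text: str) -> tuple[str, float]:
    """Return (intent, confidence)."""
    t = text.lower()
    for phrase, (intent, conf) in ACTION_PHRASES.items():
        if phrase in t:
            return intent, conf
    return "emotional_support", 0.5

def respond(text: str, threshold: float = 0.8) -> str:
    intent, conf = detect_intent(text)
    # Strict rule: execute a task only when we are confident it was requested;
    # never substitute comforting words for a concrete action.
    if intent != "emotional_support" and conf >= threshold:
        return f"execute:{intent}"
    return "comfort"
```

The key separation: "I need to call my son" routes to task execution, while "I miss my son" stays in the emotional-support path.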
Safety Sensitivity Tuning
Balancing safety without over-triggering was difficult. Statements like “I’m tired” must remain SAFE, while “I can’t breathe” must immediately escalate. We solved this with structured risk classification outputs and carefully constrained prompts.
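One way to picture the constrained-prompt approach: ask the model for JSON only, validate it strictly, and fail closed. The prompt wording, JSON shape, and the fail-closed-to-HIGH default are our illustrative assumptions, not the project's actual prompt:

```python
import json

# Illustrative constrained prompt; example phrases come from the writeup.
RISK_PROMPT = """\
Classify the user's message into exactly one risk tier.
Respond with JSON only: {{"tier": "SAFE" | "MEDIUM" | "HIGH"}}

Examples:
"I'm tired" -> {{"tier": "SAFE"}}
"I feel a little sad" -> {{"tier": "MEDIUM"}}
"I can't breathe" -> {{"tier": "HIGH"}}

Message: "{message}"
"""

def parse_tier(raw: str) -> str:
    """Validate the model's structured output, failing closed to HIGH."""
    try:
        tier = json.loads(raw)["tier"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return "HIGH"  # on malformed output, err on the side of safety
    return tier if tier in {"SAFE", "MEDIUM", "HIGH"} else "HIGH"

prompt = RISK_PROMPT.format(message="I can't breathe")
```

Structured output plus strict validation is what kept borderline phrases like "I'm tired" from over-triggering while guaranteeing genuine emergencies escalate.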
Elder-Friendly UX Is Hard
Meeting WCAG AAA accessibility standards while keeping the interface warm and non-clinical required many design iterations. Small decisions — font size, contrast, spacing — had outsized impact.
Accomplishments We’re Proud Of
- A working end-to-end prototype built on unfamiliar technology under hackathon constraints
- Agent-driven UI control, where the AI dynamically shapes the interface instead of just chatting
- A reliable three-layer context system that preserves continuity across turns and topics
- Robust safety detection with structured classification, escalation logic, and auditability
- A privacy-first approach, proving meaningful AI companionship doesn’t require constant cloud data transfer
Most importantly, we learned a great deal about multi-agent systems, orchestration, safety-first design, and accessibility-driven product thinking.
What We Learned
- Context is everything: without a single source of truth, agents behave inconsistently
- Safety-first routing reshapes architecture, not just features
- Accessibility is the product, not a checklist
- Local LLMs are viable for companion-style applications, offering strong privacy and low latency
What’s Next for GrandCompanion
- Model Context Protocol (MCP) integration for calendars, health data, and smart home context
- Full voice interface with speech-to-text and text-to-speech
- Persistent memory with vector search for long-term emotional and behavioral patterns
- Caregiver dashboard for safety reports and emotional trend visibility
- Multi-language support, starting with Spanish and Portuguese
- Production-grade communication (e.g., Twilio, SendGrid) for real emergency workflows
- Advanced LLM-based intent and topic detection
- Native mobile app with offline support and deep accessibility integration
GrandCompanion is not about replacing human relationships. It’s about making sure no one feels invisible while waiting for the next human moment.
Built With
- a2a
- a2ui
- autoprefixer
- css
- google-agent-development-kit-adk
- postcss
- python
- react
- starlette
- tailwind
- typescript
- uvicorn