Inspiration
Caregiving is often reduced to a checklist: medication, meals, and hygiene. But in the daily grind of tasks, the human connection often breaks down.
- Caregivers are burnt out, struggling to manage their own emotions while facing resistance from loved ones.
- Care Recipients (our seniors) feel a loss of autonomy and dignity, often becoming "patients" rather than family members.
We realized that while there are many apps for managing tasks, there are none for managing relationships. Inspired by the Tsao Foundation’s mission that "Longevity is Opportunity," we asked ourselves: How can technology restore dignity, joy, and mutual respect to the caregiving journey?
We built Help4Good to be the "Emotional Intelligence Engine" for families—a safe harbor where caregivers can find the right words, and seniors can find a safe space to be heard.
What it does
Help4Good is an AI-powered Relationship Coach and Mental Health Companion designed for the dual needs of the care ecosystem:
- For the Caregiver (The Co-Pilot): It acts as a real-time communication coach. When a caregiver is frustrated (e.g., "He’s refusing to eat!"), the bot helps translate that frustration into empathy, offering de-escalation scripts and evidence-based psychological advice to resolve conflicts without hurting feelings.
- For the Care Recipient (The Safe Space): It provides a judgment-free zone to vent about sensitive topics—like the loss of a pet or feelings of loneliness—that they might hide from their family to avoid being a "burden."
- The "Memory" Factor: Unlike basic chatbots, Help4Good remembers. If a user mentions a job loss or a brother's passing, the bot retains that context across sessions, ensuring users don't have to re-explain their trauma.
- Safety Net: It detects crisis patterns. If a conversation spirals into severe distress, it automatically surfaces local Singapore support resources (SOS, National Care Hotline) as seen in our demo.
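The safety net above can be sketched as a simple phrase screen over each incoming message. The phrase list and helper names below are illustrative stand-ins, not our full production ruleset:

```python
# Minimal sketch of the crisis-detection safety net: scan each message for
# high-risk phrases and, on a match, surface Singapore support resources.
# CRISIS_PHRASES is an illustrative subset, not the production list.
CRISIS_PHRASES = [
    "end it all", "no reason to live", "kill myself",
    "hurt myself", "better off without me",
]

HELPLINES = {
    "Samaritans of Singapore (SOS)": "1767",
    "National Care Hotline": "1800-202-6868",
}

def detect_crisis(message: str) -> bool:
    """Return True if the message contains a high-risk phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

def support_resources() -> list[str]:
    """Format helpline entries for display in the chat UI."""
    return [f"{name}: {number}" for name, number in HELPLINES.items()]

if detect_crisis("Some days I feel there's no reason to live."):
    for line in support_resources():
        print(line)
```

In the real app this check runs before the LLM call, so the resource card appears even if the model's reply is delayed.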
How we built it
We prioritized a "Hybrid AI" architecture that balances the speed of local processing with the intelligence of cloud LLMs, ensuring privacy and responsiveness.
- The Brain (LLM): We leveraged the Google GenAI SDK to access Gemma-3-4b-it. We chose Gemma for its superior instruction-following capabilities and lightweight efficiency, crucial for maintaining a conversational "therapist" tone.
- The Memory (RAG Pipeline): To give the bot long-term memory, we built a custom Retrieval-Augmented Generation (RAG) system:
- We used ChromaDB as a persistent local vector database to store conversation history.
- We used Sentence-Transformers (all-MiniLM-L6-v2) running locally to generate embeddings. This allows the bot to semantically search past conversations and "recall" details (e.g., "How is your brother doing after losing his cat?") instantly.
- The Empathy Engine: We iteratively refined our system prompts using Mental Health datasets from Hugging Face, grounding the AI’s responses in clinical best practices rather than generic advice.
- The Interface: We built the frontend with Streamlit (Python). This allowed us to rapidly iterate on a clean, high-contrast UI accessible for older adults, focusing on usability over complexity.
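The memory pipeline above can be sketched end to end. The real system embeds text with sentence-transformers (all-MiniLM-L6-v2) and stores vectors in a persistent ChromaDB collection; this toy version substitutes bag-of-words vectors and in-memory storage so the remember/recall shape is visible without the heavy dependencies:

```python
# Toy sketch of the RAG memory lookup. Real stack: all-MiniLM-L6-v2
# embeddings + persistent ChromaDB. Here, word-count vectors and a plain
# list stand in for both, purely to illustrate the retrieval flow.
import math
from collections import Counter

memory: list[tuple[str, Counter]] = []  # (text, embedding) pairs

def embed(text: str) -> Counter:
    """Stand-in embedding: lowercase word counts (not a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(text: str) -> None:
    memory.append((text, embed(text)))

def recall(query: str, top_k: int = 1) -> list[str]:
    """Return the top_k stored snippets most similar to the query."""
    q = embed(query)
    ranked = sorted(memory, key=lambda m: cosine(q, m[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

remember("Tom lost his job last month and feels ashamed.")
remember("My brother has been grieving since his cat passed away.")
print(recall("how is your brother coping after losing his cat"))
```

Swapping the toy `embed` for a sentence-transformers model and the list for a ChromaDB collection gives the production behavior: semantic recall across sessions.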
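The Empathy Engine comes together at prompt-assembly time: system instructions, RAG-recalled context, and the new message are combined before the model call. The prompt wording below is an illustrative stand-in for our production prompt, and the model call is guarded so the sketch runs without credentials:

```python
# Sketch of prompt assembly for the "Empathy Engine". The SYSTEM_PROMPT
# text is illustrative; the production prompt was iterated against
# mental-health datasets from Hugging Face.
SYSTEM_PROMPT = (
    "You are a warm, patient companion for caregivers and seniors. "
    "Always validate the user's feelings before offering any advice. "
    "Never diagnose; keep suggestions practical and gentle."
)

def build_prompt(recalled_memories: list[str], user_message: str) -> str:
    """Combine system prompt, RAG-recalled context, and the new message."""
    context = "\n".join(f"- {m}" for m in recalled_memories)
    return (
        f"{SYSTEM_PROMPT}\n\n"
        f"Relevant past details:\n{context or '- (none)'}\n\n"
        f"User says: {user_message}"
    )

prompt = build_prompt(
    ["Tom lost his job last month."],
    "He's refusing to eat again and I'm so frustrated!",
)

RUN_MODEL = False  # flip to True with a real API key
if RUN_MODEL:
    from google import genai  # google-genai SDK, as in our stack
    client = genai.Client(api_key="YOUR_API_KEY")
    reply = client.models.generate_content(
        model="gemma-3-4b-it", contents=prompt
    )
    print(reply.text)
```

Keeping the "validate feelings first" instruction at the top of every prompt is what steers Gemma away from the cold, generic advice we saw in early versions.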
Challenges we ran into
- The "Robotic Therapist" Problem: Early versions of the bot gave generic, cold advice. We had to iterate heavily on the System Prompts and leverage the Hugging Face datasets to teach the model how to "validate feelings" before offering solutions—a key technique in counseling.
- Context Confusion: Implementing the RAG memory was tricky. Initially, the bot would retrieve irrelevant memories. We had to refine the similarity threshold in ChromaDB to ensure it only brought up past details when they were truly relevant to the current emotion.
- Balancing Safety vs. Privacy: We wanted to keep data local for privacy, but needed powerful inference. Finding the right balance between local embedding generation and cloud-based LLM inference took significant testing.
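The fix for context confusion was a distance gate on retrieval results. ChromaDB's `query()` returns parallel lists of documents and distances (smaller distance = more similar); we keep only matches under a cutoff. The gating logic is sketched below, with an illustrative cutoff rather than our tuned value:

```python
# Sketch of the relevance gate added after the bot surfaced unrelated
# memories. Operates on the parallel documents/distances lists that a
# ChromaDB query returns. The 0.8 cutoff is illustrative, not our tuned one.
def filter_memories(
    documents: list[str],
    distances: list[float],
    max_distance: float = 0.8,
) -> list[str]:
    """Drop retrieved memories whose distance exceeds the cutoff."""
    return [
        doc for doc, dist in zip(documents, distances)
        if dist <= max_distance
    ]

docs = ["brother's cat passed away", "favourite hawker stall", "lost his job"]
dists = [0.31, 1.42, 0.76]
print(filter_memories(docs, dists))
# → ["brother's cat passed away", "lost his job"]
```

With the gate in place, a neutral message like "what should I cook tonight" no longer drags in an old grief memory just because it was the nearest neighbor.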
Accomplishments that we're proud of
- True Contextual Awareness: In our demo, you can see the bot remember specific details (like "Tom" losing his job, or the grief over a pet cat). It doesn't just reply; it connects the dots.
- Speed & Efficiency: By running embeddings and vector storage locally, we avoid extra network round-trips on every memory lookup, keeping response latency low enough that the conversation feels natural and fluid.
- Crisis Intervention: We successfully implemented a safety protocol that detects high-risk keywords and seamlessly injects HTML/CSS cards with real Singapore helpline numbers, bridging the gap between AI support and human intervention.
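The resource card itself is plain HTML injected into the chat. A sketch of how it could be built — the helper name and styling are illustrative; in the app the string is rendered with Streamlit's `st.markdown(..., unsafe_allow_html=True)`:

```python
# Sketch of the crisis-resource card. Styling and the helper name are
# illustrative; the Streamlit rendering call is shown commented below.
def helpline_card(helplines: dict[str, str]) -> str:
    """Build an HTML card listing support helplines."""
    rows = "".join(
        f"<p><strong>{name}</strong>: {number}</p>"
        for name, number in helplines.items()
    )
    return (
        '<div style="border:2px solid #d9534f; border-radius:8px; '
        'padding:12px; background:#fff5f5;">'
        "<h4>You are not alone. Please reach out:</h4>"
        f"{rows}</div>"
    )

card = helpline_card({
    "Samaritans of Singapore (SOS)": "1767",
    "National Care Hotline": "1800-202-6868",
})

# In the Streamlit app:
# import streamlit as st
# st.markdown(card, unsafe_allow_html=True)
```

Rendering real, clickable local numbers inside the conversation (rather than a generic "seek help" message) is what bridges the AI chat to human intervention.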
What we learned
- Empathy is an Engineering Challenge: Coding "kindness" requires precise prompt engineering and high-quality data.
- The Power of Small Models: We learned that with the right RAG setup, a smaller, efficient model like Gemma can outperform larger models in specific, personalized tasks.
- Care is 360 Degrees: Through our research, we learned that you cannot help the senior without helping the caregiver. The solution must address the dyad, not just the individual.
What's next for Help4Good
- HelpTheraAI (Agentic Visual Support): Taking inspiration from agentic video workflows, we plan to evolve our text-based agent into HelpTheraAI—a fully visual "Theraflix" experience. By leveraging Gemini's multimodal capabilities, the AI will generate dynamic, comforting avatars to deliver advice with human-like warmth. We also aim to implement visual Reminiscence Therapy, where the AI instantly transforms a senior's spoken memories into soothing video streams. This visual-first approach is a game-changer for seniors with cognitive decline and positions Help4Good as a scalable, commercial-ready platform for holistic care.
- Voice-First Accessibility: We plan to integrate OpenAI's Whisper model for high-fidelity Speech-to-Text. Since many elderly users struggle with small screens and typing, enabling seamless voice interaction ensures Help4Good is accessible to everyone, regardless of dexterity or tech-literacy.
- Hyper-Local Dialect Support: To truly serve the heart of the Singaporean community, we aim to fine-tune our models to understand and respond in local dialects (Hokkien, Cantonese, Teochew, and Singlish), breaking down the language barrier for our pioneers.
- Wearable "Bio-Feedback" Integration: We plan to close the loop on caregiver stress by integrating with smartwatches. If the system detects rising heart rate or stress markers, it will push real-time "Micro-Interventions" (e.g., a "Breathe" guide) directly to the caregiver's wrist before burnout sets in.
Built With
- chromadb
- gemma
- google-genai
- huggingface
- python
- rag
- sentence-transformers
- streamlit