Inspiration

We built Remember Me for people living with Alzheimer’s and other memory-related conditions, and for their caregivers. The core inspiration was emotional: forgetting familiar faces, names, and recent conversations can be stressful and isolating. We wanted to create a gentle assistive tool that helps users feel more confident in daily social interactions.

What it does

Remember Me is a memory-support companion (not a diagnostic tool) that helps with day-to-day recall:

  • Recognizes familiar faces and links them to saved profiles.
  • Captures and processes conversation audio.
  • Generates transcripts with speaker labels (who said what).
  • Stores conversation/audio embeddings for semantic retrieval later.
  • Helps users and caregivers review recent interactions and context.
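As a rough illustration of the "who said what" step above, here is a minimal sketch of aligning speaker turns with transcript segments by timestamp overlap. The data shapes and function names are hypothetical, not our exact pipeline:

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the overlap between two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def label_segments(transcript_segments, speaker_turns):
    """Assign each transcript segment the speaker whose turn overlaps it most.

    transcript_segments: [{"start": s, "end": e, "text": ...}, ...]
    speaker_turns:       [{"start": s, "end": e, "speaker": ...}, ...]
    """
    labeled = []
    for seg in transcript_segments:
        best = max(
            speaker_turns,
            key=lambda t: overlap(seg["start"], seg["end"], t["start"], t["end"]),
            default=None,
        )
        labeled.append({**seg, "speaker": best["speaker"] if best else "unknown"})
    return labeled
```

Overlap-based assignment is a common way to combine a diarizer's speaker turns with a transcriber's timestamped text when the two systems run independently.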

How we built it

We built the project with a mobile + backend architecture:

  • Mobile app for capture and user-facing memory prompts.
  • InsightFace deployed on the backend for face recognition inference.
  • Supabase for backend workflows and storage.
  • Supabase Postgres + pgvector for storing and querying audio embeddings.
  • Speaker diarization pipeline that produces speaker-labeled transcripts.
  • Tight mobile-backend integration, tuned for low latency so memory prompts feel near real-time.
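To show what the pgvector retrieval step does, here is a minimal in-memory sketch of ranking stored conversation embeddings by cosine distance, the same quantity pgvector's `<=>` operator computes server-side. The table schema in the comment and the function names are illustrative assumptions:

```python
import math

def cosine_distance(a, b):
    """Cosine distance (1 - cosine similarity), as pgvector's `<=>` computes."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def nearest_conversations(query_embedding, stored, k=3):
    """Rank stored (id, embedding) pairs by cosine distance to the query.

    In production the ranking happens inside Postgres, e.g. (schema hypothetical):
        SELECT id FROM conversations ORDER BY embedding <=> %s LIMIT %s;
    """
    ranked = sorted(stored, key=lambda row: cosine_distance(query_embedding, row[1]))
    return [row_id for row_id, _ in ranked[:k]]
```

Doing the distance computation in the database keeps the mobile round-trip to a single query, which matters for the latency goal above.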

Challenges we ran into

  • Audio uploads to Supabase storage: larger files and unstable network conditions caused intermittent failures, so we had to add retry handling.
  • Latency pressure: We needed fast responses for a smooth assistive experience.
  • Diarization quality: Speaker labeling is harder in noisy/overlapping speech.
  • Sensitive use case requirements: Since this is for vulnerable users, reliability, clarity, and privacy mattered even more than usual.
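The upload failures above were handled with retries. A minimal sketch of exponential backoff around an upload call, with the attempt limits and error type as illustrative assumptions rather than our exact Supabase client code:

```python
import time

def upload_with_retry(upload_fn, data, max_attempts=4, base_delay=0.5):
    """Call upload_fn(data), retrying on network-style failures.

    Waits base_delay, 2*base_delay, 4*base_delay, ... between attempts,
    and re-raises the last error once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return upload_fn(data)
        except OSError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

On weak networks, backoff avoids hammering the server with immediate retries while still recovering from transient drops.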

Accomplishments that we're proud of

  • Delivered an end-to-end prototype tailored to memory support use cases.
  • Integrated face recognition + speaker-aware transcripts + vector retrieval in one flow.
  • Deployed InsightFace backend inference with mobile integration.
  • Built a foundation that can support both patients and caregivers in practical daily scenarios.
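The recognition step in that flow reduces to comparing a detected face's embedding (InsightFace's ArcFace-style models emit per-face embedding vectors) against saved profile embeddings. A minimal sketch, where the threshold value and function names are illustrative and would be tuned against real embeddings:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_profile(face_embedding, profiles, threshold=0.35):
    """Return the best-matching saved profile name, or None for unknown faces.

    profiles: {"name": embedding, ...}. Returning None below the threshold
    is what lets the app say "unfamiliar person" instead of guessing.
    """
    best_name, best_sim = None, -1.0
    for name, emb in profiles.items():
        sim = cosine_similarity(face_embedding, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else None
```

The explicit unknown case matters for this audience: a confident wrong name is worse for a user with memory impairment than an honest "I don't recognize them."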

What we learned

  • Assistive AI must be dependable and simple, not just technically impressive.
  • Real-world performance depends heavily on integration and error handling.
  • Speaker-aware transcripts are much more useful than plain transcription for memory recall.
  • Privacy, consent, and data handling are critical when working with health-adjacent experiences.

What's next for Remember Me

  • Improve robustness for audio uploads and weak-network/offline scenarios.
  • Further optimize recognition and diarization accuracy in real-life environments.
  • Add caregiver-focused features (shared memory summaries, reminders, care notes, remote access).
  • Strengthen privacy controls (consent flows, retention controls, encryption).
  • Run user testing with caregivers/patients to improve accessibility and usability.

Built With

  • InsightFace
  • Supabase (Postgres + pgvector)