Memora — Ask your life, get the proof
Inspiration
I kept asking simple questions with complicated answers—“When did I last renew my driver’s license?” “Where’s the receipt?” “When is my friend’s birthday?” “When was the last time I was here?”
The proof existed (emails, PDFs, photos, notes) but was scattered across apps. I wanted a private, voice-first agent that returns a clear answer with receipts in seconds.
What it does
Memora lets you ask any life-admin question by voice and get a concise answer plus the exact proof (PDFs, photos, notes).
It highlights the supporting snippet, shows evidence thumbnails, and can set follow-ups (e.g., reminders before expiration). All data stays private by default.
How we built it
Voice & intent: ElevenLabs transcribes multilingual speech → OpenAI converts the question into a strict Elasticsearch query plan.
Search & evidence: Elasticsearch retrieves the latest relevant “moment” and returns highlights plus nested artifacts (files/photos).
Smart storage: On ingest, Fireworks embeddings add semantic recall across languages; originals live in Google Cloud Storage, shared only via short-lived signed URLs.
App layer: Next.js + React UI with Convex/Cloud Run actions orchestrating Plan → Search → Evidence → Answer; security enforced server-side (per-user filters, minimal fields).
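The Plan → Search step above can be sketched as a single server-side query builder. This is a minimal illustration, not Memora's actual code: the plan shape (`keywords`, `dateFrom`, `dateTo`) and the index field names (`user_id`, `summary`, `occurred_at`, `artifacts`) are assumptions; the point is that the per-user filter and the minimal `_source` list are applied server-side, so client input can never widen the scope.

```typescript
// Hypothetical shape of the query plan the LLM planner emits.
interface QueryPlan {
  keywords: string[];
  dateFrom?: string; // ISO date, optional
  dateTo?: string;
}

// Build the Elasticsearch request body. The per-user filter and the
// minimal _source list are added here, on the server, regardless of
// what the planner produced. Field names are illustrative.
function buildSearchBody(userId: string, plan: QueryPlan) {
  const filter: Array<Record<string, unknown>> = [
    { term: { user_id: userId } }, // enforced on every query
  ];
  if (plan.dateFrom || plan.dateTo) {
    filter.push({
      range: { occurred_at: { gte: plan.dateFrom, lte: plan.dateTo } },
    });
  }
  return {
    query: {
      bool: {
        must: [{ match: { summary: plan.keywords.join(" ") } }],
        filter,
      },
    },
    highlight: { fields: { summary: {} } }, // supporting snippet for the UI
    _source: ["summary", "occurred_at", "artifacts"], // minimal fields only
    size: 1, // the latest relevant "moment"
    sort: [{ occurred_at: "desc" }],
  };
}
```

Because the builder is a pure function, it can be unit-tested without an Elasticsearch instance.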
Challenges we ran into
Multilingual recall: Matching “巴黎” with “Paris” and messy filenames—solved with multilingual embeddings + a short English summary field.
Latency budget (≤3–5s): Parallelizing STT/plan/search and trimming LLM context.
Evidence UX: Normalizing file types, thumbnails, and graceful fallback for expired signed links.
Privacy by default: Keeping LLM payloads minimal and guaranteeing per-user filtering on every query.
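The expired-link fallback mentioned above can be handled client-side before rendering a thumbnail. The sketch below checks the standard `X-Goog-Date` / `X-Goog-Expires` query parameters that GCS V4 signed URLs carry; the function name and the "treat malformed as expired" policy are our illustrative choices, not a library API.

```typescript
// Check whether a GCS V4 signed URL has expired, so the UI can fall
// back to requesting a fresh link instead of showing a broken image.
// V4 signed URLs carry X-Goog-Date (YYYYMMDDTHHMMSSZ) and
// X-Goog-Expires (lifetime in seconds) as query parameters.
function isSignedUrlExpired(url: string, now: Date = new Date()): boolean {
  const params = new URL(url).searchParams;
  const date = params.get("X-Goog-Date"); // e.g. 20240101T000000Z
  const expires = params.get("X-Goog-Expires"); // seconds
  if (!date || !expires) return true; // treat malformed links as expired
  // Rewrite the compact timestamp into ISO 8601 so Date.parse accepts it.
  const signedAt = Date.parse(
    date.replace(
      /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
      "$1-$2-$3T$4:$5:$6Z",
    ),
  );
  return now.getTime() > signedAt + Number(expires) * 1000;
}
```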
Accomplishments that we're proud of
Evidence-first answers: Every response includes receipts you can open instantly.
Voice-native flow: Ask naturally in your language; Memora handles the rest.
Tight sponsor fit: Elastic for fast, precise search; Google Cloud for secure storage; ElevenLabs + OpenAI + Fireworks for a delightful and reliable experience.
What we learned
Users trust answers when they’re grounded with proof.
Separating the pipeline into Plan → Search → Synthesize makes the system robust and debuggable.
Strong defaults (short-lived signed URLs, server-side filters, minimal _source) provide real privacy without slowing the demo.
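The Plan → Search → Synthesize separation can be expressed by injecting each stage, so any one of them can be stubbed in tests or swapped for another provider without touching the flow. This is a sketch under assumed names and shapes, not the actual Memora interfaces:

```typescript
// Each stage is an injected async function; the orchestrator only
// sequences them, which keeps every hand-off inspectable.
interface PipelineSteps {
  transcribe: (audio: Uint8Array) => Promise<string>; // e.g. ElevenLabs STT
  plan: (question: string) => Promise<object>;        // e.g. OpenAI planner
  search: (plan: object) => Promise<object[]>;        // e.g. Elasticsearch
  synthesize: (question: string, hits: object[]) => Promise<string>;
}

async function answerQuestion(audio: Uint8Array, steps: PipelineSteps) {
  const question = await steps.transcribe(audio);
  const plan = await steps.plan(question);
  const hits = await steps.search(plan);
  const answer = await steps.synthesize(question, hits);
  // Returning every intermediate makes failures easy to localize.
  return { question, plan, hits, answer };
}
```

With stubbed stages, the whole flow runs in a unit test with no network calls.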
What's next for Memora
Hybrid retrieval & reranking: Add kNN + rerank for even smarter results at scale.
Automations: Proactive “before it expires” briefings and weekly evidence-backed recaps.
Deeper sources: Calendar/email connectors and OCR for scanned docs.
Mobile app & offline cache: Faster capture and on-the-go answers, still private by default.
Collaborate with Google Cloud and other storage providers: batch-import photos and documents into Memora.
Built With
- ai
- cloud-run
- cloud-scheduler
- convex
- csp-headers
- css
- devin
- docker
- elasticsearch
- elevenlabs
- eslint
- fireworks
- google-cloud
- google-cloud-storage
- https/tls
- next.js
- node.js
- openai
- pnpm
- prettier
- pub/sub
- react
- secret-manager
- tailwind
- typescript
- windsurf