Inspiration
Modern digital life is fragmented. Our memory is spread across Gmail threads, random Chrome searches, screenshots we forgot we took, voice notes, and call recordings that are never revisited. Existing AI assistants answer general questions well, but they’re terrible at answering questions about your own life. We wanted to build a system that works like Google—but only for your personal digital footprint, with privacy and recall as the core primitives. OneDrop was inspired by the simple idea that your data already contains your memory; it just needs a better interface.
What it does
OneDrop is a multimodal, voice-based personal memory agent.
You can talk to it and ask questions like:
- “Did I receive any email from Sequoia Capital?”
- “When did I last discuss this idea over a call?”
- “Find that screenshot I took about AI infra pricing.”
OneDrop indexes and reasons over:
- Gmail
- Chrome browser history
- Photos and screenshots
- Audio and call recordings
It uses multimodal RAG to retrieve relevant context and respond conversationally via voice (powered by ElevenLabs). Instead of searching individual apps, users query their entire digital footprint as a single, unified memory layer.
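The core idea of querying one unified memory layer can be sketched in plain JavaScript. This is a toy illustration, not our production pipeline: real embeddings come from Raindrop's multimodal ingestion, and the three-dimensional vectors and sample entries below are invented for the example.

```javascript
// Toy unified memory index: every modality lives in the same index,
// distinguished only by a tag. Vectors here are fake stand-ins for
// real embeddings produced during ingestion.
const index = [
  { modality: "email",      text: "Sequoia Capital intro thread", vec: [0.9, 0.1, 0.0] },
  { modality: "screenshot", text: "AI infra pricing chart",       vec: [0.1, 0.9, 0.1] },
  { modality: "call",       text: "Pitch rehearsal recording",    vec: [0.2, 0.2, 0.9] },
];

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// One query vector searches every modality at once -- no per-app silos.
function retrieve(queryVec, k = 2) {
  return index
    .map((item) => ({ ...item, score: cosine(queryVec, item.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const hits = retrieve([1, 0, 0]);
console.log(hits[0].modality); // "email" ranks first for this query
```

The point of the sketch is the shape of the system: retrieval never asks "which app?", only "which memory?".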
How we built it
Vibe-coded using the Raindrop-MCP-powered Gemini CLI, Kiro, and Raindrop Code, on top of:
- LiquidMetal AI’s Raindrop platform for multimodal ingestion, embedding, and RAG orchestration
- JavaScript + React + Vite for a fast, modern frontend
- WorkOS for authentication and user management
- ElevenLabs for natural-sounding voice interaction
- HeroUI for clean, minimal interface components
- Framer Motion for smooth, natural transitions
- Fastify for a lightweight, high-performance backend
We designed the system so all modalities—text, images, and audio—are indexed into a unified retrieval layer, enabling cross-modal reasoning rather than siloed search.
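The "unified retrieval layer" amounts to normalizing every source into one record shape before indexing. The sketch below is illustrative only; the field names and `kind` values are hypothetical, not Raindrop's actual schema, and in the real system OCR and transcription happen upstream of this step.

```javascript
// Hedged sketch: map each raw source into a single common record shape
// so text, images, and audio all land in the same index.
function normalize(source) {
  switch (source.kind) {
    case "email":
      return { modality: "text", content: `${source.subject}\n${source.body}`, ts: source.receivedAt };
    case "screenshot":
      // The real pipeline runs OCR / captioning to get text out of the image.
      return { modality: "image", content: source.ocrText, ts: source.capturedAt };
    case "call":
      // Audio is transcribed before it is embedded and indexed.
      return { modality: "audio", content: source.transcript, ts: source.recordedAt };
    default:
      throw new Error(`unknown source kind: ${source.kind}`);
  }
}

const record = normalize({
  kind: "email",
  subject: "Intro",
  body: "Hi there",
  receivedAt: "2024-06-01",
});
console.log(record.modality); // "text"
```

Once everything shares one shape, cross-modal reasoning is just ordinary retrieval over one index rather than three siloed searches.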
Challenges we ran into
- Signal vs noise: Personal data is messy. Browser history and screenshots are especially noisy, and naive retrieval leads to irrelevant results.
- Latency: Voice-based interaction demands fast retrieval and generation; anything slow breaks the experience.
- UX clarity: Explaining “personal RAG over your life” without overwhelming users required careful interface and prompt design.
- Privacy boundaries: Designing access patterns that feel powerful without feeling invasive.
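The signal-vs-noise problem pushed us toward aggressive post-retrieval filtering. This sketch shows the general shape of that idea under assumed parameters (the threshold and half-life values here are illustrative, not our tuned production numbers): drop low-similarity hits outright, then decay the scores of stale items so an old browser tab cannot outrank a recent email.

```javascript
// Hedged sketch: hard similarity cutoff plus exponential recency decay.
const DAY_MS = 24 * 60 * 60 * 1000;

function filterAndRank(hits, { minScore = 0.6, halfLifeDays = 30, now = Date.now() } = {}) {
  return hits
    .filter((h) => h.score >= minScore) // cutoff removes irrelevant noise
    .map((h) => {
      const ageDays = (now - h.ts) / DAY_MS;
      const decay = Math.pow(0.5, ageDays / halfLifeDays); // older = weaker
      return { ...h, adjusted: h.score * decay };
    })
    .sort((a, b) => b.adjusted - a.adjusted);
}

const now = Date.now();
const ranked = filterAndRank(
  [
    { id: "old-tab",      score: 0.95, ts: now - 120 * DAY_MS },
    { id: "recent-email", score: 0.8,  ts: now - 2 * DAY_MS },
    { id: "noise",        score: 0.3,  ts: now },
  ],
  { now }
);
// "noise" is filtered out; "recent-email" outranks the stale tab.
console.log(ranked.map((h) => h.id));
```

This also helped latency: filtering early keeps the context passed to generation small, which matters when the answer has to come back as voice in near real time.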
Accomplishments that we're proud of
- Built a working end-to-end multimodal memory agent
- Achieved cross-modal recall (voice → image → email → browser context)
- Created a natural voice interface that feels conversational rather than robotic
- Successfully leveraged LiquidMetal’s Raindrop platform to unify multiple data types into a single RAG system
- Kept the system modular and extensible instead of hard-coding use cases
What we learned
- Personal AI is fundamentally different from general AI—recall quality matters more than model cleverness.
- Multimodal RAG is powerful, but only if retrieval is aggressively filtered and contextual.
- Infrastructure choices matter more than model size when building real-world AI systems.
- Users don’t want “AI magic”—they want trustworthy memory.
What's next for OneDrop
- On-device and encrypted storage options for stronger privacy guarantees
- Finer-grained memory controls (what to remember, forget, or summarize), powered by Raindrop’s SmartMemory
- Calendar, Slack, and file system integrations
- Proactive memory surfacing (reminding users of relevant past context at the right moment)
OneDrop aims to become a complete personal memory agent, not just another assistant—something that augments human recall rather than replacing thinking.
Demo credentials
Username: judge@liquidmetal.ai
Password: onedrop
Built With
- fastify
- javascript
- raindrop
- react
- vite
- workos