Inspiration
RecallAI was inspired by how humans naturally remember information imperfectly. People often recall only partial details, while most digital assistants require exact inputs. The aim was to build a system that can reason over incomplete recall instead of relying on rigid reminders.
What it does
RecallAI is a multimodal, voice-driven cognitive recall assistant that stores personal information from voice input, images, and PDF documents. It extracts key details such as dates and events, structures them into memory records, and reconstructs context to answer vague or incomplete user queries.
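The idea of answering a vague query against structured memories can be sketched as a small scoring loop. This is an illustrative sketch, not RecallAI's actual schema or matching logic: `MemoryRecord`, its `tags` field, and the term-overlap confidence score are all hypothetical stand-ins for the real system's reasoning.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    # Hypothetical structured memory entry; fields are illustrative only.
    text: str
    tags: set = field(default_factory=set)

def recall(query_terms: set, memories: list) -> tuple:
    """Return (best_record, confidence), where confidence is the fraction of
    query terms matched -- a simple stand-in for confidence-aware recall."""
    best, best_score = None, 0.0
    for m in memories:
        score = len(query_terms & m.tags) / len(query_terms) if query_terms else 0.0
        if score > best_score:
            best, best_score = m, score
    return best, best_score

memories = [
    MemoryRecord("Dentist appointment on March 3", {"dentist", "appointment", "march"}),
    MemoryRecord("Flight to Lisbon on April 12", {"flight", "lisbon", "april"}),
]
# A vague query: the user only remembers "something about the dentist in March".
record, confidence = recall({"dentist", "march"}, memories)
```

Because the score is a fraction rather than a yes/no match, the assistant can report low-confidence answers instead of guessing, which is the behavior the section above describes.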
How we built it
RecallAI was built using Firebase Studio for development and deployment, with the Gemini API handling reasoning and natural language interaction. Inputs from voice, images, and documents are converted into structured memory for reliable retrieval.
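The ingestion step described above can be sketched as prompt construction plus parsing of the model's reply into a structured record. This is a minimal sketch under assumptions: `build_prompt`, `parse_memory`, and the JSON field names are illustrative, and the model reply is simulated here rather than fetched from the Gemini API.

```python
import json

def build_prompt(transcript: str) -> str:
    # Hypothetical extraction prompt asking the model for structured JSON.
    return (
        "Extract key details from this note as JSON with keys "
        '"event", "date", and "people":\n' + transcript
    )

def parse_memory(model_reply: str) -> dict:
    """Parse the model's JSON reply into a structured memory entry."""
    memory = json.loads(model_reply)
    # Keep only the expected fields so a malformed reply cannot pollute storage.
    return {k: memory.get(k) for k in ("event", "date", "people")}

# In the real pipeline the reply would come from the Gemini API; simulated here.
simulated_reply = '{"event": "team dinner", "date": "2024-05-10", "people": ["Ana"]}'
entry = parse_memory(simulated_reply)
```

Whitelisting the expected keys is one simple way to keep retrieval reliable when the upstream model occasionally returns extra or malformed fields.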
Challenges we ran into
Handling incomplete recall without producing incorrect responses was challenging. Integrating multiple input types into a single reasoning pipeline and balancing flexibility with reliability required careful design.
Accomplishments that we're proud of
We built a system that supports context-based memory reconstruction, multimodal memory capture, and confidence-aware recall while maintaining clear non-medical assistive boundaries.
What we learned
We learned how to design AI systems that reason under uncertainty, structure unstructured data, and move beyond keyword-based retrieval toward human-like memory recall.