Inspiration
The inspiration for this project came from a simple but deeply human problem: many important memories do not survive as complete stories. They remain as fragments: a smell, a gesture, an old ticket, a hallway, a brief encounter, or a sudden emotional echo. Traditional journals and note apps can store those fragments, but they do not help users understand how those pieces relate to each other, or how they gradually form a readable life narrative.
That is the problem MemoryCapsule is designed to solve. We did not want to build “another AI journal.” We wanted to build a product that helps users reconstruct memory, identify clues, organize chapters, and ultimately turn scattered moments into a visual story map.
This was also a very natural fit for MeDo, because the idea required much more than a single page or a single model call. It needed repeated iteration across product logic, frontend interaction, backend orchestration, and AI-assisted visualization.
What it does
MemoryCapsule is an AI-powered memory reconstruction app. Users can enter a memory fragment and optionally upload an image as supporting evidence. The system extracts clues from both text and images, including people, places, environment, timing, objects, and emotional cues, then checks for similar memories and decides whether the moment should:
- join an existing chapter
- become a new chapter
- or remain as a standalone fragment
After that, AI helps reconstruct the memory into a more coherent narrative while preserving its meaning, and saves it into a visual Story Map.
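The placement decision described above can be sketched as a small routing function. This is an illustrative stand-in, not our actual implementation: the threshold values, type names, and similarity inputs are all assumptions for the sake of the example.

```typescript
// Hypothetical sketch of the chapter-placement decision.
// Thresholds and field names are illustrative, not the real implementation.
type Placement = "existing-chapter" | "new-chapter" | "standalone";

interface SimilarityResult {
  bestChapterScore: number; // similarity to the closest existing chapter, 0..1
  fragmentCount: number;    // related fragments not yet grouped into a chapter
}

function placeMoment(sim: SimilarityResult): Placement {
  // Strong overlap with an existing chapter: join it.
  if (sim.bestChapterScore >= 0.75) return "existing-chapter";
  // Enough loosely related fragments can seed a brand-new chapter.
  if (sim.fragmentCount >= 2 && sim.bestChapterScore >= 0.4) return "new-chapter";
  // Otherwise the moment stays on its own.
  return "standalone";
}
```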
The core problem it solves is not “how to write more text,” but “how to help users see the structure behind fragmented memories.” That is the main value of the product.
How we built it
We used MeDo as the primary co-building tool for this project. The app was not generated from a single prompt; it was built through repeated multi-turn iteration across product logic, frontend interaction, backend orchestration, and AI flow design.
We used MeDo to help with:
- product information architecture
- page flow and interaction logic
- Supabase Edge Function orchestration
- the end-to-end pipeline for clue extraction, deduplication, reconstruction, and chapter placement
- the structure and rendering logic of the Story Map
- the direction for multimodal image understanding
Technically, the app uses a fullstack architecture:
- React for the frontend
- Supabase for storage and backend services
- Supabase Edge Functions for AI orchestration
- a custom graph layer for the Story Map
We also broke the AI workflow into multiple stages:
- memory input
- clue extraction
- clue fusion and review
- deduplication
- reconstruction
- chapter placement
- save and Story Map rendering
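The staged breakdown above can be modeled as a sequence of independent transforms over a memory record. The sketch below is a hypothetical illustration of that shape, assuming each stage is a pure function; the placeholder heuristics are not our real extraction logic.

```typescript
// Illustrative sketch of the staged pipeline. Stage names mirror the list
// above, but every function here is a hypothetical stand-in.
interface Memory {
  text: string;
  clues: string[];
  chapterId: string | null;
}

type Stage = (m: Memory) => Memory;

// Modeling each AI stage as a pure transform lets stages be reordered,
// tested, or retried independently.
const extractClues: Stage = (m) => ({
  ...m,
  clues: m.text.split(/\s+/).filter((w) => w.length > 4), // placeholder heuristic
});

const deduplicate: Stage = (m) => ({ ...m, clues: [...new Set(m.clues)] });

const runPipeline = (stages: Stage[], input: Memory): Memory =>
  stages.reduce((acc, stage) => stage(acc), input);

const result = runPipeline([extractClues, deduplicate], {
  text: "hallway hallway ticket evening evening",
  clues: [],
  chapterId: null,
});
```

Because each stage only sees the accumulated `Memory` record, a flaw in one stage is contained rather than silently corrupting the whole chain.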
If we had to summarize how we used MeDo, the most accurate answer is this: MeDo did not just help write code. It helped shape the product itself through sustained multi-turn collaboration.
Challenges we ran into
The biggest challenge was not simply making the AI work, but making it trustworthy. Memory is subjective and fragile. If the system makes any of these mistakes, users lose trust immediately:
- merging different memory threads incorrectly
- treating weak thematic overlap as duplication
- inventing details during reconstruction that the user never provided
- over-filtering the Story Map until moment fragments disappear
- misaligning chapter recommendations and default UI state
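The second failure mode, mistaking weak thematic overlap for duplication, can be guarded against with a strict similarity threshold over clue sets. The sketch below uses Jaccard similarity as one plausible approach; the metric choice and threshold are assumptions, not our exact implementation.

```typescript
// Hypothetical duplication check: two memories only count as duplicates
// when their clue sets overlap strongly, so weak thematic overlap
// (e.g. both merely mention "school") is not enough.
function jaccard(a: string[], b: string[]): number {
  const setA = new Set(a);
  const setB = new Set(b);
  const intersection = [...setA].filter((x) => setB.has(x)).length;
  const union = new Set([...setA, ...setB]).size;
  return union === 0 ? 0 : intersection / union;
}

// Threshold is illustrative; a real system would tune it against examples.
const isDuplicate = (a: string[], b: string[]): boolean => jaccard(a, b) >= 0.6;
```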
Multimodality was also a challenge. Images should not override the user’s own narrative. They are better used as supporting evidence, not as the main storytelling source. That is why we kept refining the architecture around the principle that text defines meaning, while images contribute clues.
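The "text defines meaning, images contribute clues" principle can be expressed as an asymmetric merge: text clues always win, and image clues only fill gaps the narrative left open. The names and shapes below are assumptions for illustration.

```typescript
// Sketch of text-first clue fusion. The Clue shape and clue "kinds"
// (place, object, etc.) are hypothetical.
interface Clue {
  value: string;
  source: "text" | "image";
}

// Text clues take precedence; an image clue is only kept for a clue kind
// the user's own narrative did not cover.
function fuseClues(
  textClues: Map<string, Clue>,
  imageClues: Map<string, Clue>
): Map<string, Clue> {
  const fused = new Map(textClues);
  for (const [kind, clue] of imageClues) {
    if (!fused.has(kind)) fused.set(kind, clue);
  }
  return fused;
}
```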
Another challenge was completeness. This project is not a single feature. It is a full chain that connects input, extraction, deduplication, reconstruction, saving, and visualization. A small flaw in one stage can degrade the entire experience.
Accomplishments that we're proud of
What we are most proud of is not just building “an app that can call AI,” but building a full memory reconstruction pipeline. MemoryCapsule can now:
- extract structured clues from fragmented memories
- support multimodal clue fusion with uploaded images
- check for similar memories and duplication risk
- decide whether a moment belongs to an existing chapter, a new chapter, or a standalone fragment
- generate a Story Map that is actually readable after saving
If we had to highlight one “most impressive feature MeDo helped generate,” it would be the entire Story Map pipeline. It is not a standalone page. It is the result of connecting clue extraction, memory reconstruction, chapter placement, saving, and graph visualization into one coherent experience. That is what transforms the project from a memory note tool into a narrative memory system.
That is also what we most want judges to notice: MeDo did not just help us make isolated screens. It helped us build a product capability with real structure.
What we learned
The most important thing we learned is that for a product like this, the value of AI is not “how beautifully it writes,” but “whether it helps users understand structure.” If the system only rewrites user input into more literary language, it has not actually solved the problem. What really matters is:
- which clues deserve to be preserved
- which memories belong to the same thread
- which moments should become chapters
- which moments should remain fragments
- how those memories form bridges between one another
We also learned that MeDo's multi-turn building style is especially well-suited to this kind of app. The hard part is not creating a button or a page. The hard part is making the entire chain stable, trustworthy, and coherent through repeated iteration.
What's next for MemoryCapsule
Next, we want to push MemoryCapsule beyond a strong hackathon prototype into a more mature product. The next priorities include:
- stabilizing chapter placement and default recommendation behavior
- improving the Story Map so chapters, fragments, and bridges feel more balanced
- strengthening multimodal capability so image analysis and clue fusion are more reliable
- improving chapter naming to reduce formulaic outputs
- increasing overall product polish so users can browse and revisit their life patterns more naturally
Longer term, we want MemoryCapsule to become more than a place to store memories. We want it to become a system that helps people organize life experience and understand their personal narrative.
Built With
- medo
- react
- typescript