Inspiration
Many of the people who shape us carry stories, sacrifices, traditions, phrases, recipes, and advice that never get preserved. In immigrant families, that loss can happen quietly across generations. Children grow up speaking English while grandparents speak Hindi, Punjabi, Korean, Vietnamese, Arabic, Spanish, or another language that slowly becomes harder to understand. A voicemail becomes something you can hear but not fully translate. A recipe becomes ingredients without the voice behind it. A video becomes another forgotten clip without context.
That feeling is the reason we built Before I Forget.
Gratitude should not only happen after loss. It should happen while there is still time to ask, listen, and preserve what made someone feel like home.
Ask before it becomes impossible
What it does
Before I Forget is a memory-preservation platform built around people, not files.
Instead of opening a dashboard full of random uploads, users create profiles for loved ones: Nani, Baba, Papá, Halmeoni, a parent, a sibling, a mentor, or an overseas friend. Each profile becomes a living archive of that person’s stories, photos, videos, recipes, voice notes, translated conversations, and AI-generated keepsake cards.
A user can open Nani’s profile and see memories connected to her: a voice note in Hindi, a photo from a family kitchen, a video from an old phone, a story about chai before school. When a voice memory is uploaded, the platform can generate a transcript in the original language, translate it into English, and turn the memory into a keepsake card with a summary, cultural context, emotional lesson, and gratitude message.
The goal is not to make another social app or cloud drive. We wanted the app to feel like opening a digital memory box. Warm colors, paper-like textures, serif typography, and keepsake-style cards were chosen so the experience feels personal instead of corporate.
Most apps treat translation as an extra feature. We designed multilingual memory as the default. A grandchild who no longer fully understands their grandmother’s language can still reconnect with her words, her voice, and the meaning behind what she said.
How we built it
Before I Forget was built for the Serverless with Lambda track. The frontend is a React app deployed on Vercel, while the core processing infrastructure runs on AWS serverless services.
We use Amazon API Gateway as the secure entry point between the frontend and backend. AWS Lambda handles API logic, presigned upload URL generation, event processing, media metadata updates, and pipeline functions. Files upload directly from the browser to Amazon S3 using presigned URLs, which keeps large media payloads out of Lambda and avoids API Gateway timeout and payload-size issues.
S3 is the media backbone. Photos, audio, videos, generated thumbnails, and generated audio keepsakes are stored there. Amazon DynamoDB is the metadata backbone. We use a single-table, profile-centric design organized around User → Profile → Memory → Media, so every file stays connected to the person and memory it belongs to.
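A minimal sketch of that single-table key scheme, with assumed prefix names (the production attribute names may differ):

```javascript
// Illustrative partition/sort key scheme for the profile-centric single table.
// Each level nests under its parent: User -> Profile -> Memory -> Media.
function keysFor(item) {
  switch (item.type) {
    case "profile": // a loved one's profile under a user
      return { PK: `USER#${item.userId}`, SK: `PROFILE#${item.profileId}` };
    case "memory": // a memory under a profile
      return { PK: `PROFILE#${item.profileId}`, SK: `MEMORY#${item.memoryId}` };
    case "media": // a file under a memory
      return { PK: `MEMORY#${item.memoryId}`, SK: `MEDIA#${item.mediaId}` };
    default:
      throw new Error(`unknown item type: ${item.type}`);
  }
}
```

With keys like these, one query on `PK = "PROFILE#<id>"` with `SK begins_with "MEMORY#"` fetches everything tied to a person in a single request, which is what makes the profile page fast.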
The app has four event-driven media pipelines:
- Story pipeline: orchestrated with AWS Step Functions, which coordinates Lambda, Amazon Bedrock with Claude, Amazon Translate, DynamoDB, and Amazon Polly. This pipeline generates keepsake summaries, gratitude letters, translated content, and narrated audio.
- Photo pipeline: S3 upload triggers Lambda thumbnail processing and stores photo metadata in DynamoDB.
- Audio pipeline: S3 upload triggers transcription with Amazon Transcribe, translation with Amazon Translate, and transcript metadata storage in DynamoDB.
- Video pipeline: S3 upload triggers Lambda with an ffmpeg layer to generate thumbnails and preview metadata.
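The S3-triggered lanes share a common first step: deciding which pipeline an uploaded object belongs to. A minimal sketch of that routing, with illustrative extension lists rather than our exact configuration:

```javascript
// Map file extensions to pipelines (lists are illustrative).
const PIPELINES = {
  photo: ["jpg", "jpeg", "png", "gif", "heic"],
  audio: ["mp3", "m4a", "wav", "ogg"],
  video: ["mp4", "mov", "webm"],
};

// Route one S3 event record to a pipeline by its object key's extension.
function pipelineFor(record) {
  // S3 event keys are URL-encoded, with "+" standing in for spaces.
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
  const ext = key.split(".").pop().toLowerCase();
  for (const [pipeline, exts] of Object.entries(PIPELINES)) {
    if (exts.includes(ext)) return pipeline;
  }
  return "unknown";
}
```

In practice the same effect can also come from S3 prefix/suffix filters on separate event notifications, which avoids a routing Lambda entirely.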
Every media record includes an analysisStatus field, such as pending, processing, completed, or failed, plus an analysisResult field. This lets us add future AI processing, such as photo search, video summaries, memory clustering, or conversational memory chatbots, without redesigning the database.
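The allowed status transitions can be expressed as a small guard that runs before any DynamoDB update. This is a sketch of the pattern; the production checks may differ:

```javascript
// Legal analysisStatus transitions for a media record.
const TRANSITIONS = {
  pending: ["processing"],
  processing: ["completed", "failed"],
  failed: ["processing"], // allow retries
  completed: [],          // terminal state
};

// Validate a transition before writing it; throw on anything illegal
// so a stale or duplicate pipeline event cannot corrupt a record.
function nextStatus(current, next) {
  if (!(TRANSITIONS[current] || []).includes(next)) {
    throw new Error(`invalid analysisStatus transition: ${current} -> ${next}`);
  }
  return next;
}
```

New AI steps slot in by reusing the same states: a future photo-search or clustering job just moves its record from pending through processing like any other pipeline.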
Architecture Diagram
Our architecture diagram is designed to be read left to right. On the left, the user interacts with the React frontend on Vercel. The frontend calls API Gateway and Lambda for secure backend actions, then receives presigned S3 URLs so media can upload directly to S3 without exposing AWS credentials in the browser.
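The browser side of that handoff can be sketched in a few lines. The endpoint path and response shape are assumptions for illustration; `fetchFn` is injectable so the flow can be exercised without a network:

```javascript
// Sketch of the browser upload flow: ask the API for a presigned URL,
// then PUT the file straight to S3.
async function uploadMedia(file, meta, fetchFn = fetch) {
  const res = await fetchFn("/api/upload-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...meta, filename: file.name }),
  });
  const { url, key } = await res.json();
  await fetchFn(url, { method: "PUT", body: file }); // browser -> S3, no AWS creds in the client
  return key; // stored with the memory so DynamoDB can reference the object
}
```

The presigned URL expires after a short window, so even though the browser talks to S3 directly, no long-lived credentials ever leave the backend.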
The center of the diagram shows the four parallel media pipelines: story, photo, audio, and video. The story lane is the most advanced because Step Functions orchestrates multiple asynchronous steps across Bedrock, Translate, Polly, Lambda, and DynamoDB. The photo, audio, and video lanes are event-driven from S3, and each processes media differently.
On the right, S3 and DynamoDB are the convergence points. S3 stores the actual media and generated files, while DynamoDB stores the profile, memory, media, transcript, status, and generated-output metadata.
The most architecturally important detail is the analysisStatus async pattern. It means every memory can move through processing states now, and later we can add new AI analysis steps without changing the core data model.
Challenges we ran into
One challenge was handling media uploads correctly in a serverless architecture. Large audio and video files should not pass through Lambda or API Gateway, so we used presigned S3 URLs. That made the upload flow more complex, but it also made the architecture more scalable and secure.
Another challenge was multilingual memory. Real families code-switch. A grandparent might speak mostly Hindi, then add English words, names, or cultural phrases that do not translate cleanly. We approached this by keeping both the original transcript and the translated version, instead of replacing one with the other.
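The record shape that falls out of that decision is simple: both versions live side by side. A sketch with assumed field names:

```javascript
// Illustrative bilingual transcript record: the source-language words are
// kept alongside the translation instead of being overwritten by it.
function transcriptRecord({ mediaId, sourceLang, original, translated }) {
  return {
    mediaId,
    original: { lang: sourceLang, text: original }, // e.g. "hi" for Hindi
    translation: { lang: "en", text: translated },
  };
}
```

Keeping the original means code-switched phrases that Translate renders awkwardly are never lost; the reader can always fall back to the words as they were actually spoken.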
The hardest product challenge was emotional pacing. A normal app tries to maximize uploads, clicks, and engagement. We were building something slower. We had to design the interface so it gave people space to listen, read, and reflect, rather than rushing them through another productivity flow.
Accomplishments that we're proud of
We are proud that the app is built around loved ones instead of folders. That single product decision shaped the data model, the frontend, and the emotional experience.
We are also proud of the architecture. The four-pipeline design makes each media type feel supported, while S3 and DynamoDB keep the system clean and scalable. The analysisStatus pattern gives us a future-ready foundation without overbuilding the demo.
The multilingual-by-default approach is another part we care about. The app does not assume that memory only matters in English. It preserves the original language alongside the translation, because sometimes the way someone says something is part of the memory.
We are also proud that the design does not feel like a generic dashboard. It feels closer to a keepsake box.
What we learned
Technically, we learned that serverless architecture is strongest when each service has a clear responsibility. Lambda is powerful, but it should not do everything. S3 is better for media storage, Step Functions is better for orchestration, Transcribe is better for speech, and DynamoDB is better for fast metadata access.
We also learned that building with emotion requires restraint. Not every memory needs to be summarized, ranked, or optimized. Sometimes the most important thing technology can do is preserve context: the original words, the voice, the language, the person, and the small details that made the memory real.
What's next for Before I Forget
Next, we want Before I Forget to become mobile-first so families can record memories naturally during calls, visits, holidays, or ordinary kitchen conversations.
We also want to add collaborative family profiles, shared timelines, conversational memory chatbots, AI-assisted storytelling, and personalized reminders based on family traditions. If the platform knows someone loved gardening, tea, cooking, or a specific festival, it could help future generations reconnect through meaningful prompts, activities, or gifts.
Before I Forget is for the grandchild who no longer fully understands their grandmother’s language, the immigrant family separated across countries, the son who realizes his father’s stories are fading, and the families who wish they had asked more questions while they still had the chance.
We don’t preserve files. We preserve their essence.
Built With
- amazon-api-gateway
- amazon-bedrock
- amazon-dynamodb
- amazon-polly
- amazon-transcribe
- amazon-translate
- amazon-web-services
- aws-lambda
- aws-sam
- aws-step-functions
- claude
- ffmpeg
- javascript
- lucide-react
- node.js
- react
- vercel
- vite