Inspiration
LifeLens began with a very personal question: “What if we could visually revisit the moments that shaped us?” I built this project in memory of my uncle, a doctor whose life was cut short by cancer. I remember seeing him in his small clinic, wearing his white coat and caring for patients with focus and kindness. Those moments mean everything to me, yet memories fade over time. I wanted to create something that could honor memories: something for people who wish they had just one more picture of the moments and people they loved. That idea grew into LifeLens.
What it does
LifeLens takes a written memory and turns it into a cinematic visual scene using structured data and deterministic AI. It reconstructs memories through a pipeline: Memory Text → Metadata Extraction → FIBO Structured JSON → Deterministic Image. No prompt engineering. No randomness, unless you want it. Just structured, interpretable, reproducible storytelling.
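To make the pipeline concrete, here is a hypothetical walk-through of the intermediate artifacts at each stage. Every field name below is an illustrative assumption, not FIBO's actual schema:

```python
# Hypothetical walk-through of the LifeLens pipeline stages.
# All field names here are illustrative assumptions, not FIBO's real schema.

memory_text = "My uncle in his small clinic, white coat on, caring for a patient."

# Stage 1: Metadata Extraction (rule-based, deterministic)
metadata = {
    "subjects": ["uncle", "patient"],
    "setting": "small clinic",
    "mood": ["warmth", "nostalgia"],
}

# Stage 2: FIBO Structured JSON — a scene description instead of a free-text prompt
structured_scene = {
    "scene": {
        "setting": metadata["setting"],
        "subjects": metadata["subjects"],
        "lighting": "soft morning light",
        "camera": {"angle": "eye-level", "framing": "medium shot"},
        "mood": "warm, nostalgic",
    },
    # Stage 3: a fixed seed means the same scene always renders the same image
    "seed": 424242,
}
```

Because every stage is a pure transformation of structured data, the same memory text always yields the same scene, which is what makes the final image reproducible.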
How we built it
LifeLens is a full-stack system combining:
Frontend (React / Vite)
- Memory input interface
- Display of emotional metadata
- UI for generating structured JSON
- Rendering + image preview
- Clean storytelling layout
Backend (FastAPI + Python)
- Custom rule-based emotional metadata extractor
- Integration with FIBO’s /v2/image/generate endpoint
- Parsing and storing:
  - structured_prompt
  - seed
  - metadata
- JSON merging + deterministic rendering logic
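The rule-based extractor could be sketched roughly like this; the keyword tables and output fields are assumptions for illustration, not the production rules:

```python
# Minimal rule-based emotional metadata extractor (illustrative sketch).
# The keyword tables and output fields are assumptions, not the production rules.

EMOTION_KEYWORDS = {
    "nostalgia": ["remember", "childhood", "used to", "back then"],
    "warmth": ["caring", "kindness", "hug", "smile"],
}
SETTING_KEYWORDS = ["clinic", "kitchen", "garden", "beach"]

def extract_metadata(memory_text: str) -> dict:
    """Map a free-text memory to emotional tags and settings via keyword lookup."""
    text = memory_text.lower()
    emotions = [
        emotion
        for emotion, words in EMOTION_KEYWORDS.items()
        if any(word in text for word in words)
    ]
    settings = [place for place in SETTING_KEYWORDS if place in text]
    return {"emotions": emotions, "settings": settings}
```

Because the rules are plain lookups with no randomness, the same memory text always yields the same metadata, which keeps the downstream render deterministic.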
FIBO Integration
FIBO does the heavy lifting for structured visual reasoning:
- Converts text into a detailed JSON scene
- Handles lighting, camera angle, objects, mood, composition
- Returns a deterministic seed for exact recreation
- Produces cinematic, emotionally aligned visuals

This allowed me to focus on building a meaningful user experience around it.
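A minimal sketch of what the backend call might look like, assuming bearer-token auth and a payload carrying the structured prompt plus an optional seed. The host URL and payload field names are assumptions; only the `/v2/image/generate` path comes from the integration described above:

```python
from typing import Optional

# Host is a placeholder; only the /v2/image/generate path comes from our integration.
FIBO_URL = "https://fibo.example.com/v2/image/generate"

def build_payload(structured_prompt: dict, seed: Optional[int] = None) -> dict:
    """Assemble the request body. Field names are assumptions for illustration."""
    payload = {"prompt": structured_prompt}
    if seed is not None:
        payload["seed"] = seed  # same structured_prompt + seed => same image
    return payload

def generate_image(structured_prompt: dict, token: str, seed: Optional[int] = None) -> dict:
    import httpx  # imported lazily so the payload helper stays dependency-free

    response = httpx.post(
        FIBO_URL,
        json=build_payload(structured_prompt, seed),
        headers={"Authorization": f"Bearer {token}"},
        timeout=60.0,
    )
    response.raise_for_status()
    return response.json()
```

Keeping payload assembly separate from the HTTP call makes the deterministic part (structured prompt + seed) easy to store and replay later.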
Challenges we ran into
Building LifeLens came with several meaningful challenges. One of the earliest hurdles was getting the FIBO endpoints working correctly: the host URLs, authentication tokens, and sync flows all required careful debugging, especially when moving from placeholder URLs to live production endpoints. Handling FIBO’s structured JSON was another challenge, since the API returns the prompt as a serialized JSON string; parsing, validating, and storing it consistently took additional engineering effort. Designing deterministic generation was equally complex: guaranteeing that the same structured_prompt and seed always produce the same image meant building a reliable pipeline free of hidden randomness. Beyond the technical challenges, balancing emotion with technology demanded thoughtful design. LifeLens needed to feel emotionally intuitive while still being technically precise, and finding that balance took multiple iterations. Finally, crafting an experience rather than just an app required extra care, from the UI layout to the pacing of the workflow, to help users truly “see” the structure of their memories and connect with the moments they were reconstructing.
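As one illustration of the serialized-prompt issue, a defensive parser might look like this; the response key names are assumptions for the sketch:

```python
import json

def parse_structured_prompt(api_response: dict) -> dict:
    """Decode the structured prompt, which arrives as a serialized JSON string.

    The response key names here are assumptions for illustration.
    """
    raw = api_response.get("structured_prompt")
    if not isinstance(raw, str):
        raise ValueError("expected structured_prompt to be a JSON string")
    scene = json.loads(raw)  # raises json.JSONDecodeError on malformed input
    if not isinstance(scene, dict):
        raise ValueError("structured_prompt must decode to a JSON object")
    return scene
```

Validating at the boundary like this means the rest of the pipeline can trust that every stored prompt is a well-formed JSON object, which in turn keeps seed-based re-rendering reliable.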
Accomplishments that we're proud of
We’re incredibly proud of how LifeLens evolved from a personal memory into a full, end-to-end storytelling system powered by FIBO. One of our biggest accomplishments was successfully integrating FIBO’s /v2/image/generate API and building a deterministic pipeline that transforms emotional memories into structured JSON prompts and, ultimately, cinematic visual scenes. We also created an experience that feels genuinely meaningful, a tool that honors memories rather than just generating images. Designing a UI that transparently displays emotional metadata, the structured prompt, and the final render all in one place was another major milestone, allowing users to see exactly how a memory becomes a visual blueprint. Most importantly, we’re proud that LifeLens demonstrates how AI can support human remembrance, healing, and storytelling in a way that feels personal, respectful, and technologically robust.
What we learned
Throughout building LifeLens, we learned how powerful structured generation can be compared to traditional prompt engineering. FIBO’s JSON-based approach taught us how visual scenes become predictable, interpretable, and controllable when expressed through structured data. We also came to appreciate the importance of determinism: understanding how seeds allow the same memory to be recreated reliably opened the door to versioning, refinement, and professional workflows. On the emotional side, we discovered how meaningful it is to extract metadata (nostalgia, warmth, subjects, and settings) from user memories, and how this enriches the final visual output. Above all, we learned that AI isn't just a tool for creativity; it can also serve as a gentle bridge between human emotion and technology, helping us reconnect with stories that matter deeply.
What's next for LifeLens
Looking ahead, we envision LifeLens growing into a full storytelling platform. We plan to expand it with a memory library where users can save and revisit visualized memories over time, as well as editing tools that let users tweak lighting, camera angles, and composition directly through JSON controls. We want to explore voice-based memory capture, turning spoken reflections into structured scenes, and create exportable “memory albums” that compile multiple generated moments into videos or digital books. There is also significant potential for LifeLens to support grief therapy, legacy preservation, documentary previsualization, and family history storytelling. On the technical side, we aim to introduce advanced JSON editing for power users and broaden integration into professional creative workflows. LifeLens has the potential to become much more than an app; it can grow into a compassionate, structured, and scalable way for people to preserve the memories they never want to lose.
Built With
- api
- fastapi
- fibo
- httpx
- javascript
- pydantic
- python
- react
- uvicorn
- vite
