Inspiration

People generate large amounts of personal data every day—screenshots, voice notes, messages, documents, and journals—but this data remains fragmented and unused. I wanted to explore whether a single AI system could interpret these scattered pieces of a life together and extract meaningful personal insights.

Life Replay was inspired by the idea of using Gemini’s multimodal reasoning to analyze real-life data and help users better understand patterns in their habits, emotions, and experiences.


What it does

Life Replay is a multimodal personal AI analyst built entirely using Google AI Studio. Users can upload photos, voice notes, chat exports, PDFs, and text files. The app analyzes this data using Gemini to generate structured life insights such as behavioral patterns, emotional trends, and reflective observations.

Rather than acting as a conversational chatbot, Life Replay focuses on insight generation—helping users reflect on their life data in a meaningful way.
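To make "structured life insights" concrete, here is a hypothetical shape such an insight might take. This is an illustrative sketch only; the `LifeInsight` type and field names are assumptions, not the app's actual output format.

```typescript
// Hypothetical shape for a generated insight (illustrative only; the
// app's real output format is not shown in this writeup).
interface LifeInsight {
  kind: "behavioral" | "emotional" | "reflective";
  pattern: string; // the observed pattern, in plain language
  evidence: string[]; // which uploaded items support it
  timeframe?: string; // optional period the pattern spans
}

const example: LifeInsight = {
  kind: "behavioral",
  pattern: "Late-night screen time spikes before stressful workdays.",
  evidence: ["screenshot_2024-03-11.png", "voice_note_03-12.m4a"],
  timeframe: "March 2024",
};

console.log(example.kind); // "behavioral"
```

Typing the output this way also makes it straightforward for a React frontend to render each insight consistently.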


How I built it

The entire AI logic of Life Replay is implemented using Google AI Studio and Gemini models. Google AI Studio was used to design, test, and refine multimodal prompts that allow Gemini to reason across images, audio, and text within a single context.

Gemini is used to:

  • Interpret screenshots and images
  • Understand voice notes and audio reflections
  • Analyze text from messages, journals, and documents
  • Combine insights across multiple modalities and timeframes

Google AI Studio enabled rapid iteration of prompts and responses, making it possible to build and validate the core intelligence of the app efficiently.
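The multimodal flow above can be sketched roughly as follows. This is not the project's actual code: the `buildLifeReplayRequest` helper is hypothetical, and the object shapes follow the Gemini API's content/part convention (`text` parts and base64 `inlineData` parts with a MIME type), which is what lets images, audio, and text share one context.

```typescript
// Illustrative sketch: each part is either plain text or base64-encoded
// inline data with a MIME type, per the Gemini API's part convention.
type Part =
  | { text: string }
  | { inlineData: { mimeType: string; data: string } };

interface GeminiRequest {
  contents: { role: "user"; parts: Part[] }[];
}

// Hypothetical helper: combine a screenshot, a voice note, and journal
// text into one multimodal request so the model can reason across them.
function buildLifeReplayRequest(
  screenshotB64: string,
  voiceNoteB64: string,
  journalText: string
): GeminiRequest {
  return {
    contents: [
      {
        role: "user",
        parts: [
          {
            text:
              "Analyze the following life data together and report " +
              "behavioral patterns, emotional trends, and reflective observations.",
          },
          { inlineData: { mimeType: "image/png", data: screenshotB64 } },
          { inlineData: { mimeType: "audio/mp3", data: voiceNoteB64 } },
          { text: `Journal entry:\n${journalText}` },
        ],
      },
    ],
  };
}

const req = buildLifeReplayRequest(
  "<base64 png>",
  "<base64 mp3>",
  "Slept badly, skipped the gym again."
);
console.log(req.contents[0].parts.length); // 4 parts in a single context
```

Packing all modalities into one `parts` array, rather than issuing separate per-file calls, is what allows cross-modal and cross-timeframe reasoning in a single pass.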


Challenges and learnings

One of the main challenges was guiding Gemini to generate deep insights instead of surface-level summaries. This required careful prompt design and experimentation within Google AI Studio.
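The kind of prompt shaping this involved can be sketched as below. The wording is illustrative, not the exact prompt used in the project; the idea is to force depth by demanding evidence, cross-modal links, and a capped number of insights instead of an open-ended "summarize this" request.

```typescript
// Illustrative prompt (not the project's exact prompt): constraints like
// evidence citations and an insight cap steer the model away from
// surface-level summaries toward pattern-level observations.
const INSIGHT_PROMPT = [
  "You are a reflective personal analyst, not a summarizer.",
  "From the attached images, audio, and text, produce at most 5 insights.",
  "For each insight:",
  "- Name the pattern (behavioral or emotional).",
  "- Cite the specific items that support it, across modalities.",
  "- State what changed over time, if anything.",
  "Do not restate content; only report patterns the user may not have noticed.",
].join("\n");

console.log(INSIGHT_PROMPT.split("\n").length); // 7 lines
```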

Another challenge was maintaining clarity and privacy when working with personal data. Through this project, I learned how to structure multimodal prompts responsibly and how powerful Gemini can be when reasoning across diverse data types.

This project significantly improved my understanding of multimodal AI, prompt engineering, and building real-world applications using Google AI Studio.


Impact and future scope

Life Replay demonstrates how Google AI Studio and Gemini can be used to build reflective, user-centric AI applications. In the future, this approach could be expanded to support long-term personal knowledge systems, well-being insights, and productivity analysis—while keeping user data private and controlled.

Built With

  • gemini-api
  • gemini-multimodal-models-(gemini-flash)
  • google-ai-studio
  • prompt
  • react
  • typescript
  • web-apis