Inspiration
We live in an age of "write-only" memories. People journal their thoughts, stresses, and wins, but they rarely look back to find patterns. We realized that traditional journaling is static—it listens, but it doesn't speak back. We wanted to create a mirror that reflects not just your face, but your emotional trajectory. We asked: "What if you could talk to a wiser version of yourself who remembers everything you've ever felt?"
What it does
Memory Mirror is a "Thin-Shell" application where Google Gemini acts as the core operating system. It performs three key functions:
1. Smart Logging: It accepts raw thoughts (text) and uses Gemini to structure them into JSON, automatically tagging emotions and generating summaries.
2. Emotional Analytics: It visualizes your mental health trends over time on a dynamic dashboard, helping you spot burnout or happiness triggers.
3. The "Future Self" Chat: The flagship feature. It uses your entire memory history as context to simulate a conversation with your "Future Self" from 5 years in the future. This persona offers advice, empathy, and reassurance by citing specific past events you logged.
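As a rough sketch of the Smart Logging step (field names here are illustrative, not a fixed schema), a raw thought becomes a structured entry like this:

```python
import json

# A raw journal thought typed by the user.
RAW_THOUGHT = "Crunch week again. Shipped the demo but I'm running on fumes."

# Example of what Gemini is prompted to return: strict JSON with a date,
# a short summary, and tagged emotions with intensities (shape assumed).
gemini_output = """
{
  "date": "2024-12-13",
  "summary": "Shipped the demo during a crunch week; feeling depleted.",
  "emotions": [
    {"label": "stress", "intensity": 0.8},
    {"label": "pride", "intensity": 0.6}
  ]
}
"""

# Parsing this JSON is all the frontend needs to do before storing it.
entry = json.loads(gemini_output)
print(entry["summary"])
print([e["label"] for e in entry["emotions"]])
```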
How we built it
We followed a "Thin-Shell, Thick-AI" architecture:
Frontend: Built with Python and Streamlit for a responsive, neon-glassmorphism UI.
The Brain (Track 1): We used Google Gemini 1.5 Flash. The app relies on Gemini for 90% of its functionality—from parsing unstructured text into strict JSON for our database to role-playing the empathetic "Future Self."
Data Handling: We used Pandas for real-time analytics and a lightweight JSON storage system for persistence.
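The analytics side is a short Pandas pipeline; a minimal sketch (entry shape is our assumption) that turns structured logs into per-emotion counts ready for a bar chart:

```python
import pandas as pd

# Structured entries as produced by the Smart Logging step (shape assumed).
entries = [
    {"date": "2024-12-12", "emotions": ["stress", "fatigue"]},
    {"date": "2024-12-13", "emotions": ["stress"]},
    {"date": "2024-12-14", "emotions": ["relief", "pride"]},
]

# One row per (date, emotion) pair.
df = pd.DataFrame(entries).explode("emotions")
df["date"] = pd.to_datetime(df["date"])

# Counts per emotion; this Series can feed a Streamlit bar chart directly.
counts = df["emotions"].value_counts().sort_index()
print(counts.to_dict())  # {'fatigue': 1, 'pride': 1, 'relief': 1, 'stress': 2}
```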
Prompt Engineering: We utilized advanced system prompting to force Gemini to adopt a specific "2030 Persona," ensuring it cites real dates and emotions from the user's history rather than giving generic advice.
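The context injection behind the "2030 Persona" is plain string assembly; a minimal sketch (function and field names are ours) of folding past entries into the system prompt:

```python
def build_future_self_prompt(entries, horizon_years=5):
    """Assemble a system prompt that grounds the persona in real logs.

    `entries` are structured logs from the Smart Logging step
    (shape assumed for illustration).
    """
    memory_lines = "\n".join(
        f"- {e['date']}: {e['summary']} (felt: {', '.join(e['emotions'])})"
        for e in entries
    )
    return (
        f"You are the user's Future Self, writing from {horizon_years} years ahead.\n"
        "Speak warmly and nostalgically. When you give advice, cite specific\n"
        "dates and emotions from the memories below; never invent events.\n\n"
        f"MEMORIES:\n{memory_lines}"
    )

prompt = build_future_self_prompt([
    {"date": "2024-12-13", "summary": "Shipped the demo exhausted.",
     "emotions": ["stress", "pride"]},
])
print("2024-12-13" in prompt)  # True
```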
Challenges we ran into
Structured Output: Getting the LLM to consistently return valid JSON for the dashboard without breaking the frontend was difficult. We had to refine our prompts to enforce strict formatting and reject anything the parser couldn't handle.
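Besides tightening the prompt, a small defensive parser helps when the model wraps its JSON in markdown fences or stray prose; a sketch of the kind of fallback we'd use (helper name is ours):

```python
import json
import re

def parse_llm_json(text):
    """Parse model output as JSON, tolerating ```json fences and stray prose."""
    # Strip a markdown code fence if present.
    fenced = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the first {...} span in the reply.
        brace = re.search(r"\{.*\}", text, re.DOTALL)
        if brace:
            return json.loads(brace.group(0))
        raise

reply = 'Sure! Here is the entry:\n```json\n{"emotion": "stress"}\n```'
print(parse_llm_json(reply))  # {'emotion': 'stress'}
```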
API Quotas: We hit "limit: 0" quota errors with the Gemini 2.0 Flash model and had to pivot quickly to gemini-1.5-flash to keep the demo stable.
Humanizing the AI: Initially, the "Future Self" sounded robotic. We had to tune the temperature and system instructions until it sounded nostalgic and genuinely empathetic.
Accomplishments that we're proud of
Deep Context Awareness: The moment we got the "Future Self" to say, "I remember you were stressed on Dec 13th," was magical. It proved the context injection was working perfectly.
Real-Time Visualization: Watching a user's text input instantly convert into a bar chart of emotions, with no manual tagging.
What we learned
What's next for Memory Mirror
Voice Integration: Allowing users to talk to their future self via audio.
Vector Database: Migrating from JSON to a Vector DB (like MongoDB Atlas Vector Search) to handle years of memories efficiently.
Therapist Handoff: Detecting crisis keywords and gently suggesting professional help when necessary.
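For the therapist handoff, a rough first pass could be simple keyword screening before any model-based detection; a sketch under that assumption (term list and function name are ours, and keyword matching alone would be far too noisy for production):

```python
# Illustrative phrase list only; a real system would pair model-based
# classification with human review, not bare keyword matching.
CRISIS_TERMS = {"hopeless", "self-harm", "can't go on", "suicidal"}

def needs_gentle_handoff(text):
    """Flag entries containing crisis phrases for a gentle resources prompt."""
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

print(needs_gentle_handoff("Feeling hopeless tonight"))  # True
print(needs_gentle_handoff("Great day at the gym"))      # False
```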