Memento

Your calendar shows you how busy you are. Memento finds the gaps and nudges you to capture what's beautiful in between. By night, AI turns your ordinary day into a story worth keeping.


Inspiration

Open your camera roll and scroll through the last month. You'll find concerts, birthdays, maybe a vacation. Now try to find last Tuesday. What did you eat for lunch? Who were you walking with between classes? What did the Slope look like at 4pm?

You don't remember. Nobody does.

We capture the grand and let the ordinary disappear — not because ordinary moments aren't meaningful, but because they don't feel worth capturing in the moment. It's only months or years later, when you're scrolling back, that you realize: those were the moments. The late night in Gates after a brutal problem set. The random golden hour on the Slope. The post-class coffee that turned into a two-hour conversation. Gone, because they felt too normal to photograph.

Dario Amodei wrote about AI helping people find meaning in their lives. We believe meaning doesn't only live in milestones — it hides in the ordinary moments we're too busy to notice. Memento is the nudge to notice them.


What it does

Memento connects to your Google Calendar, reads your schedule, and finds the in-between — the 30 minutes between lectures, the walk to the library, the gap before dinner. Instead of showing you how packed your day is, it fills those gaps with hyper-specific, context-aware prompts to capture a small moment.

These aren't generic "take a photo" reminders. Memento knows where you'll be (from your calendar), what time it is, and what the context looks like — and it gives you a nudge that feels like a friend texting you:

"You just survived compilers in Gates — grab a victory coffee shot at Temple of Zeus, it's 2 minutes away."

"The Slope at golden hour is undefeated. You know the shot. Go get it."

You tap the prompt, snap a photo, add a one-line caption, and you're done in 20 seconds. Back to your day.

By the end of the day, you have 5–6 real, unfiltered moments from your actual life. And here's where AI comes in: Claude reads your captured moments — the timestamps, locations, captions, and the arc of your day — and generates a short narrative. Not a photo grid. A story.

"Your Saturday started slow — coffee at Botanist at 9am, just you and a book. By noon you were at the Farmers Market picking up flowers. The afternoon was quiet, a study grind at Mann. But then golden hour hit and you ended up at the Slope, watching the sun drop behind the lake. Not a bad day."

That's a Tuesday you'll actually remember.

How the prompt matching works

Given a user's context, Memento selects the best nudge through a layered filter:

  1. Location match — if the user's calendar event has a location, match to the nearest prompt zone (\(d < r_{geofence}\))
  2. Time filter — match prompts tagged for the current time of day (morning, golden hour, night, etc.)
  3. Weather filter — match prompts to current conditions (sunny, snowy, rainy)
  4. Dedup — suppress prompts shown in the last 7 days
  5. Fallback — if no location matches, serve time-based generic prompts
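The layered filter above can be sketched as a single selection function. This is a minimal sketch under assumed shapes — `Prompt`, `Context`, `haversineKm`, and `rGeofenceKm` are illustrative names, not the app's real API — but the filter order matches the five steps.

```typescript
// Sketch of the layered prompt filter. Shapes and names are illustrative.

type Prompt = {
  id: string;
  lat?: number;
  lon?: number;
  timeTags: string[];    // e.g. ["morning", "golden-hour"]
  weatherTags: string[]; // e.g. ["sunny", "snowy"]; empty = any weather
  generic: boolean;      // time-based fallback prompt with no location
};

type Context = {
  lat?: number;
  lon?: number;
  timeOfDay: string;
  weather: string;
  recentlyShown: Set<string>; // prompt ids shown in the last 7 days
};

const rGeofenceKm = 0.3; // illustrative geofence radius

// Great-circle distance between two points, in kilometers.
function haversineKm(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * 6371 * Math.asin(Math.sqrt(h));
}

function selectPrompt(bank: Prompt[], ctx: Context): Prompt | undefined {
  // Steps 2-4: time filter, weather filter, and 7-day dedup.
  const eligible = bank.filter(
    (p) =>
      p.timeTags.includes(ctx.timeOfDay) &&
      (p.weatherTags.length === 0 || p.weatherTags.includes(ctx.weather)) &&
      !ctx.recentlyShown.has(p.id)
  );
  // Step 1: location match — nearest prompt zone inside the geofence.
  if (ctx.lat !== undefined && ctx.lon !== undefined) {
    const located = eligible
      .filter((p) => p.lat !== undefined && p.lon !== undefined)
      .map((p) => ({ p, d: haversineKm(ctx.lat!, ctx.lon!, p.lat!, p.lon!) }))
      .filter(({ d }) => d < rGeofenceKm)
      .sort((a, b) => a.d - b.d);
    if (located.length > 0) return located[0].p;
  }
  // Step 5: fallback — time-based generic prompts.
  return eligible.find((p) => p.generic);
}
```

Because the time/weather/dedup filters run before the location match, a stale or off-season prompt can never win just by being nearby.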

For the hackathon, we seeded 142 hand-curated prompts across 34 Cornell and Ithaca locations, each with GPS coordinates and contextual tags.


How we built it

| Layer      | Technology              | Purpose                                     |
|------------|-------------------------|---------------------------------------------|
| Framework  | Next.js 14 (App Router) | Mobile-first web app with SSR               |
| Auth       | Clerk                   | Google OAuth + calendar permissions         |
| Calendar   | Google Calendar API     | Read schedule, detect gaps \(\geq 15\) min  |
| Database   | Supabase (Postgres)     | Store moments, prompts, user data           |
| Storage    | Supabase Storage        | Photo uploads                               |
| AI         | Claude API (Sonnet)     | Recap narration + dynamic prompt generation |
| Styling    | Tailwind CSS            | Rapid UI iteration                          |
| Deployment | Vercel                  | Instant deploys, shareable demo link        |

Core architecture

The timeline engine pulls today's calendar events via the Google Calendar API, identifies gaps of \(\geq 15\) minutes between events, and matches each gap to a contextual prompt from the curated bank. The matching function takes a context vector — \((\text{lat}, \text{lon}, t, w, a)\) for location, time, weather, and activity — and filters the prompt bank for the best candidate.
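The gap-detection step can be sketched as a pure function over the day's events. The field names here are simplified for illustration, not the Google Calendar API's exact response shape, but the logic is the same: sort by start time, then keep any free window of at least 15 minutes.

```typescript
// Find free gaps of >= 15 minutes between consecutive calendar events.
// Times are ISO 8601 strings; field names are simplified for illustration.

type CalEvent = { summary: string; start: string; end: string };
type Gap = { start: string; end: string; minutes: number };

const MIN_GAP_MINUTES = 15;

function findGaps(events: CalEvent[]): Gap[] {
  const sorted = [...events].sort(
    (a, b) => Date.parse(a.start) - Date.parse(b.start)
  );
  const gaps: Gap[] = [];
  for (let i = 0; i < sorted.length - 1; i++) {
    const endMs = Date.parse(sorted[i].end);
    const nextStartMs = Date.parse(sorted[i + 1].start);
    const minutes = (nextStartMs - endMs) / 60000;
    if (minutes >= MIN_GAP_MINUTES) {
      gaps.push({ start: sorted[i].end, end: sorted[i + 1].start, minutes });
    }
  }
  return gaps;
}
```

Each returned gap then becomes one candidate slot for a contextual prompt.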

The Claude AI layer operates in two modes:

  • Recap narration — at end-of-day, we send the user's captured moments (timestamps, captions, locations) to Claude with a system prompt requesting a warm, reflective 3–4 sentence narrative
  • Dynamic prompt generation (proof-of-concept) — given calendar context + location + time, Claude generates a nudge as specific as our curated ones, demonstrating how the system scales beyond Cornell
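For the recap mode, the interesting part is the payload: a sorted list of moments plus a system prompt that pins the tone. Here is a minimal sketch of that message construction — the `Moment` fields and the exact system-prompt wording are illustrative, and the actual network call to the Anthropic Messages API is left out.

```typescript
// Build the user message for the end-of-day recap call.
// Field names and prompt wording are illustrative, not the production values.

type Moment = { time: string; location: string; caption: string };

const RECAP_SYSTEM_PROMPT =
  "You narrate a person's day from their captured moments. " +
  "Write a warm, reflective 3-4 sentence story. No bullet points.";

function buildRecapMessage(moments: Moment[]): string {
  const lines = moments
    .slice()
    .sort((a, b) => a.time.localeCompare(b.time)) // chronological order
    .map((m) => `- ${m.time} at ${m.location}: "${m.caption}"`);
  return `Here are today's captured moments, in order:\n${lines.join("\n")}`;
}
```

Sorting before formatting matters: the model narrates an arc, so the moments must arrive in the order the day actually happened.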

Challenges we ran into

Calendar location data is messy. Most students don't add locations to their events — they write "CS 4120" not "CS 4120, Gates Hall Room 114." We built a lightweight mapping layer that associates common Cornell course prefixes and building names with GPS coordinates, and falls back to time-based prompts when location inference fails.
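The mapping layer amounts to a keyword table keyed on course prefixes and building names, returning nothing when no hint matches so the caller can fall back to time-based prompts. A minimal sketch, with illustrative entries and approximate coordinates:

```typescript
// Infer coordinates from an event title like "CS 4120" or "Study at Mann".
// Entries and coordinates are illustrative; a miss returns undefined and
// the caller falls back to time-based prompts.

type Coords = { lat: number; lon: number };

const LOCATION_HINTS: Record<string, Coords> = {
  cs: { lat: 42.445, lon: -76.481 },     // CS course prefix -> Gates Hall
  gates: { lat: 42.445, lon: -76.481 },
  mann: { lat: 42.4488, lon: -76.4763 }, // Mann Library
};

function inferLocation(eventTitle: string): Coords | undefined {
  const tokens = eventTitle.toLowerCase().split(/[^a-z]+/).filter(Boolean);
  for (const token of tokens) {
    if (token in LOCATION_HINTS) return LOCATION_HINTS[token];
  }
  return undefined;
}
```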

Prompt tone was harder than the code. Our first drafts were too polite and app-like — "Would you like to capture this moment?" — and felt ignorable. We iterated toward a more direct, friend-texting-you voice — "The Slope at golden hour is undefeated. Go get it." — which felt much more compelling. Prompt copywriting turned out to be as important as the technical architecture.

Balancing AI response time with UX. We didn't want users staring at a loading spinner while their day's story generated, so we implemented a streaming approach that reveals the narrative line by line — which actually made the experience feel more intentional and reflective.
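One way to sketch the line-by-line reveal is an async generator that buffers streamed text and only yields complete lines. The `chunks` source here is a stand-in for the streamed model response; the buffering logic is the point, not the transport.

```typescript
// Reveal a streamed narrative one full line at a time. `chunks` stands in
// for the token stream from the model; partial lines are buffered until
// a newline arrives, and the last partial line is flushed at the end.

async function* revealLines(
  chunks: AsyncIterable<string>
): AsyncGenerator<string> {
  let buffer = "";
  for await (const chunk of chunks) {
    buffer += chunk;
    let newlineAt: number;
    while ((newlineAt = buffer.indexOf("\n")) !== -1) {
      yield buffer.slice(0, newlineAt);
      buffer = buffer.slice(newlineAt + 1);
    }
  }
  if (buffer.length > 0) yield buffer; // flush the final partial line
}
```

The UI can then animate each yielded line as it arrives instead of waiting for the full narrative.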


Accomplishments that we're proud of

  • [x] Complete end-to-end flow in under 7 hours — from Google Calendar OAuth to contextual prompt matching to photo capture to AI-generated daily narrative
  • [x] 142 hand-curated prompts across 34 Cornell and Ithaca locations, each with GPS coordinates, time/weather/activity context, and a specific capture style
  • [x] Claude-powered recap narration that transforms a photo grid into a story of your day with a single API call
  • [x] The anti-calendar framing — "your calendar shows you how busy you are, we show you what's beautiful in between" — resonated with everyone we showed it to during the hackathon

The prompt bank is something we're genuinely proud of. Every prompt is something we'd actually want to receive:

"Find the most chaotic whiteboard in Gates right now — future you will laugh at what was on it."

"Point at the Cascadilla Gorge sign. Classic trail selfie. You're doing it."

"Friday night at Fuertes Observatory — you're looking through a telescope that's been showing people the stars for 50+ years. Photograph the setup."

These feel like messages from a friend, not notifications from an app.


What we learned

The AI recap elevated everything. Without it, Memento is a nice nudge-and-capture tool. With it, your day becomes a story — and stories create emotional attachment in a way that photo grids don't. A single Claude API call transformed the end-of-day experience from "here are your photos" to "here's who you were today."

Specificity is the product. Generic prompts get ignored. Location-specific, time-aware, context-rich prompts get tapped. This has direct implications for the RLHF pipeline we'd build next: the signal isn't just whether a prompt got captured, it's how specific the captured prompt was compared to the one that got dismissed.

Building with the Claude API felt natural for this use case. Giving Claude a set of moments with metadata and asking for a narrative isn't summarization — it's storytelling with constraints. Claude handled the tone (warm, reflective, not cheesy) better than we expected.


What's next for Memento

Smarter prompts through learning. Every capture and every dismissal is a training signal. We want to build an RLHF pipeline that tracks which prompts resonate with each user and refines selection over time. The curated prompts aren't throwaway work — they're seed training data for a generative system.

Full dynamic prompt generation. Move from Claude as proof-of-concept to Claude as the primary prompt engine. Given any user's \((\text{location}, \text{time}, \text{weather}, \text{calendar context}, \text{capture history})\), generate a personalized nudge on the fly — for any campus, any city, any commute.

Weather and real-time context. Integrate a weather API so prompts adapt in real-time — "The Slope covered in snow hits different" only fires when it's actually snowing. Same for golden hour calculations and seasonal content.

Push notifications. The current demo uses in-app nudges. Real retention requires real push notifications — the nudge needs to meet you where you are, not wait for you to open the app.

Weekly and monthly compilations. Claude narrates your day; the natural extension is weekly recaps and monthly retrospectives. Over time, Memento becomes a living journal that writes itself.

Campus expansion. We start hyperlocal because specificity is the product. "Grab a shot of the gorge" works; "take a photo of nature" doesn't. We'd onboard new campuses one at a time with local prompt curators, then let AI learn and expand from that seed. Every campus has a Slope. Every city has a gorge. The prompts just need to know where they are.


Built for: 2026 Cornell Claude Builders Club Hackathon — Creative Flourishing track

Social Impact: Memento is deliberately not a social platform. No likes, no followers, no public feed. It's a private, reflective tool designed to help people find meaning in their everyday lives without exploiting attention or creating social comparison. Miss a nudge? No guilt, no broken streak. We center human dignity over engagement metrics.

Built With

  • claude
  • cursor