About the Project

What Inspired Us

We all move fast.

Between classes, deadlines, the people we love, and everything in between — days fill up and slip away even faster. Most of us never meant to stop journaling. Life just got full, and somewhere along the way, the blank page started feeling like one more thing to figure out at the end of an already long day.

We lived that. And we started asking why.

The answer wasn't laziness. It was something quieter — the feeling that ordinary life doesn't feel cinematic enough to document. That nothing interesting enough happened today to write about. We believed that feeling was wrong, and worth challenging.

We also had a bigger conviction: that AI models, as their context windows expand toward holding the full arc of a person's life, won't just make us more productive. They'll help us understand ourselves more deeply — recognising patterns across hundreds of days, noticing the things we've stopped noticing about ourselves. Journie is the foundation of that. The daily practice that makes that future possible.


How We Built It

Journie is a React 18 + Vite single-page frontend with a NestJS 10 backend, communicating over a clean REST API. Supabase handles authentication on the client and serves as our database and photo storage layer. Framer Motion 12 drives every transition with a reduced-motion fallback built in from day one.
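The reduced-motion fallback can be sketched as a small pure helper — the names and the transition shape here are illustrative, not the app's actual code; in the browser the flag would come from `window.matchMedia("(prefers-reduced-motion: reduce)").matches`:

```typescript
// Illustrative Framer Motion-style transition config.
interface TransitionConfig {
  duration: number;
  type: "spring" | "tween";
}

// Map the user's reduced-motion preference to a transition:
// collapse animations to near-instant tweens when motion is opted out.
function transitionFor(prefersReducedMotion: boolean): TransitionConfig {
  return prefersReducedMotion
    ? { duration: 0, type: "tween" }
    : { duration: 0.4, type: "spring" };
}
```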

The piece we're most deliberate about is the three-stage AI inference pipeline — tags → writer → memory:

Stage 1 — Tags extraction. When a user triggers journal generation, the first stage uses a lightweight vision model (qwen2.5-vl-3b-instruct via DashScope, with GPT-4o as fallback) to extract semantic tags from each moment's photos. These are persisted to moment_tags and aggregated into daily_tag_summaries. This stage runs fast and cheap because it's doing classification, not narration.
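The aggregation step of Stage 1 might look like the sketch below — `MomentTags`, `DailyTagSummary`, and `aggregateDailyTags` are illustrative names standing in for the `moment_tags` → `daily_tag_summaries` rollup, not the project's actual identifiers:

```typescript
// Per-moment output of the vision tagger (Stage 1 classification).
interface MomentTags {
  momentId: string;
  tags: string[]; // semantic labels, e.g. "coffee", "sunset", "friends"
}

// Day-level rollup that Stage 2 (the writer) consumes.
interface DailyTagSummary {
  date: string;
  tagCounts: Record<string, number>;
}

// Aggregate every moment's tags into one daily summary.
function aggregateDailyTags(date: string, moments: MomentTags[]): DailyTagSummary {
  const tagCounts: Record<string, number> = {};
  for (const moment of moments) {
    for (const tag of moment.tags) {
      tagCounts[tag] = (tagCounts[tag] ?? 0) + 1;
    }
  }
  return { date, tagCounts };
}
```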

Stage 2 — Writer generation. The writer receives everything: the user's persona profile, the aggregated daily tags, recent confirmed journals for voice calibration, long-term user memories, edit diffs from past journals, and the full ordered moments with re-signed photo URLs. It uses qwen-plus — a larger multimodal model — to produce a structured journal: markdown prose, a daily_achievement, a best_photo reference, and hidden Insights used by the system but stripped before display. If the writer fails, the pipeline writes a graceful draft rather than leaving the user stuck on a spinner.
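A sketch of the writer's output handling — the field names follow the prose above, but the exact schema, the stripping helper, and the fallback wording are assumptions:

```typescript
// Illustrative shape of the writer's structured output.
interface JournalDraft {
  markdown: string;
  daily_achievement: string;
  best_photo: string | null; // reference to a moment's photo
  insights: string[];        // hidden: used by the system, never displayed
}

// Strip the hidden Insights before the journal reaches the client.
function toDisplayJournal(draft: JournalDraft): Omit<JournalDraft, "insights"> {
  const { insights, ...display } = draft;
  return display;
}

// If the writer model fails, return a graceful draft instead of
// leaving the user stuck on a spinner.
function gracefulDraft(date: string): JournalDraft {
  return {
    markdown: `A quiet entry for ${date} — your moments are saved and the full journal will follow.`,
    daily_achievement: "You showed up today.",
    best_photo: null,
    insights: [],
  };
}
```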

Stage 3 — Memory processing. After a successful generation, MemoryService runs asynchronously in the background. It distils durable facts from the day's journal into user_memories — the long-term memory store that feeds all future generations. The longer someone uses Journie, the more accurately the writer knows them.
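Stage 3 can be sketched as a fire-and-forget step — here `distilFacts` is a trivial stand-in for the model call and `userMemories` an in-memory stand-in for the `user_memories` store; neither reflects MemoryService's real internals:

```typescript
// Durable fact distilled from a day's journal.
interface UserMemory {
  fact: string;
  sourceDate: string;
}

const userMemories: UserMemory[] = []; // stand-in for the user_memories store

// Hypothetical distillation step: pull out durable statements.
// (In Journie this is a model call, not a string filter.)
function distilFacts(journalMarkdown: string): string[] {
  return journalMarkdown
    .split("\n")
    .filter((line) => line.startsWith("FACT:"))
    .map((line) => line.slice("FACT:".length).trim());
}

// Runs in the background after a successful generation; generation
// never waits on it, mirroring the asynchronous design above.
async function processMemories(date: string, journalMarkdown: string): Promise<void> {
  for (const fact of distilFacts(journalMarkdown)) {
    if (!userMemories.some((m) => m.fact === fact)) {
      userMemories.push({ fact, sourceDate: date }); // dedupe before storing
    }
  }
}
```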

The AI client (AiClientService) prefers DashScope when DASHSCOPE_API_KEY is configured, and falls back deterministically to OpenAI GPT-4o. Without either, the pipeline degrades gracefully to a deterministic journal rather than crashing — a design choice that reflects our broader approach: build for resilience before polish.
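The provider-selection order described above reduces to a few lines — the function name and env handling here are illustrative, not AiClientService's actual code:

```typescript
type Provider = "dashscope" | "openai" | "deterministic";

// Deterministic provider selection: DashScope when configured,
// OpenAI GPT-4o as fallback, and a deterministic journal when
// neither key is present — degrade gracefully, never crash.
function pickProvider(env: Record<string, string | undefined>): Provider {
  if (env.DASHSCOPE_API_KEY) return "dashscope";
  if (env.OPENAI_API_KEY) return "openai";
  return "deterministic";
}
```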

Onboarding collects a five-question persona profile — attention_filter, life_chapter, tone_preset, daily_people, and optional additional_context — which feeds the writer prompt as a character it inhabits, not a filter it applies.
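The persona-to-prompt step might look like this sketch — the field names come from the survey above, but the prompt wording and the function itself are assumptions:

```typescript
// The five-question persona profile collected at onboarding.
interface PersonaProfile {
  attention_filter: string;
  life_chapter: string;
  tone_preset: string;
  daily_people: string;
  additional_context?: string;
}

// Fold the persona into the writer's system prompt as a character
// it inhabits, rather than a post-hoc filter over finished prose.
function personaToSystemPrompt(p: PersonaProfile): string {
  const lines = [
    `You are writing as someone in the "${p.life_chapter}" chapter of life.`,
    `They notice ${p.attention_filter} first, and their voice is ${p.tone_preset}.`,
    `The people in their days: ${p.daily_people}.`,
  ];
  if (p.additional_context) {
    lines.push(`Context they shared: ${p.additional_context}`);
  }
  return lines.join("\n");
}
```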


The Challenges We Faced

The technical stack gave us fewer problems than we expected — deliberately. We minimised dependencies and avoided overengineering at every turn so we could move fast without breaking things that mattered.

The real challenge was human.

Designing an onboarding survey that captures enough of a person to meaningfully personalise an AI voice — without feeling like a corporate intake form — required us to think seriously about how people describe themselves, what they're willing to share, and how personality actually manifests in the rhythm of someone's prose. We iterated on this longer than anything else.

Those conversations pushed us deeper into a question we hadn't fully appreciated: how does AI bridge the emotional gap between machine output and human feeling? That's always seemed like a far-fetched aspiration. Getting meaningfully closer to it, even in a 48-hour sprint, is the part of this build we're most proud of.


What We Learned

Building Journie taught us something we didn't expect.

People aren't bad at journaling because they're busy. They mask what they're feeling — to stay functional at work, to show up for the people they love, to protect a sense of self that a hard day might otherwise chip away at. The weight of that accumulates quietly. Journaling has always been one of the few honest spaces where it can go.

What we discovered is that the barrier isn't desire. It's permission — permission to believe the day was worth remembering. Journie tries to give that permission by lowering the cost of the act to almost nothing: a photo, a mood, thirty seconds. The AI handles the rest.

And looking further ahead — at an agent that learns you across months and years, that traces patterns across hundreds of days, that notices the things you've stopped noticing about yourself — that starts to feel less like an app and more like something genuinely new: a form of attention, sustained over time, that has always been hard to find and easy to lose.

Built With

React 18 · Vite · NestJS 10 · Supabase · Framer Motion 12 · DashScope (qwen2.5-vl-3b-instruct, qwen-plus) · OpenAI GPT-4o
