Inspiration
I'm an AI engineer. RAG pipelines, Python, SQL — that's my world. I have never built a mobile app. Not once.
Then I read Gabby Beckford's creator brief: "There's still this massive gap between inspiration and action. They stay stuck waiting for permission." And I realized I'd seen this exact problem — not just in her audience, but in myself. I've made vision boards. I've saved Pinterest folders. Nothing ever happened next.
So I decided to build the thing that should have existed all along: a vision board that doesn't just hold your dreams. It moves with you.
What it does
Dream Self is an iOS vision board app where every dream comes with a plan. You pick a photo, name your vision, write why it matters, and define your first actionable step — all in under 60 seconds.
The app tracks your momentum with a daily streak system (complete with freeze tokens so one missed day doesn't erase your progress). It celebrates achievement with a full confetti-and-reflection ceremony when you move a vision from Dreaming to Living. And it uses Google's Gemini AI to generate personalized images of you living your dream — turning abstract goals into something you can actually see.
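The freeze-token mechanic described above can be sketched as a small pure function. This is a hypothetical illustration, not the app's actual implementation; the `StreakState` shape and `advanceStreak` name are assumptions.

```typescript
// Hypothetical sketch of a streak counter with freeze tokens.
type StreakState = { streak: number; freezeTokens: number };

// daysSinceLast: days since the last completed day (1 = consecutive, 2 = one missed day).
function advanceStreak(state: StreakState, daysSinceLast: number): StreakState {
  if (daysSinceLast <= 1) {
    // Same day or the next day: the streak simply continues.
    return { ...state, streak: state.streak + (daysSinceLast === 1 ? 1 : 0) };
  }
  const missed = daysSinceLast - 1;
  if (missed <= state.freezeTokens) {
    // Spend one freeze token per missed day; the streak survives.
    return { streak: state.streak + 1, freezeTokens: state.freezeTokens - missed };
  }
  // Too many missed days: streak resets to 1 for today's completion.
  return { streak: 1, freezeTokens: state.freezeTokens };
}
```

The key design point is that a missed day consumes a token instead of zeroing the count, so one bad day doesn't erase weeks of momentum.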
Every screen is written against an 832-line warm copy guide. The app doesn't say "Task completed." It says "You're making it happen." It doesn't say "No items found." It says "Every big life started with a single dream."
How I built it
Solo. From zero mobile experience. Claude Code was my co-pilot for the entire build.
The stack: Expo SDK 54, React Native, TypeScript, Supabase (auth + edge functions), RevenueCat (subscriptions), MMKV (local-first storage), Zustand (state), React Query (server state), and Reanimated v4 (animations).
AI generation runs through a Supabase Edge Function that calls Gemini 3 Pro with two parallel prompt variations per generation. Credits are tracked server-side in Postgres. Rate limiting, safety filters, and error-specific warm messages are all built in.
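The two-variations-per-generation fan-out could look something like the sketch below. The prompt suffixes and the `generateImage` callback are stand-ins for the real Gemini call inside the Edge Function; this is an assumed shape, not the actual code.

```typescript
// Hypothetical sketch: fan out two prompt variations per generation request
// and keep whichever succeed. `generateImage` stands in for the real Gemini call.
type Generator = (prompt: string) => Promise<string>;

async function generateVariations(
  basePrompt: string,
  generateImage: Generator,
): Promise<string[]> {
  // Two stylistic variations of the same vision, run in parallel.
  const prompts = [
    `${basePrompt}, photorealistic, natural light`,
    `${basePrompt}, cinematic, golden hour`,
  ];
  const results = await Promise.allSettled(prompts.map(generateImage));
  // Keep the successful images; one failed variation (e.g. a safety
  // filter rejection) doesn't fail the whole request.
  return results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
    .map((r) => r.value);
}
```

Using `Promise.allSettled` rather than `Promise.all` is what lets a single blocked variation degrade gracefully instead of surfacing an error to the user.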
Auth supports Apple Sign-In (native) and Google OAuth (browser PKCE). Three-layer storage: encrypted JWT in expo-secure-store, session metadata in MMKV, UI state in Zustand.
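The three-layer split might be sketched like this. In-memory maps stand in for expo-secure-store and MMKV so the example is self-contained; the key names and `persistSession` helper are illustrative assumptions.

```typescript
// Hypothetical sketch of the three-layer session split: the sensitive token
// goes to the secure layer, fast-access metadata to the cache layer.
// In-memory Maps stand in for expo-secure-store and MMKV here.
interface KVStore {
  set(key: string, value: string): void;
  get(key: string): string | undefined;
}

const mapStore = (): KVStore => {
  const m = new Map<string, string>();
  return { set: (k, v) => void m.set(k, v), get: (k) => m.get(k) };
};

const secureStore = mapStore(); // stand-in for expo-secure-store (encrypted JWT)
const mmkv = mapStore();        // stand-in for MMKV (session metadata)

function persistSession(jwt: string, meta: { userId: string; expiresAt: number }) {
  secureStore.set("session.jwt", jwt);            // layer 1: encrypted at rest
  mmkv.set("session.meta", JSON.stringify(meta)); // layer 2: fast local metadata
  // Layer 3 (ephemeral UI state) lives in a Zustand store and is never persisted.
}
```

The point of the split is that only the JWT pays the cost of encrypted storage, while frequently read metadata stays in fast synchronous storage.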
Challenges I ran into
Everything was a challenge — I'd never touched React Native before this hackathon. The steepest learning curves were gesture handling (making swipe-to-dismiss and pull-to-reveal feel native), the OAuth flow (Apple's native path and Google's browser PKCE flow demand completely different architectures), and getting Reanimated v4 spring animations to feel right across dozens of micro-interactions.
The warm copy was its own challenge. Writing 832 lines of voice-consistent copy across every screen, error state, and edge case took as long as building some of the features.
Accomplishments that I'm proud of
The app is 95% feature-complete — built in weeks, not months, by someone who had never opened Xcode before. It has real AI generation, real subscription infrastructure, real auth, and a design system that holds together across 15+ screens. The celebration flow — where a vision moves from Dreaming to Living with confetti, reflection, and journey stats — is the moment I'm most proud of. It feels like something worth opening.
What I learned
AI tools don't just make you faster at what you already know. They give you access to platforms you never had before. I went from zero React Native knowledge to shipping a production-grade iOS app because Claude Code could bridge the gap between what I understood (systems thinking, architecture, data flow) and what I didn't (JSX, native modules, gesture handlers).
What's next for Dream Self
Smart daily notifications (the last 5% to TestFlight), iOS home screen and lock screen widgets, Apple Watch integration for glanceable streaks, and a deeper gamification layer with XP, badges, and community challenges.
Built With
- claude
- expo.io
- gemini
- react-native
- reanimated
- revenuecat
- supabase
- typescript
- zod
- zustand