Inspiration

We've both seen how AI tools can shift the way people think, work, and make decisions. But the gap between "this is powerful" and "I actually use it daily" is real. Most tools don't fit naturally into people's routines, and even when they do, they stop at words. They don't drive action.

We believe AI coaching should do more than reframe your thinking. It should help carry the load. The people who'd benefit most are busy professionals navigating real decisions about their careers, health, and relationships, who can't find the time to sit down, organize their thoughts, and build a plan. They need something they can open on their phone, speak into for three minutes, and walk away with clarity.

That's what we're building. Voice-first, mobile-native, calm by design. No typing, no formatting, no friction. The intelligence handles itself: pattern recognition compounds across sessions, insights are anchored to your own values, and every conversation ends with a concrete next step already on your calendar.

We want to build alongside a community that cares about the intersection of AI, design, and personal development, and we want to ship something real, not just a prototype. Undertone is already deeply specced (PRD, technical architecture, design system, engineering execution plan) and ready to push boundaries through action.

What it does

Undertone is a voice-first AI coaching app that turns spoken reflections into real action. You talk; Undertone listens, transcribing in real time via Deepgram; then Claude analyzes what you said against your personal history. The first time you mention a theme, you get a reframe. The second time, Undertone flags an emerging pattern. By the third mention, it generates a deeper insight and a concrete next step. It's not a journal. It's not a chatbot. It's an intelligence layer that watches for the things you keep coming back to and does something about them.
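The escalation policy can be sketched as a small decision function. This is illustrative, not Undertone's actual code: the function name, the 0.8 similarity cutoff, and the tier names are assumptions.

```typescript
// Sketch of the escalation policy: a theme's tier depends on how many
// prior entries matched it via embedding similarity (computed elsewhere,
// e.g. with pgvector). Names and threshold are illustrative.
type Tier = 'reframe' | 'pattern' | 'insight';

export function escalationTier(
  priorSimilarities: number[], // cosine similarity of each prior entry
  threshold = 0.8,             // assumed cutoff for "same theme"
): Tier {
  const priorMentions = priorSimilarities.filter((s) => s >= threshold).length;
  if (priorMentions === 0) return 'reframe'; // first mention
  if (priorMentions === 1) return 'pattern'; // second mention: flag it
  return 'insight';                          // third or later: act on it
}
```

The point of gating on a count rather than reacting to every entry is the restraint described later: actions only appear once a theme has genuinely recurred.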

How we built it

We started by ideating together, aligning on the core insight that AI coaching apps fail when they stop at journaling and never drive real action. From there, we split into parallel ownership streams: one of us owned the base designs and visual identity (the Midnight Gold palette, screen flows, component specs), another owned the technical specification and architecture decisions, and together we tackled scaffolding the monorepo and getting the build pipeline working.

Once the foundation was in place, we shifted to feature-based parallel development. The engineering execution plan was designed around vertical slices: each workstream (auth, recording, AI pipeline, actions, subscriptions) was a self-contained unit that one person could own end-to-end, from database schema to API route to UI screen. This let us build simultaneously without stepping on each other: one person could be wiring up Deepgram WebSocket streaming while another was building the insight generation pipeline or configuring RevenueCat and App Store Connect.

Key technologies under the hood:

  • Expo SDK 54 (React Native) for the mobile app, Vercel Edge Functions for the API, Supabase for auth and Postgres, all in a Turborepo monorepo
  • Deepgram Nova-2 for real-time voice transcription, Claude Sonnet for insight generation, OpenAI embeddings + pgvector for pattern detection across entries
  • RevenueCat for subscriptions, Tamagui v2 for the UI, Drizzle ORM for type-safe database access

The monorepo structure and shared types package made the parallel workflow possible. We could agree on API contracts and Zod schemas upfront, then build both sides independently.
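As an illustration of that workflow, a shared contract might look like the sketch below. The shape and names are hypothetical; in the real app the contract is a Zod schema in the shared types package, but a hand-rolled guard keeps this sketch dependency-free.

```typescript
// Hypothetical shared contract from the types package. Both the Expo app
// and the Vercel API import this, so the two sides can be built in
// parallel against the same shape. (The real app uses Zod here.)
export interface CreateEntryRequest {
  transcript: string;
  durationSeconds: number;
}

// Runtime guard standing in for schema.safeParse().
export function isCreateEntryRequest(v: unknown): v is CreateEntryRequest {
  if (typeof v !== 'object' || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.transcript === 'string' &&
    typeof o.durationSeconds === 'number'
  );
}
```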

Challenges we ran into

  • Scaffolding a monorepo with Expo 54 and React 19 was more painful than expected. We hit peer dependency conflicts immediately: gluestack-ui didn't support React 19.1.0, so we pivoted to Tamagui mid-build. Metro bundler needed custom watchFolders, nodeModulesPaths, and extraNodeModules config to deduplicate React across workspaces. Drizzle ORM pulled in react@19.2.4 as a peer dep, requiring root-level overrides to pin everything to 19.1.0. Babel plugin resolution broke because babel-preset-expo hoisted to the root while expo-router stayed local.
  • RevenueCat + App Store Connect had its own friction. Configuring in-app purchases, linking entitlements, and testing sandbox subscriptions required navigating App Store Connect's notoriously unintuitive UI. Getting TestFlight builds through review meant dealing with missing "Restore Purchases" buttons, permission string wording, and demo account setup — things that aren't obvious until Apple rejects you.
  • Real-time audio streaming to Deepgram over WebSocket from a React Native context required careful state machine design to handle all the edge cases: mic permissions, WebSocket drops, reconnection backoff, and the 15-minute hard limit.
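For reference, the Metro deduplication described above roughly takes this shape in an Expo monorepo's metro.config.js. The paths are illustrative; the exact config depends on workspace layout.

```javascript
// metro.config.js (app workspace) — sketch of the dedupe config,
// assuming the app lives two directories below the monorepo root.
const { getDefaultConfig } = require('expo/metro-config');
const path = require('path');

const projectRoot = __dirname;
const workspaceRoot = path.resolve(projectRoot, '../..');

const config = getDefaultConfig(projectRoot);

// Watch the whole monorepo, not just this app's folder.
config.watchFolders = [workspaceRoot];
// Resolve from the app's node_modules first, then the hoisted root.
config.resolver.nodeModulesPaths = [
  path.resolve(projectRoot, 'node_modules'),
  path.resolve(workspaceRoot, 'node_modules'),
];
// Force a single copy of React across all workspaces.
config.resolver.extraNodeModules = {
  react: path.resolve(workspaceRoot, 'node_modules/react'),
};

module.exports = config;
```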

Accomplishments that we're proud of

We're proud of building an app that works end-to-end: from tapping record, to seeing your words transcribed live, to receiving AI-generated insights grounded in your personal history, to having an action show up in your Apple Calendar. That full loop runs on a real device through TestFlight.

We're also proud of the pattern detection lifecycle. It's not just "AI said something smart": there's a deliberate intelligence design where insights escalate over time based on semantic similarity, and actions are only generated when a theme has genuinely recurred. That restraint makes the app feel trustworthy rather than noisy.

What we learned

We learned a lot about the full lifecycle of shipping a mobile app: from ideation and design through development, App Store setup, and TestFlight distribution. It's one thing to build a feature; it's another to deal with provisioning profiles, entitlements, privacy nutrition labels, and App Store review guidelines. We also got deep into AI pipeline design: how to chain embedding generation, vector similarity search, and LLM synthesis into a reliable sequential pipeline with partial failure handling. Prompt engineering for structured JSON output from Claude, with retry logic and regex fallback parsing, was a skill we developed through iteration. On the design side, we learned how much cohesive visual identity matters. The Midnight Gold palette (dark backgrounds, warm gold accents, Cormorant Garamond typography) gives Undertone a distinct feel that makes the AI interactions feel intentional rather than clinical.
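The regex-fallback parsing mentioned above can be sketched like this. It's a simplified stand-in for our actual helper, and it assumes the payload is a single top-level JSON object (the greedy match grabs from the first brace to the last).

```typescript
// Illustrative helper: extract a JSON object from an LLM reply. The model
// is asked for pure JSON, but replies sometimes arrive wrapped in prose or
// markdown fences, so we fall back to a regex scan before giving up.
export function extractJson<T>(raw: string): T | null {
  // Fast path: the whole reply is already valid JSON.
  try {
    return JSON.parse(raw) as T;
  } catch {
    /* fall through to fallback parsing */
  }
  // Fallback: grab the outermost {...} span, stripping fences and prose.
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as T;
  } catch {
    return null; // caller triggers a retry with a stricter prompt
  }
}
```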

What's next for Undertone

We're fleshing out the remaining Undertone functionality: completing the onboarding guided discovery conversation, displaying recognized patterns, polishing visualization animations, and hardening the notification system. After that, integrations are the priority. The pipeline from pattern detection to actionable integrations is what gives Undertone its edge over AI coaching apps that stop at reframing or end up as glorified journals. Undertone doesn't stop at insight: it does work for you. Apple Calendar is the first integration, but we're eyeing task managers, habit trackers, and health platforms as next targets. The vision: your voice in, real change out.
