Smart Track — A morning briefing for the Lofty AI-Native PM track

From a 9-widget dashboard to a 4-minute morning playlist. Same Lofty AI underneath. New front door.

Inspiration

The brief asked one question: "How should users first experience Lofty as an AI-native platform?"

We watched working real estate agents open Lofty in the morning. The platform is genuinely powerful — lead scoring, smart plans, hot sheets, named AI agents (Aria, Milo, Vox), behavioral flags, transaction deadlines. All of it ships. All of it works.

But the morning experience is a nine-widget dashboard, every panel screaming "look at me," and the agent has to mentally rank all of it before making a single decision. One realtor told us she spends roughly 38 minutes scanning before her first real action of the day.

The intelligence existed. The ritual didn't. That gap — between "AI is in the product" and "AI is the product" — became Smart Track.

What it does

Smart Track replaces the dashboard with a four-minute morning playlist. Each morning, three AI agents work overnight, rank the day's three or four real decisions, draft the action (an email, a call script, a campaign), attach a confidence score, and present them one card at a time. The user's job becomes editorial review: approve, skip, or edit. Same Lofty data. Inverted contract.

The Impact: If an agent spends 38 minutes in a legacy dashboard vs. 4 minutes on Smart Track, the annual savings for 250 working days is:

(38 min - 4 min) x 250 days = 8,500 minutes/year

Total savings: ~142 hours per year

That’s roughly three and a half working weeks per agent per year. With ~1.6M working US realtors, the upper bound is the kind of number that justifies the rebuild.

How we built it

Three layers, each with one job:

Layer 1 — Source (Lofty, today): Lead scoring, behavioral flags, smart plans, hot sheets, transaction deadlines, IDX activity, CRM. We didn't recreate any of it. The mock getBriefing() service is a contract — swap its body for a real fetch('/api/briefing') and nothing else changes.
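A minimal sketch of that service contract (field names are illustrative assumptions, not Lofty's actual schema). The point is that the UI depends only on `getBriefing()`'s shape, so the mock body can later be swapped for a real `fetch('/api/briefing')` without touching callers:

```typescript
// Hypothetical card shape the UI consumes; fields are assumptions.
interface BriefingCard {
  id: string;
  agent: "Aria" | "Milo" | "Vox"; // which AI agent proposed it
  action: string;                 // drafted artifact: email, call script, campaign
  confidence: number;             // calibrated score, 0..1
}

// Mock today. To go live, replace the body with:
//   return (await fetch("/api/briefing")).json();
// Nothing downstream changes, because the return type stays the same.
async function getBriefing(): Promise<BriefingCard[]> {
  return [
    { id: "1", agent: "Aria", action: "Follow-up email draft", confidence: 0.92 },
  ];
}
```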

Layer 2 — Ranker (what we added): A scoring layer that picks today's top 3–4 decisions, drafts the artifact, and attaches a per-agent confidence score. The persona-as-attribution pattern (Aria for sales, Milo for relationships, Vox for marketing) is accountability architecture, not anthropomorphism — every action is traceable to which agent proposed it.
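The ranking layer can be sketched as a scoring-and-truncation pass. The weights and input signals below are assumptions for illustration, not the real model; what matters is the contract: score every candidate decision, keep the top 3–4, and preserve the proposing agent for attribution:

```typescript
// Hypothetical candidate decision; signal names are assumptions.
interface Candidate {
  id: string;
  agent: "Aria" | "Milo" | "Vox"; // accountability: who proposed it
  urgency: number;                // e.g. deadline proximity, 0..1
  leadScore: number;              // from Lofty lead scoring, 0..1
}

// Illustrative blend of signals; real weights would be tuned/learned.
function score(c: Candidate): number {
  return 0.6 * c.urgency + 0.4 * c.leadScore;
}

// Pick today's top few decisions, highest score first.
function rankBriefing(candidates: Candidate[], limit = 4): Candidate[] {
  return [...candidates].sort((a, b) => score(b) - score(a)).slice(0, limit);
}
```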

Layer 3 — Surface (what the user sees): A linear focus-mode UI: one card at a time, single-decision, segmented progress bar. React 19 + Vite + Tailwind v4 + Framer Motion. Tokens sampled directly from the live Lofty dashboard.
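The one-card-at-a-time flow reduces to a small state transition, sketched here as a pure reducer (a simplified shape, not the app's actual state): approve or skip advances the playlist, edit keeps the user on the current card:

```typescript
type Decision = "approve" | "skip" | "edit";

// Simplified playlist state for illustration.
interface TrackState {
  index: number;      // which card is on screen
  approved: string[]; // ids the user signed off on
}

function step(state: TrackState, cardId: string, decision: Decision): TrackState {
  if (decision === "edit") return state; // stay on the card while editing
  return {
    index: state.index + 1, // advance to the next card
    approved: decision === "approve" ? [...state.approved, cardId] : state.approved,
  };
}
```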

Key Features:

"While you slept" panel: Autonomy proof. Shows what the agents did before the human even logged in.

"Held back" panel: Judgment proof. Tells the user what the agents didn't surface today, and why.

Voice mode: Reads the card aloud and accepts spoken approve/skip.

Brief the Agents: End-of-day voice debrief to capture "off-market" info and route it back to the CRM.

Challenges we ran into

Designing for the AI-skeptic: With a median agent age of 60, we focused on plain English, one decision at a time, a permanent "escape hatch" to the classic dashboard, and "why this?" explainability.

Speech recognition reliability: Built a graceful fallback for webkitSpeechRecognition to ensure live demos never fail.
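The fallback amounts to feature-detecting the Web Speech API before wiring up voice mode. A minimal sketch (function name and shape are ours, not the app's code), which checks for the standard `SpeechRecognition` constructor and the `webkit`-prefixed variant, and drops to button input when neither exists:

```typescript
type VoiceMode = "speech" | "buttons";

// Decide the input mode from whatever global object we're given
// (window in the browser; injectable here so it's testable).
function pickVoiceMode(globalObj: Record<string, unknown>): VoiceMode {
  const Recognition =
    (globalObj as any).SpeechRecognition ??
    (globalObj as any).webkitSpeechRecognition;
  // No recognition constructor available: fall back to tap-to-approve
  // so a live demo never dead-ends on an unsupported browser.
  return typeof Recognition === "function" ? "speech" : "buttons";
}
```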

The pivot from "new AI" to "the missing ritual": We reframed the pitch from "new intelligence" to "the missing surface," leveraging Lofty’s existing AI as a strength rather than trying to replace it.

Pixel-faithful Lofty UI: Rebuilt the classic 9-widget dashboard from scratch for a high-impact "before and after" comparison.

Accomplishments we’re proud of

A fully interactive React app (not a Figma file).

A 7-slide pitch deck in pure HTML, exported to a 3.7 MB .pptx with embedded speaker notes.

A Design & GTM rationale doc answering all core metrics: Time to First Action, Track Completion Rate, and GCI Lift.

Voice in and voice out functionality.

What we learned

The trust UI is harder than the AI: Success requires artifact previews, signal explainability, and calibrated confidence.

"Restraint" is a feature: Showing what the AI chose not to surface builds more credibility than constant noise.

Agent attribution beats personality: Users care about who to hold accountable for a recommendation, not a fictional backstory.

The closed loop is the moat: Learning from human feedback (Step 3) is the ultimate defensibility.

What's next for Smart Track

P0: Inline artifact editing with diff vs. the AI's original.

P0: Learning loop wired to the ranker (every edit becomes training data).

P1: Multi-agent collaboration UI for conflicting actions.

P1: Mobile / PWA for on-the-road agents.

P2: Team mode for Broker-level views.

P2: Counterfactual chat ("Why didn't you suggest I follow up with Michael?").

AI tools used: Claude for mockups and animation; Cursor as the primary IDE.

Built With

  • framer
  • lucide
  • react19
  • tailwindcss
  • vite