Project Story

Inspiration

If you ship a product, you're expected to run UX research.

Recruit participants. Schedule sessions. Interview users. Analyze recordings. Synthesize insights.

But for early-stage teams, that process is expensive and slow. A single round of moderated research can take weeks and cost thousands of dollars. Many founders don’t have access to recruiting pools or dedicated researchers — they test with friends, classmates, or whoever is available.

That leads to small, homogeneous samples and blind spots in critical UX flows.

We kept asking:

What if you could simulate hundreds of diverse users instantly — before shipping?

PersonaLab was born from that idea: making UX research faster, more scalable, and more accessible through AI personas.


What it does

PersonaLab is an AI-powered UX research platform that simulates user behavior using psychology-grounded AI personas.

Teams upload a sequence of screenshots from a product flow — no SDK or code integration required.

We generate AI personas defined by behavioral traits (based on OCEAN personality dimensions) and contextual modifiers like:

  • “In a rush”
  • “First-time buyer”
  • “Price-sensitive”
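
To make this concrete, here is a minimal sketch of how such a persona could be represented in code. The class, field names, and prompt wording are illustrative assumptions, not PersonaLab's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical persona model: five OCEAN dimensions scored 0.0-1.0,
# plus free-form contextual modifiers like "In a rush".
@dataclass
class Persona:
    name: str
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float
    modifiers: list[str] = field(default_factory=list)

    def system_prompt(self) -> str:
        # Render traits and context into a system prompt for the LLM.
        traits = (
            f"openness={self.openness:.1f}, "
            f"conscientiousness={self.conscientiousness:.1f}, "
            f"extraversion={self.extraversion:.1f}, "
            f"agreeableness={self.agreeableness:.1f}, "
            f"neuroticism={self.neuroticism:.1f}"
        )
        context = "; ".join(self.modifiers) or "no special context"
        return f"You are {self.name} ({traits}). Context: {context}."

rushed_buyer = Persona("Dana", 0.4, 0.8, 0.3, 0.6, 0.7,
                       modifiers=["In a rush", "First-time buyer"])
```

Scoring traits numerically rather than as labels makes it easy to sample many distinct personas along each dimension.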

These personas navigate the flow step-by-step and:

  • React in character
  • Flag confusion and hesitation
  • Identify drop-off risks
  • Suggest UX improvements

Instead of raw chat logs, PersonaLab structures outputs into friction points, risk indicators, and actionable fixes — delivered in minutes.
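
A hypothetical sketch of what that structured output might look like; the field names and severity levels below are illustrative, not the actual PersonaLab report format:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# One structured finding, instead of a raw chat log.
@dataclass
class FrictionPoint:
    screen_index: int        # which screenshot in the uploaded flow
    persona: str             # which simulated user hit the issue
    observation: str         # in-character reaction
    drop_off_risk: Severity  # estimated likelihood of abandonment
    suggested_fix: str       # actionable UX improvement

report = [
    FrictionPoint(2, "Dana", "Shipping cost only appears at checkout.",
                  Severity.HIGH, "Surface shipping cost on the product page."),
    FrictionPoint(1, "Sam", "Unsure which button continues the flow.",
                  Severity.MEDIUM, "Use a single primary CTA per screen."),
]

# Teams can then filter straight to the highest-risk issues.
high_risk = [f for f in report if f.drop_off_risk is Severity.HIGH]
```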


Challenges we ran into

1. Screenshot-only navigation

Without DOM access or live interaction data, the agent has to infer UI meaning purely from pixels. Teaching the system to interpret layout, hierarchy, and clickable intent from static screenshots required careful prompt engineering and visual reasoning design.
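
As a rough illustration of the pixels-only approach, the sketch below packages a screenshot for a vision-capable LLM using an OpenAI-style message payload. The prompt text and payload shape are assumptions for illustration, not PersonaLab's actual prompts:

```python
import base64

def build_screen_message(png_bytes: bytes, step: int) -> dict:
    """Wrap one static screenshot plus a visual-reasoning instruction
    into a single user message (OpenAI-style content parts)."""
    encoded = base64.b64encode(png_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text",
             "text": (f"Screen {step}: describe the layout hierarchy, "
                      "list likely clickable elements, and pick the "
                      "element this persona would tap next. "
                      "Answer as JSON.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{encoded}"}},
        ],
    }

msg = build_screen_message(b"\x89PNG...", step=1)
```

Asking the model to name clickable candidates before choosing one is one way to force explicit visual reasoning rather than a one-shot guess.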

2. Figuring out agentic navigation

We had to design a system where personas don’t just “comment” on screens — they move through them step-by-step with consistent internal logic. Creating believable hesitation, decision-making, and drop-off behavior while keeping outputs structured and reproducible was one of our biggest technical challenges.
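
One way to sketch that loop: the persona carries internal state (here, a single frustration score) across screens, so hesitation accumulates and drop-off emerges from the walk rather than being commented per screen. `simulate_step` stands in for the model call and the thresholds are invented for illustration:

```python
def simulate_step(frustration: float, screen: str) -> dict:
    """Stand-in for the per-screen model call: returns the persona's
    in-character reaction to one screenshot."""
    confusing = "checkout" in screen  # toy heuristic for the demo
    return {"confused": confusing, "note": f"Viewed {screen}"}

def run_flow(screens: list[str], patience: float = 0.5) -> list[dict]:
    frustration = 0.0
    trace = []
    for screen in screens:
        step = simulate_step(frustration, screen)
        # Confusion raises frustration; smooth screens let it decay.
        frustration += 0.3 if step["confused"] else -0.1
        frustration = max(frustration, 0.0)
        step["frustration"] = round(frustration, 2)
        trace.append(step)
        if frustration > patience:  # persona abandons the flow
            step["dropped_off"] = True
            break
    return trace

trace = run_flow(["home", "product", "checkout", "confirm"],
                 patience=0.25)
```

Because the state update is deterministic given the model outputs, the same persona produces a reproducible trace, which keeps the downstream reports structured.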

3. Integrating ElevenLabs voice agents

We experimented with voice-based personas to simulate interview-style feedback. Getting natural speech output while preserving persona consistency meant balancing conversational realism against the structured insight extraction our pipeline depends on.

Each of these challenges pushed us to think deeply about how AI can simulate behavior — not just generate responses.
