Inspiration

Every digital marketer has felt it — you spend weeks designing an ad, launch it, and after burning through half your budget, you finally learn it wasn't resonating. Traditional A/B testing is expensive by design: you need real impressions, real clicks, and real money just to generate signal.

We asked a simple question: what if AI personas could replace real users entirely — not just for feedback, but for the A/B test itself?

Instead of paying for thousands of real impressions to learn what works, you could generate synthetic personas that match your target demographic — complete with jobs, habits, and preferences — and have them evaluate, compare, and select the best ad for you. No budget spent, no time wasted, same directional signal.

The inspiration came from watching small e-commerce businesses get crushed by wasteful ad spend. They can't afford the budget required for meaningful A/B tests. We wanted to level the playing field — replacing the entire test-with-real-users cycle with AI persona simulation that anyone can run in under a minute.

Existing tools solve parts of this problem. Some generate ad creatives with AI. Others predict performance with synthetic audiences. But none of them close the loop — no mainstream platform connects AI evaluation, generative redesign, and AI re-evaluation into a single autonomous process. That gap is what AdOptimizer fills.


What it does

AdOptimizer implements what we call the Autonomous Synthetic Pre-testing Loop — a fully automated cycle where AI personas evaluate an ad, generative AI redesigns it based on their feedback, and the same personas re-evaluate the improved candidates. Evaluate → Redesign → Re-evaluate — while keeping the human in the loop as the final decision-maker at every step.

Today's tools stop at one piece of this cycle. AI creative generators (Pencil, AdCreative.ai) produce variants but can't evaluate them. AI pre-testing platforms (Neurons, Kantar) score creatives but can't act on the feedback. AdOptimizer is the first to wire evaluation, generation, and re-evaluation into a single closed loop — delivering an optimized creative in minutes, not weeks, with zero ad spend.

The Three-Phase Loop (up to 3 rounds)

Phase 1 — Persona Evaluation & Feedback

  1. The user uploads an ad image and defines the target audience (age range, gender, interests)
  2. The user selects how many personas to generate and the target performance metric (e.g., CTR)
  3. The system generates distinct, realistic personas matching that demographic
  4. Each persona independently evaluates the ad and decides whether they would click — with a plain-language reason and confidence score
  5. Results are aggregated into a simulated CTR (see the sketch after this list)
  6. The user can review detailed per-persona feedback and optionally add their own comments
  7. The top reasons for not clicking — combined with any human input — are synthesized into an image editing instruction
  8. Amazon Nova Canvas applies image variation to produce 3 candidate creatives (the original + 2 improved variants), preserving the original composition while targeting the specific issues personas identified
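
To make steps 4 through 7 concrete, here is a minimal sketch of the aggregation; the type and function names are illustrative, not our exact code:

```typescript
// Illustrative shape of one persona's verdict (step 4).
interface PersonaEvaluation {
  personaId: string;
  wouldClick: boolean;   // binary click decision
  confidence: number;    // 0-100 self-reported confidence
  reason: string;        // plain-language explanation
}

// Step 5: simulated CTR = share of personas who said they would click.
function simulatedCtr(evaluations: PersonaEvaluation[]): number {
  if (evaluations.length === 0) return 0;
  return evaluations.filter((e) => e.wouldClick).length / evaluations.length;
}

// Step 7: the most confident "no" reasons seed the image editing instruction.
function topNonClickReasons(evaluations: PersonaEvaluation[], limit = 3): string[] {
  return evaluations
    .filter((e) => !e.wouldClick)
    .sort((a, b) => b.confidence - a.confidence)
    .slice(0, limit)
    .map((e) => e.reason);
}
```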

Phase 2 — Persona Re-evaluation

All personas from Phase 1 re-evaluate every candidate head-to-head. Each persona views each variant independently and decides whether they would click. The candidate with the highest CTR is marked as the recommended winner — but the user makes the final call, choosing which candidate to carry forward.
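
A compact sketch of the winner pick, reusing the types from the Phase 1 sketch above (again illustrative, not the exact implementation):

```typescript
// Each candidate creative carries a full panel of persona evaluations.
interface CandidateResult {
  candidateId: string;
  evaluations: PersonaEvaluation[];
}

// Recommend the candidate with the highest simulated CTR;
// the user can still override this recommendation in the UI.
function recommendWinner(candidates: CandidateResult[]): CandidateResult {
  return candidates.reduce((best, current) =>
    simulatedCtr(current.evaluations) > simulatedCtr(best.evaluations) ? current : best
  );
}
```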

This is the core innovation: AI personas replace both the feedback stage and the A/B testing stage, eliminating the need for real ad spend entirely.

Phase 3 — Iterative Refinement

The selected candidate feeds back into Phase 1 for up to 3 rounds of refinement, each round tightening the creative toward your exact audience.


How we built it

4-Agent Architecture

Four specialized agents, each with a single, well-defined responsibility:

PersonaGeneratorAgent
  → CTREvaluatorAgent × N  (parallel per persona)
    → FeedbackAggregatorAgent
      → ImageVariantGeneratorAgent × 3  (concurrent)
        → CTREvaluatorAgent × N × 3  (re-evaluate all candidates)
          → User selects winner → next round
  • PersonaGeneratorAgent: Creates realistic personas from target settings
  • CTREvaluatorAgent: Each persona evaluates the ad with vision (runs in parallel)
  • FeedbackAggregatorAgent: Aggregates results, generates the image variation prompt
  • ImageVariantGeneratorAgent: Calls Nova Canvas to produce 3 candidate images

The CTREvaluatorAgent is used twice per round: once in Phase 1 to evaluate the current ad, and again in Phase 2 to re-evaluate all 3 candidates head-to-head. All agents run on Amazon Nova Lite via Bedrock; image generation uses Amazon Nova Canvas, also via Bedrock.
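
For reference, a single CTREvaluatorAgent call could look roughly like this with the Bedrock Converse API. The prompt wording, the JSON response contract, and the model ID (which may need an inference-profile prefix in some regions) are assumptions, not our exact code:

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// One persona looks at the ad image and returns a click decision.
async function evaluateAsPersona(personaDescription: string, adImagePng: Uint8Array) {
  const response = await client.send(
    new ConverseCommand({
      modelId: "amazon.nova-lite-v1:0", // assumed ID; some regions require a "us." prefix
      messages: [
        {
          role: "user",
          content: [
            {
              text:
                `You are this person: ${personaDescription}. ` +
                `Look at the ad image and answer in JSON: ` +
                `{"wouldClick": true|false, "confidence": 0-100, "reason": "..."}`,
            },
            { image: { format: "png", source: { bytes: adImagePng } } },
          ],
        },
      ],
      inferenceConfig: { maxTokens: 300, temperature: 0.8 },
    })
  );
  // Assumes the model returns plain JSON in the first text block.
  const text = response.output?.message?.content?.[0]?.text ?? "{}";
  return JSON.parse(text) as { wouldClick: boolean; confidence: number; reason: string };
}
```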

Real-Time Streaming UI

Persona evaluations stream live to the UI via Server-Sent Events (SSE). Each persona card slides in as its evaluation completes — not after all are done — giving users a real-time window into how their audience is reacting, one person at a time.
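
A stripped-down sketch of the SSE endpoint in Next.js App Router style; `startPersonaEvaluations` is a hypothetical stand-in for kicking off the parallel CTREvaluatorAgent calls:

```typescript
// app/api/evaluate/route.ts (sketch)
declare function startPersonaEvaluations(): Array<
  Promise<{ personaId: string; wouldClick: boolean; confidence: number; reason: string }>
>; // hypothetical helper: one promise per persona, started in parallel

export async function GET() {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      const pending = startPersonaEvaluations();
      await Promise.all(
        pending.map(async (evaluation) => {
          const result = await evaluation;
          // Emit one SSE event as soon as this persona finishes, not after the whole batch.
          controller.enqueue(
            encoder.encode(`event: persona\ndata: ${JSON.stringify(result)}\n\n`)
          );
        })
      );
      controller.enqueue(encoder.encode("event: done\ndata: {}\n\n"));
      controller.close();
    },
  });
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```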

Why Image Variation (Not Generation From Scratch)

Brand identity is a hard constraint in real advertising. Instead of generating images from scratch — which would destroy brand colors, layout, and product placement — we use Nova Canvas image variation to produce improved versions that preserve the original composition. The variation prompt is synthesized directly from aggregated persona feedback, closing the loop between "why they didn't click" and "what to fix."
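
Roughly, each variant request looks like the sketch below. The field names follow the Nova Canvas IMAGE_VARIATION task as we use it, and the concrete values (similarityStrength, output size) are assumptions to tune rather than fixed settings:

```typescript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

// Produce one improved variant of the ad while preserving its composition.
async function generateVariant(adImageBase64: string, editInstruction: string) {
  const response = await client.send(
    new InvokeModelCommand({
      modelId: "amazon.nova-canvas-v1:0",
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify({
        taskType: "IMAGE_VARIATION",
        imageVariationParams: {
          images: [adImageBase64],     // the current creative
          text: editInstruction,       // synthesized from persona feedback
          similarityStrength: 0.7,     // higher = stay closer to the original
        },
        imageGenerationConfig: { numberOfImages: 1, width: 1024, height: 1024 },
      }),
    })
  );
  const payload = JSON.parse(new TextDecoder().decode(response.body));
  return payload.images[0] as string; // base64-encoded image
}
```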

Stack

  • Frontend: Next.js 16, Framer Motion, Tailwind CSS
  • LLM: Amazon Nova Lite (Bedrock)
  • Image Generation: Amazon Nova Canvas (Bedrock)
  • Streaming: Server-Sent Events (SSE)
  • Storage: AWS S3
  • Database: MySQL 8.0
  • Deployment: AWS EC2 + Docker Compose

Challenges we ran into

Getting meaningful signal from small persona panels

With 5–10 personas making a binary click decision, simulated CTR collapses into a small set of discrete values (0%, 20%, 40%...). We solved this by:

  • Grounding each persona deeply in their occupation, behavior, and specific interests — so their reasoning is differentiated, not generic
  • Using a 0–100 confidence score alongside the binary decision for richer signal (see the sketch after this list)
  • Re-evaluating all candidates with the full persona panel in Phase 2 for consistent comparison
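
One way to blend the confidence score into the score itself, sketched with an illustrative formula rather than our exact scoring code:

```typescript
// Blend the binary decision with the persona's confidence so a 5-persona panel
// yields a smoother score than 0% / 20% / 40% steps (illustrative formula).
function confidenceWeightedCtr(
  evaluations: { wouldClick: boolean; confidence: number }[]
): number {
  if (evaluations.length === 0) return 0;
  const total = evaluations.reduce((sum, e) => {
    const c = e.confidence / 100;
    // A confident "yes" counts near 1, a hesitant "yes" less;
    // a confident "no" counts near 0, a hesitant "no" slightly more.
    return sum + (e.wouldClick ? 0.5 + 0.5 * c : 0.5 - 0.5 * c);
  }, 0);
  return total / evaluations.length;
}
```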

Preventing creative drift across rounds

Early attempts would drift far from the original — losing brand colors, repositioning products, rewriting copy in an unrecognizable style. We fixed this by constraining variation prompts strictly to visual changes only (contrast, CTA prominence, layout adjustments) and tuning Nova Canvas similarity strength to preserve the existing composition.

Keeping the feedback loop from stagnating

By round 2 or 3, the creative is already improved. Generating further signal required the FeedbackAggregatorAgent to look beyond top-line CTR and surface marginal improvements — micro-copy clarity, button affordance, visual hierarchy — rather than repeating the same coarse feedback from the previous round.
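
A hedged sketch of how the aggregation prompt can carry that round-to-round context; the wording and structure are illustrative, not the production prompt:

```typescript
// Build the FeedbackAggregatorAgent prompt so later rounds target finer-grained
// issues instead of repeating what earlier rounds already fixed.
function buildAggregationPrompt(
  round: number,
  nonClickReasons: string[],
  alreadyAddressed: string[]
): string {
  return [
    `Round ${round} of 3. Personas gave these reasons for not clicking:`,
    ...nonClickReasons.map((r) => `- ${r}`),
    alreadyAddressed.length
      ? `Issues already addressed in earlier rounds (do not repeat them): ${alreadyAddressed.join("; ")}`
      : "",
    "Propose one concrete, purely visual editing instruction (contrast, CTA prominence, layout, micro-copy clarity).",
  ].join("\n");
}
```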

Streaming with parallel agent execution

Coordinating SSE output while running parallel agent calls required careful event ordering. Concurrent evaluations complete in arbitrary order, but the UI needs to receive them in a natural, sequential flow. We built a stream controller that buffers and re-orders events before emitting them to the client.
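
A simplified sketch of that re-ordering buffer (names are illustrative):

```typescript
// Concurrent evaluations finish out of order, but the client should receive
// persona 1, 2, 3, ... in sequence. Buffer results until their turn comes up.
class OrderedEmitter<T> {
  private buffer = new Map<number, T>();
  private nextIndex = 0;

  constructor(private emit: (item: T) => void) {}

  // Called whenever any parallel evaluation finishes, in arbitrary order.
  push(index: number, item: T): void {
    this.buffer.set(index, item);
    // Flush everything that is now contiguous from the front.
    while (this.buffer.has(this.nextIndex)) {
      this.emit(this.buffer.get(this.nextIndex)!);
      this.buffer.delete(this.nextIndex);
      this.nextIndex++;
    }
  }
}
```

Each parallel agent call is tagged with its persona index when it starts; a result that arrives early simply waits in the buffer until its turn, then flows out to the SSE stream.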


Accomplishments that we're proud of

  • End-to-end in under 60 seconds. Persona generation, parallel CTR evaluation, feedback aggregation, and 3-image concurrent generation — the full Phase 1 pipeline completes in under a minute.

  • A UI that makes AI feel alive. The streaming persona feed — each card sliding in as an agent completes — turns what would be a loading spinner into a live audience panel. Watching 10 AI personas react to your ad in real time feels like market research happening in front of you.

  • Image variation that actually preserves brand identity. Getting Nova Canvas to produce meaningful improvements to an existing ad — without losing the original's look and feel — took careful prompt engineering and similarity tuning.

  • A genuinely closed loop — the first of its kind. The winner of each round becomes the input for the next. The system doesn't just produce feedback — it acts on it, regenerates, and re-evaluates. Evaluate → Redesign → Re-evaluate, across 3 rounds — with the user as the director at every decision point. Existing tools handle fragments of this pipeline; AdOptimizer is the first to run the entire cycle end-to-end.


What we learned

Persona specificity is everything. Generic personas produce noise. The more grounded a persona is — their job, daily habits, specific interests — the more differentiated and actionable their CTR reasoning becomes. Vague personas all say the same thing.

Image variation is the right tool for iterative creative work. Full generation is powerful but destructive. For brand-consistent ad optimization, image variation strikes the right balance: it improves what needs improving and leaves everything else alone.

Streaming changes the experience fundamentally. The same data delivered as a batch result feels flat. Delivered live, persona by persona, it creates genuine engagement — users actually read each evaluation because it arrives like a message, not a report.


What's next for AdOptimizer

  • Full-funnel metrics — Extend beyond CTR to CVR and ROAS optimization, giving each persona a richer decision model
  • Video creative support — Apply the same persona simulation loop to short-form video ads, evaluating frame-by-frame engagement likelihood
  • Reusable persona panels — Let advertisers save, validate, and reuse persona sets across campaigns, building institutional knowledge about their audience over time
  • Competitive benchmarking — Show how your optimized creative compares against industry-average CTR for your category and audience segment
  • Real ad platform validation — Optional connection to Google Ads / Meta Ads APIs to validate AI persona predictions against real campaign performance
