If you tried the example in the first post in this series, you saw that the insight you get is excellent, at least as directional feedback that offers a different perspective. The problem is that in real life you're rarely building an email to target a single person: you're usually creating content that has to resonate across an entire buyer group made up of multiple personas. That adds variables to the challenge and makes the outcome of your campaign harder to predict.

In this post you'll introduce multiple personas to your AI as a synthetic panel that represents the diversity of your audience. Each persona in the panel evaluates your content independently, and the patterns that emerge across those evaluations are far more useful than any single reaction.

The building blocks for a synthetic panel are the same as in the previous post: persona + content + question. The difference is how you input them and make the process scalable for your workflow.
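
If you want to script the workflow rather than paste everything into a chat window, the three building blocks map naturally onto a small data structure. Here's a minimal sketch in Python; the `PanelRun` name and the prompt wording are my own illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class PanelRun:
    """One evaluation unit: a persona reads content and answers questions."""
    persona: str   # full persona profile (markdown)
    content: str   # the email or asset under test
    question: str  # the evaluation questions or framework

    def to_prompt(self) -> str:
        # Same persona + content + question structure as the single-persona
        # test, assembled programmatically so it can repeat across a panel.
        return (
            f"You are the following person:\n{self.persona}\n\n"
            f"Read this content:\n{self.content}\n\n"
            f"Answer these questions in character:\n{self.question}"
        )
```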

Personas

The value of a panel comes from deliberate variation across the attributes that matter for your content. For SecureHorizon's webinar email (SecureHorizon being our fictional company from the previous post), a useful panel might vary across the following attributes (a quick code sketch of the idea follows the list):

  • Role and seniority – IT Ops Manager, Security Analyst, CIO, Sysadmin, Procurement Lead
  • Company size and type – mid-market fintech, enterprise government agency, 50-person startup
  • Geography – Melbourne, Singapore, London
  • Technical depth – hands-on-keyboard vs. budget-holder who hasn't touched a terminal in years
  • Buying stage – never heard of SecureHorizon vs. already evaluated a competitor vs. existing customer
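
Expressed as data, the point is that each persona changes several attributes at once rather than walking a full cross-product. The field names and rows below are illustrative only:

```python
# Illustrative panel spec: deliberate variation, not a full cross-product.
panel_spec = [
    {"role": "IT Ops Manager",   "company": "mid-market fintech",
     "geo": "Melbourne", "depth": "hands-on",      "stage": "existing customer"},
    {"role": "CIO",              "company": "enterprise government agency",
     "geo": "Singapore", "depth": "budget-holder", "stage": "evaluated a competitor"},
    {"role": "Security Analyst", "company": "50-person startup",
     "geo": "London",    "depth": "hands-on",      "stage": "never heard of vendor"},
    # ...and so on, until the attributes that matter are all covered
]
```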

I let AI create the prompt for this exercise, which was inspired by the first post in this series.

Prompt: Let AI Create Your Panel

I'm building a synthetic panel to test B2B marketing content (specifically email campaigns) for an IT security/resilience vendor targeting the ANZ mid-market. I need you to create 10 detailed personas that represent the diversity of this audience.

Panel design requirements:

  • Roles and seniority: Cover the full spectrum of people who would receive a B2B IT vendor email – from hands-on practitioners (sysadmins, engineers, SOC analysts) through mid-level managers (IT Ops, Service Delivery) to senior decision-makers (CISO, CIO). Include at least one person who influences purchasing decisions but doesn't make them.
  • Company size and type: Range from small startups (~60 employees) through mid-market (200-2,000) to large enterprises and government (3,000-5,000+). Mix industries: financial services, logistics, healthcare, government, SaaS, e-commerce, insurance, professional services, fintech, managed services.
  • Geography: Cover all major Australian states (VIC, NSW, QLD, WA, SA, ACT, TAS) plus New Zealand (Auckland and Wellington). Every persona should be in a different city.
  • Demographics: Ages from late 20s to early 50s. Balanced gender split (5/5). Income should be realistic for each role and market. Education from self-taught/TAFE through to Master's/MBA. Include relevant industry certifications where appropriate (CISSP, ITIL, AWS, CompTIA, ISO 27001, etc.).
  • Risk tolerance: Vary from very low (can't afford outages or untested tools) to high (will trial something today if the docs look good).

Persona structure β€” use this exact template for each:

# [Full Name]

## Demographics
- Age:
- Role: [title] at [company type] ([size] employees)
- Location: [City, State/Region, Country]
- Income: [in local currency]
- Education: [degree/qualification (institution)], [certifications if relevant]
- Years in role:

## Psychographics
- Motivated by:
- Frustrated by:
- Decision style:
- Risk tolerance:
- Values:

## Content & Email Behavior
[5 bullet points covering: daily email volume, what makes them open a vendor email, content format preferences, social media/community habits, and one distinctive behavioral detail]

## Brand Relationship
[3 bullet points covering: how they evaluate vendors, trust signals or biases, and what they're currently working on or evaluating]

Important guidelines:

  • Each persona should feel like a real person, not a demographic checkbox. Give them specific tools they use, specific frustrations, specific habits.
  • The psychographics should directly inform how they'd react to marketing content – their skepticism level, what language triggers them, what makes them click vs. delete.
  • The "Content & Email Behavior" section is critical β€” this is what drives the evaluation. Be specific about what makes each person open or ignore a vendor email.
  • The "Brand Relationship" section should include a current project or priority that creates context for how they'd evaluate new vendor content.
  • Include one persona carried forward from a previous exercise: Rachel, originally an IT Operations Manager in Melbourne. Evolve her profile to fit this more detailed schema.
  • Names should reflect the cultural diversity of ANZ (Anglo-Australian, Asian-Australian, Māori, European heritage, etc.).
  • Save each persona as a separate numbered markdown file (01_rachel.md, 02_brendan.md, etc.).
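
Because the prompt saves each persona as a numbered markdown file, loading the panel back into a script is trivial. A small helper, assuming the files live in a `personas/` directory (the directory name is my choice; the file naming comes from the prompt above):

```python
from pathlib import Path

def load_panel(panel_dir: str) -> dict[str, str]:
    """Read every numbered persona file (01_rachel.md, 02_brendan.md, ...)."""
    personas = {}
    for path in sorted(Path(panel_dir).glob("*.md")):
        name = path.stem.split("_", 1)[-1]  # "01_rachel" -> "rachel"
        personas[name] = path.read_text(encoding="utf-8")
    return personas

panel = load_panel("personas/")
print(f"Loaded {len(panel)} personas: {', '.join(panel)}")
```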

Question (or Evaluation Criteria)

The process is the same as your single-persona test, just repeated: each persona gets the same content and the same questions, and responds independently. The AI then reviews the individual responses and aggregates the feedback into something you can act on. The following example is a generic prompt that evaluates the email in five stages.

Prompt: Creating the Evaluation Criteria

Use this framework when testing email content against the persona panel.

Stage 1: Inbox Decision (Subject Line + Preheader + Sender)

  1. Would you open this email? (Yes / No / Maybe)
    • Confidence: (1-10)
  2. Why or why not? (1-2 sentences)
  3. Does the sender name feel trustworthy? (Yes / No – why?)
  4. Does this feel like: (Relevant content / Generic marketing / Spam)

Stage 2: Body Reaction (if opened)

  1. First impression after reading the body (1 sentence)
  2. Is this relevant to your current work or priorities? (Yes / Somewhat / No – explain)
  3. Is the message clear? (Yes / No – what's confusing?)
  4. How does it make you feel? (e.g., curious, skeptical, annoyed, interested, indifferent)
  5. Do you trust the claims being made? (1-10, with brief reason)

Stage 3: CTA Response

  1. Would you click the CTA? (Yes / No / Maybe)
    • Confidence: (1-10)
  2. What do you expect happens after clicking? (1 sentence)
  3. What is your main hesitation or objection? (1 sentence)
  4. What would make you more likely to click? (1 sentence)

Stage 4: Overall

  1. Would you unsubscribe after this email? (Yes / No)
  2. Would you forward this to a colleague? (Yes / No – who and why?)
  3. One thing that works well: (1 sentence)
  4. One thing to improve: (1 sentence)

Stage 5: Aggregated Recommendations

This stage runs ONCE after all personas have completed Stages 1-4. It synthesises patterns across the full panel into a single set of actionable edits.

Ground rules:

  • All recommendations must serve the campaign goal stated in the content's metadata. Every suggested edit should move the content closer to achieving that goal. Do not optimise for secondary objectives.
  • Prioritise and weight feedback from the target segment defined in the content's metadata. Personas that match the target segment carry more influence than those outside it. If a persona falls outside the intended audience, their feedback is noted but should not drive primary recommendations.
  • Only reference copy, elements, and language that exist in the supplied content. Do not introduce concepts, frameworks, or terminology from persona backgrounds.
  • Recommendations must be edits to what was provided – rewrites, cuts, additions, or restructures of the actual content.
  • Cite which personas (by name or count) support each recommendation.

Deliverables:

  1. Subject line rewrite: Based on panel-wide open/skip patterns, suggest 1-2 alternative subject lines. Explain what was weak in the original and what the rewrites fix. Cite how many personas flagged the issue.
  2. Preheader rewrite: Suggest an alternative preheader that complements (not duplicates) the subject line. Reference specific panel feedback on why the original didn't work.
  3. Body edits: Identify the 1-2 weakest sentences or paragraphs in the body (the ones most personas flagged). Provide a rewrite for each. Explain what was wrong and what the new version does better.
  4. CTA rewrite: Suggest an alternative CTA (button text + surrounding copy) that addresses the most common objection across the panel. State what that objection was and how many personas raised it.
  5. Elements to remove: What could be cut entirely without losing impact, based on panel consensus? (Or "Nothing – the panel found it tight.")
  6. Elements to add: What is missing from the supplied content that multiple personas said would increase their likelihood to engage? Only suggest additions that are directly relevant to what's already in the copy.
  7. Structural or formatting issues: Flag any layout, repetition, or missing-information problems that multiple personas noted (e.g., missing time/date, duplicate content, unclear audience).
  8. Segment-specific notes: If the panel reveals that different audience segments need fundamentally different messaging, note which segments diverged and what each group needs – but frame recommendations as variants of the existing copy, not net-new content.

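One of the ground rules above, weighting target-segment personas more heavily, can also be enforced deterministically outside the model if you collect structured answers. A minimal sketch; the response keys (`persona`, `open_decision`) and the 2:1 weighting are my assumptions, not part of the framework:

```python
from collections import Counter

def weighted_open_votes(responses: list[dict], target_segment: set[str]) -> Counter:
    """Tally 'Would you open this email?' answers, giving personas inside
    the campaign's target segment twice the weight (weights illustrative)."""
    votes = Counter()
    for r in responses:
        weight = 2 if r["persona"] in target_segment else 1
        votes[r["open_decision"]] += weight  # "Yes" / "No" / "Maybe"
    return votes
```
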
The Results

With the content, the personas, and the evaluation criteria in place, all you have to do is prompt your AI to run it, and your feedback is ready. I've attached the output for SecureHorizon here; I'd recommend skipping straight to Stage 5, where the meat of the feedback is.
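
If you'd rather orchestrate the run with a script than a chat window, the whole flow is one loop plus a final aggregation pass. A sketch under stated assumptions: `ask_model` is a placeholder for whatever chat-completion call your provider offers, and `panel` is the dictionary loaded by the helper shown earlier:

```python
def ask_model(prompt: str) -> str:
    """Placeholder: swap in your LLM provider's chat-completion call."""
    raise NotImplementedError("wire up your model client here")

def run_panel(panel: dict[str, str], content: str, framework: str) -> str:
    """Stages 1-4 run once per persona, independently; Stage 5 runs once."""
    responses = {}
    for name, persona in panel.items():
        responses[name] = ask_model(
            f"You are this person:\n{persona}\n\n"
            f"Evaluate the email below using Stages 1-4 of the framework. "
            f"Answer in character; you have not seen anyone else's answers.\n\n"
            f"EMAIL:\n{content}\n\nFRAMEWORK:\n{framework}"
        )
    individual = "\n\n".join(f"## {n}\n{r}" for n, r in responses.items())
    return ask_model(
        f"Here are {len(responses)} independent persona evaluations:\n\n"
        f"{individual}\n\n"
        f"Now run Stage 5 of the framework once, following its ground rules:\n"
        f"{framework}"
    )
```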

Extra Ideas to Enrich Your Panel's Output

  • Make each persona reason step by step through the content. "I read the subject line and felt... then I scanned the body and noticed... my hesitation is..."
  • Track how a persona's engagement shifts section by section, line by line. Where does interest peak? Where does it drop?
  • Give personas situational context before they read. "You just had a security incident last week" or "Your budget was just cut by 20%." Same content, different mindset, different result (see the sketch after this list).
  • Run your content AND your competitor's content through the same panel. Example: your webinar invite vs. a competitor's. 7/10 prefer their subject line; 6/10 prefer your speaker lineup.
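
Injecting situational context is a one-line transformation on the persona profile. A minimal sketch, assuming the persona files from earlier; the `## Current Situation` heading is my own convention, not part of the template:

```python
from pathlib import Path

def with_situation(persona: str, situation: str) -> str:
    """Append situational context so the same persona reads
    the same content in a different mindset."""
    return f"{persona}\n\n## Current Situation\n{situation}"

rachel_md = Path("personas/01_rachel.md").read_text(encoding="utf-8")
stressed_rachel = with_situation(
    rachel_md,
    "You had a security incident last week and your budget was just cut by 20%.",
)
```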

Conclusion

Everything above treats each persona as isolated (they read the content alone, respond alone, and never see what the other personas said). That's powerful for structured evaluation, but it misses something real audiences do all the time: they talk to each other.

Rachel forwards the email to her sysadmin. The CISO mentions it in a leadership meeting. A procurement lead asks the security team whether the vendor is legit before registering. These interactions change outcomes in ways that isolated evaluations can't predict.

Multi-agent patterns simulate exactly that: personas that respond not just to your content, but to each other.