Run Meta Lookalikes with Exclusions (without resetting learning)

Use Primer to build high-quality seed and exclusion audiences from your CRM, sync them to Meta, and A/B test lookalike-only vs. lookalike+exclusions—without disrupting your winning campaigns.

What you’ll do

  1. Create a Champion (Best Customers) seed in Primer

  2. Create Anti-Persona exclusion lists in Primer

  3. Sync both to Meta

  4. Set up a clean A/B test in Meta Experiments (no reset to your main campaign’s learning)

  5. Measure lift on lead quality and acquisition efficiency

For additional details and real campaign outcomes, see Primer's Meta Lookalikes Use Case.


1) Build your audiences in Primer

Champion (Best Customers) seed

  1. In Primer, create a Best Customers / Champion audience from Closed/Won accounts or high-quality leads.

  2. Use Historical Win Rates to surface firmographic patterns (industry, employee size, seniority/title, etc.).

  3. (Optional) Narrow by titles that most often convert (e.g., Director+ in RevOps/IT Security).

  4. Sync → Meta (destination: Facebook/Instagram).

Anti-Persona exclusions

  1. Create an Anti-Persona audience for companies/contacts you don’t want (e.g., student emails, <50 employees, irrelevant titles).

  2. Add rule-based filters that reflect “non-ICP” traits (e.g., EDU domains, job seekers, freelancers, hobbyists).

  3. Sync → Meta as exclusion lists.
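The rule-based filters above can be sketched in code. This is a minimal illustration only, assuming a hypothetical CRM export of contact dicts; the field names (`email`, `title`, `company_size`) and the non-ICP keyword list are assumptions, not Primer's actual schema.

```python
# Hedged sketch: rule-based anti-persona filters over a hypothetical
# CRM export. Field names and thresholds are illustrative assumptions.
NON_ICP_TITLES = {"student", "freelancer", "job seeker", "hobbyist"}

def is_anti_persona(contact):
    """Return True if the contact matches a non-ICP exclusion rule."""
    email = contact.get("email", "").lower()
    title = contact.get("title", "").lower()
    size = contact.get("company_size", 0)
    if email.endswith(".edu"):                       # EDU / student emails
        return True
    if size and size < 50:                           # companies under 50 employees
        return True
    if any(t in title for t in NON_ICP_TITLES):      # irrelevant titles
        return True
    return False

contacts = [
    {"email": "jane@university.edu", "title": "Student", "company_size": 0},
    {"email": "cto@acme.com", "title": "Director of IT Security", "company_size": 400},
]
exclusions = [c for c in contacts if is_anti_persona(c)]
```

In practice you would build these rules in Primer's audience builder rather than in code; the sketch just makes the exclusion logic concrete and auditable.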


2) Configure your base ad set in Meta

  1. In Meta Ads Manager, create or duplicate an ad set that targets a 1% lookalike built from your Champion seed.

  2. Disable Advantage+ Audience Expansion to maintain a clean test boundary.

  3. Leave exclusions off for now—this version will be your “lookalike-only” control.

Tip: Keep creative, placements, optimization event, and budget identical across variants. The only change you’ll test is exclusions.


3) Test without resetting learning: Meta Experiments (A/B Test)

This approach does not reset learning on your scaled, already-performing setup.

A. Create your two variants

  • Variant A (Control): Your existing 1% lookalike ad set (no exclusions).

  • Variant B (Test): Duplicate that ad set and add your Anti-Persona exclusions. Everything else stays identical.

B. Launch the A/B Test in Experiments

  1. In Ads Manager, go to All Tools → Analyze & Report → Experiments and choose A/B Test.

  2. Select your two ad sets (A = lookalike-only; B = lookalike+exclusions).

  3. Set Schedule (test window), Budget split (e.g., 50/50), and Primary KPI (e.g., Qualified Signup / SQL rate / CPA).

  4. Name the test (e.g., LLA 1% w/ vs. w/o Exclusions), Review, and Create Test.

What Experiments does during the test

  • Evenly splits delivery between variants and prevents audience overlap during the test window.

  • Your original optimization history stays with the control; the test duplicate learns independently.

  • After the test, delivery returns to normal and you can pause the loser.


4) Alternative: Test inside a CBO campaign (with caution)

If you’re using Campaign Budget Optimization (CBO):

  1. In the same campaign, add a new ad set (duplicate the control) and apply exclusions to the new ad set only.

  2. Let CBO allocate budget between ad sets.

Caveat: The new ad set will enter learning. Your existing ad set generally won’t reset, but budget shifts can create ripple effects. Monitor closely.


5) Analyze and decide

In Experiments → Results (or Ads Manager reporting), compare:

  • Lead quality: SQL rate, Opportunity rate, Qualified submission rate

  • Efficiency: CPA/CPL, CPM, CTR, CVR

  • Down-funnel: Pipeline $ per 1,000 impressions, win rate by cohort

  • Operational: Match rate to CRM, disqualify rate (junk), form-completion rate

Decision rule of thumb

  • If exclusions improve SQL rate and CPA without killing volume, promote the exclusion variant.

  • If volume drops too hard or CPA rises, keep lookalike-only and revisit your Anti-Persona logic (tighten only the most wasteful segments).
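The decision rule above can be expressed as a short calculation. This is a sketch with made-up numbers; the variant results, the 30% volume-drop tolerance, and the field names are all illustrative assumptions.

```python
# Hedged sketch: compare the two arms on SQL rate and CPA, then apply
# the rule of thumb. All inputs and thresholds are illustrative.

def summarize(spend, leads, sqls):
    """Return (SQL rate, cost per SQL) for one test arm."""
    sql_rate = sqls / leads if leads else 0.0
    cpa = spend / sqls if sqls else float("inf")
    return sql_rate, cpa

# Hypothetical results after the test window
a_rate, a_cpa = summarize(spend=5000, leads=400, sqls=60)   # A: lookalike-only
b_rate, b_cpa = summarize(spend=5000, leads=310, sqls=68)   # B: + exclusions

volume_floor = 0.7  # tolerate up to a 30% drop in lead volume (assumption)
keep_exclusions = (
    b_rate > a_rate          # exclusions improve SQL rate
    and b_cpa <= a_cpa       # without raising cost per SQL
    and 310 / 400 >= volume_floor  # without killing volume
)
```

If `keep_exclusions` comes out False because of the volume check, that maps to the "revisit your Anti-Persona logic" branch: loosen everything except the most wasteful segments and rerun.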


Troubleshooting & tips

  • No reset to your main performer: Use Experiments so your winning ad set keeps its learning and stability; the duplicate is the one that learns.

  • Keep variables isolated: Same creative, placements, optimization event, bid strategy, and budget split. Only change exclusions.

  • Disable Advantage+ Audience Expansion: Maintain clean boundaries for the test.

  • Test length: Aim for 7–14 days, or until you reach a statistically meaningful number of conversions (e.g., 100+ qualified leads across both arms, if feasible).

  • Document seeds & exclusions: Note seed composition (e.g., top 500 Closed/Won, last 12–24 months) and exclusion logic so results are reproducible.
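To sanity-check whether your test window produced a real difference rather than noise, you can run a two-proportion z-test on qualified-lead rates. A pure-stdlib sketch; the conversion counts are illustrative assumptions, not real campaign data.

```python
# Hedged sketch: two-proportion z-test on qualified-lead rate between
# the two arms. Stdlib only; input numbers are illustrative.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for the rate difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# e.g., 60 SQLs from 400 leads (control) vs. 68 from 310 (exclusions)
z, p = two_proportion_z(60, 400, 68, 310)
# A p-value below ~0.05 suggests the lift is unlikely to be noise.
```

If the p-value is high, extend the test window or collect more conversions before pausing either variant.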


FAQ

Will exclusions reset the learning phase?

  • If you edit your live scaled ad set, significant targeting changes (like adding exclusions) can trigger a reset.

  • Using Experiments with a duplicate protects your main ad set; only the test variant learns.

Can I start the test with just one ad set?

  • You need two variants to compare. If you only have one live ad set, duplicate it and add exclusions to the copy for the test.

What % lookalike should I use?

  • Start with 1% for signal purity. If you need scale, test 2–5% in follow-up experiments after you’ve proven the exclusion lift.

What should I monitor first?

  • SQL rate and CPA (or cost per qualified signup). A better exclusion strategy should reduce junk leads and stabilize CPA.
