
The 60-Day Playbook for B2B Performance Max

11 min read
Oct 24, 2025

Why I use Performance Max for B2B pipelines

I care about pipeline health, not vanity clicks. I want qualified conversations, stable cost per lead, and proof that ad dollars are moving deals forward while I stay focused on revenue. That is where Performance Max inside Google Ads can pull its weight. It is not magical, and it is not a shortcut. But when I feed it the right data and set a few guardrails, it becomes a dependable source of intent, not just traffic.

For a CEO tracking revenue, the pitch is simple: Performance Max can grow qualified lead volume while holding or improving cost per qualified lead. In my experience across B2B services, and consistent with Google’s public guidance, I typically see early learning in the first two weeks, stabilization by week four, and more predictable cost per action by week six. That early phase matters: it is when the system tests placements and creative at scale. I expect some wobble, then the targeting gets sharper.

Useful planning ranges I have seen:

  • Early lift: 10 to 25 percent more total conversions on the same spend once the campaign steadies
  • Cost per qualified lead: flat to down 10 percent by day 45 when value-based bidding is active
  • Time to learning complete: four to six weeks for accounts with roughly 50 or more conversions per month at the optimization event (aligned with Google’s automation guidance)

Accountability checkpoints I run:

  • Day 7: asset coverage review, brand exclusions verified, initial search term themes checked
  • Day 14: path-to-value read using lead-to-SQL rate, early query theme triage, negative audiences added
  • Day 30: budget reallocation by asset group, value rules refined, SQL/opportunity import health check
  • Day 60: cohort analysis by channel, incremental lift readout, cost per opportunity trendline

One note that surprises many teams: Performance Max often looks worse than Search-only in the first ten days, then surpasses it by week four. The reason is lag. B2B sales cycles push value downstream, and Google’s automation needs that downstream data to get smart.

What Performance Max actually does

Performance Max is a single campaign type that uses automation to place ads across Google’s inventory and to manage bids and creative in real time. I upload assets, feed the system my conversion data, and it figures out where to show ads to hit my goals. It complements Search; it does not replace it. Keyword-based Search plus Performance Max typically perform better together than either alone.

Performance Max reaches Search, YouTube, Display, Discover, Gmail, and Maps. That reach is a strength and a risk. Without brand controls, some traffic cannibalizes branded Search. I protect exact-match brand terms with campaign-level brand exclusions in Performance Max and account-level negative keywords for brand plus “jobs,” “careers,” and “support” queries. I use Search Term Insights inside the Performance Max campaign to see aggregated themes, then refine with negatives or brand limits. I also accept that placement-level detail is thin. Instead, I monitor by asset group, audience signals, the combinations report, and, most importantly, CRM stages to see which asset group actually drives SQLs and opportunities, not just clicks. Where I need stronger brand safety, I lean on account-level content suitability settings to curb low-value contexts.

Lead quality comes from the data I send it

Lead quality is where Performance Max either shines or stumbles. I make it shine by teaching it what “good” looks like with CRM truth, not just form submissions.

What I implement:

  • Push offline conversions back into Google Ads. I capture GCLID and, where applicable, GBRAID/WBRAID on forms, match to contacts in my CRM, then import stages like SQL and opportunity using Google’s offline conversion import. This gives bidding the signals that correlate with revenue.
  • Turn on Enhanced Conversions for leads. I hash and pass first-party fields (email, phone, name) so Google can improve matching when cookies fail. This is a documented best practice for improving lead-gen measurement.
  • Map stages to values. Ebook downloads are not demo requests. I assign values that reflect revenue potential (for example: ebook 5 dollars, webinar 20, demo 200, SQL 600, opportunity 1,500). The exact numbers vary by business; the structure matters.
  • Bid to value, not volume. With values in place, I use tROAS or maximize conversion value so the system favors high-intent inquiries over cheap form fills.
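
The stage-to-value mapping and import row can be sketched in a few lines. This is a hypothetical simplification of Google’s offline conversion import: the dollar values are the illustrative ones from the list above, and the column names mirror the spreadsheet template rather than the API.

```python
# Hypothetical sketch: map CRM stages to illustrative conversion values and
# build one row for an offline conversion import (spreadsheet-style columns).
from datetime import datetime, timezone

# Example values from the article; calibrate these to your own revenue data.
STAGE_VALUES = {
    "ebook": 5,
    "webinar": 20,
    "demo": 200,
    "sql": 600,
    "opportunity": 1500,
}

def offline_conversion_row(gclid: str, stage: str, currency: str = "USD") -> dict:
    """Return one import row: click ID, conversion name, time, value, currency."""
    if stage not in STAGE_VALUES:
        raise ValueError(f"unknown stage: {stage}")
    return {
        "Google Click ID": gclid,
        "Conversion Name": stage.upper(),
        "Conversion Time": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S%z"),
        "Conversion Value": STAGE_VALUES[stage],
        "Conversion Currency": currency,
    }

row = offline_conversion_row("Cj0KCQexample", "sql")
print(row["Conversion Value"])  # 600
```

The point of the structure is that bidding sees a 120x gap between an ebook and an SQL, so it stops chasing cheap form fills.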

The exclusions matter as much as the signals:

  • Exclude current customers via Customer Match so I do not pay for clicks I can reach through lifecycle channels
  • Exclude careers traffic by building an audience around careers pages and excluding it at the campaign level
  • Exclude low-value geos or industries and focus location targeting on the ICP I can serve
  • Reduce low-intent contexts using account-level content suitability controls; I cannot manage placement-by-placement in Performance Max, so I set brand safety categories conservatively
  • Block job-seeker and vendor noise with custom segments built from job board themes, competitor keywords, and vendor domains

When I do this well, junk leads drop, and cost per SQL starts to mean something. I also make sure my consent and privacy practices are in place before sending any first-party data.

Smart Bidding guardrails that matter

Smart Bidding powers Performance Max. I pick an objective and a bid strategy; the system adjusts bids and placements to hit that goal. In B2B, two strategies dominate:

  • Target CPA for a stable cost per action when I lack reliable conversion values
  • Target ROAS for value-based bidding once I have values assigned to high-intent actions, SQLs, and opportunities

What keeps these strategies honest:

  • Data volume: I aim for 30 to 50+ conversions per month at the optimization level (a threshold Google commonly cites for automation). If I only have 20 SQLs a month, I start by optimizing to high-intent forms with strong value weights, then graduate to SQL when volume supports it.
  • Conversion lag: If my average lag from click to SQL is 10 days, I give the system time before judging performance or shifting targets.
  • Seasonality adjustments: When I expect short, sharp spikes (a quarterly event), I use seasonality adjustments so Smart Bidding anticipates higher conversion rates. I remove the adjustment as soon as the spike ends.
  • Micro-to-macro mapping: I track meaningful micro steps (calendar views, pricing views, proposal requests) with small values, and reserve big values for SQLs/opportunities. This keeps learning alive when big events are sparse.
  • Conversion value rules: I raise values for high-LTV segments (for example, enterprise domains or high-close-rate regions) and reduce values for segments that rarely buy. That steers bidding toward revenue, not just count.
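
The value-rule logic above can be sketched as segment multipliers on a base value. Google Ads applies these rules natively at bid time; this sketch only illustrates the arithmetic, and the multipliers are my illustrative assumptions, not platform defaults.

```python
# Hypothetical sketch of conversion value rules: raise values for high-LTV
# segments, lower them for segments that rarely buy. Multipliers are examples.
VALUE_RULES = {
    "enterprise_domain": 1.5,   # high-close-rate enterprise accounts
    "high_close_region": 1.25,  # regions with strong win rates
    "low_fit_industry": 0.5,    # segments that rarely buy
}

def adjusted_value(base_value: float, segments: list[str]) -> float:
    """Apply every matching rule multiplicatively to the base conversion value."""
    value = base_value
    for segment in segments:
        value *= VALUE_RULES.get(segment, 1.0)
    return round(value, 2)

print(adjusted_value(600, ["enterprise_domain"]))  # 900.0
```

A 600-dollar SQL from an enterprise domain is treated as 900 dollars, so the bidder pays up for the segments that actually close.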

The guardrails I control:

  • Reasonable targets: I set tCPA or tROAS based on trailing data and tighten gradually. Swinging targets week to week confuses the model.
  • Budget pacing: I keep daily budgets steady. Large swings force relearning.
  • Clean data: I deduplicate conversions across Google Ads and analytics, ensure Enhanced Conversions are implemented correctly, and use data-driven attribution where eligible so value is credited realistically across channels.


Audience signals and exclusions I rely on

Audience signals tell Performance Max where to start. They do not box it in, but they tilt the table in my favor.

Signals that consistently help:

  • Customer Match built from closed-won deals and high-value accounts
  • Remarketing lists split by ICP pages and high-intent templates (pricing, demo, case studies)
  • Custom segments using competitor keywords, relevant URLs (analyst reports, review sites), and problem statements that buying committees search
  • Creative that speaks to job functions and seniority; I let signals steer rather than using strict filters the system will ignore

Exclusions I keep current:

  • Current clients via Customer Match
  • Careers page visitors
  • Low-value geos or industries my sales team does not serve
  • Devices or time windows that never convert

I also rotate fresh creative for key segments every four to six weeks. That keeps engagement up and gives the system new combinations to test.

A simple, durable account structure

Structure and creative do a lot of the heavy lifting. I keep it simple, but intentional.

How I structure:

  • Campaigns by service line or ICP when budget allows, so budget control and reporting roll up neatly to revenue
  • If budget is tight, one campaign with two to four asset groups, each mapped to a distinct offer or persona

How I build asset groups:

  • One core offer per asset group (for example, technical compliance audit for CTOs, RevOps assessment for COOs)
  • Creative sets for decision makers with plain language about outcomes, risks, and proof points; I include short video, square images, and clear headlines tied to the value proposition
  • Final URL expansion with a ruleset: I allow the system to test related pages but exclude low-intent pages (like blog posts) and keep traffic on high-intent templates (demo, pricing, case studies)

How I protect Search:

  • I add brand exclusions so Performance Max does not soak up brand traffic; exact-match Search handles that
  • I maintain account-level negatives for support, jobs, and student queries tied to my brand

How I size budgets and test incrementality:

  • I fund Performance Max enough to learn; a practical starting point is 30 to 50 percent of my non-brand Search budget when adding it for the first time
  • I run incremental tests using regional splits or campaign-level holdouts for 30 days to measure lift in SQLs and opportunities
  • I reinvest based on SQL and opportunity cost, not just form fill count
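
The holdout readout reduces to a simple relative-lift calculation. A minimal sketch, assuming you can count SQLs in the exposed and holdout regions over the same 30-day window; the numbers below are illustrative.

```python
# Hedged sketch of the 30-day holdout readout: relative lift in SQLs for the
# exposed group over the holdout baseline. Inputs are illustrative counts.
def incremental_lift(exposed_sqls: int, holdout_sqls: int) -> float:
    """Relative lift of the exposed group over the holdout baseline."""
    if holdout_sqls == 0:
        raise ValueError("holdout produced no SQLs; lift is undefined")
    return (exposed_sqls - holdout_sqls) / holdout_sqls

print(f"{incremental_lift(130, 100):.0%}")  # 30%
```

In practice I also normalize both groups by account count or population before comparing, so the split itself does not bias the read.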

With this structure in place, I get flexibility without chaos. I can add an asset group for a new offer, test a fresh ICP, and see if the downstream numbers move.

Conversion tracking that mirrors revenue reality

My conversion setup is the backbone of everything above. I keep it clean and realistic.

Implementation basics I rely on:

  • Install Enhanced Conversions for leads on forms and pass hashed email, phone, and name as documented by Google Ads
  • Capture GCLID and, where applicable, GBRAID/WBRAID in hidden fields on every form; store them in the CRM for matching
  • Set the primary conversion at the SQL or opportunity stage once volume allows; until then, use a high-intent lead as primary and mark lower-intent actions as secondary
  • Choose conversion windows that reflect reality (in B2B, a 60–90 day click window and a shorter view-through window often fit)
  • Deduplicate across Google Ads and analytics so only one platform records the conversion I bid to
  • Calibrate values quarterly; if ACV or close rates shift, I update values so bidding stays aligned with revenue
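
The Enhanced Conversions data prep boils down to normalize-then-hash. Google documents trimming and lowercasing email before SHA-256 hashing; the sketch below shows that shape, but follow the exact tag or API specification in production since field formats (for example, E.164 phone numbers) vary by field.

```python
# Sketch of first-party data prep for Enhanced Conversions: normalize the
# field, then send the SHA-256 hex digest instead of the raw value.
import hashlib

def normalize_email(email: str) -> str:
    """Trim whitespace and lowercase, per Google's documented email normalization."""
    return email.strip().lower()

def sha256_hex(value: str) -> str:
    """SHA-256 hex digest of a normalized field."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

print(sha256_hex(normalize_email("  Jane.Doe@Example.com ")))
```

Hashing happens client-side or server-side before anything leaves your stack, which is why consent and privacy review belong before implementation, not after.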

Making ROI tangible

Here is an illustration. Suppose Performance Max drives 400 leads in 60 days. Lead-to-SQL rate is 25 percent; SQL-to-closed-won is 20 percent. Average contract value is 30,000 dollars with 65 percent gross margin. Media spend is 60,000 dollars.

  • Leads to SQLs: 400 × 0.25 = 100 SQLs
  • SQLs to deals: 100 × 0.20 = 20 deals
  • Gross profit: 20 × 30,000 × 0.65 = 390,000 dollars
  • Media payback: 390,000 ÷ 60,000 = 6.5× payback on media

People costs still apply, but the math shows why value-based bidding is worth the setup. If I reduce lead count but increase SQLs and deals, I win.
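
The worked example above is easy to re-run with your own funnel rates. A minimal sketch of the same arithmetic:

```python
# The payback math from the example above, with the assumptions as parameters.
def media_payback(leads, lead_to_sql, sql_to_won, acv, gross_margin, spend):
    """Return (SQLs, deals, gross profit, media payback multiple)."""
    sqls = leads * lead_to_sql
    deals = sqls * sql_to_won
    gross_profit = deals * acv * gross_margin
    return sqls, deals, gross_profit, gross_profit / spend

sqls, deals, profit, payback = media_payback(400, 0.25, 0.20, 30_000, 0.65, 60_000)
print(sqls, deals, profit, payback)  # 100.0 20.0 390000.0 6.5
```

Plugging in a quality-over-volume scenario (fewer leads, higher lead-to-SQL rate) shows quickly whether the trade is worth it.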

Reporting cadence, seasonality, and expectations

A steady cadence builds trust and avoids knee-jerk decisions:

  • Weekly: review SQL volume, cost per SQL, and top search themes from the Insights page; adjust negatives and audience exclusions
  • Biweekly: analyze asset groups with CRM stage data; shift budget to segments generating SQLs and opportunities, not just clicks
  • Monthly: cohort analysis by campaign and geo; reassess values and tROAS/tCPA targets; confirm offline import health with spot checks
  • Quarterly: test a new asset group for a fresh offer or ICP; validate with a holdout and share the lift

Seasonality matters in B2B. If buyers slow down in late December, I lower spend slightly and remove any seasonality adjustments. If the fiscal year ends in June and buyers rush in May, I plan for higher budgets and fresh creative in late April. When I expect a short spike, I use Google’s seasonality adjustment tool briefly and turn it off as soon as the spike passes.

I do not switch everything to Performance Max at once. Keeping high-intent Search alongside Performance Max usually wins. Search captures demand from people who already know what to type. Performance Max finds likely buyers I would miss with keywords alone. Together, they raise the floor and the ceiling.

By month one, my focus is setup, patience, and fixing leaks. By month two, I lean into value and control. By month three, I expect steadier cost per SQL and a clearer impact on pipeline. If the data still looks messy, it is almost always a tracking or targeting issue, not a channel problem. I clean the plumbing, protect my brand, feed CRM truth back in, and let the system work while I work on deals.

Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e‑commerce businesses.