Etavrian
Independent AI Max Study Exposes Hidden Google Ads Tradeoffs Most Marketers Are Missing

Reviewed by Andrii Daniv · 11 min read · Mar 6, 2026

Independent analysis of Google Ads AI Max for Search is beginning to surface. A new dataset from SMEC (Smarter Ecommerce), summarized by Search Engine Journal, reviews more than 250 Search campaigns and provides rare, quantified performance data on how AI Max behaves in real accounts. This report distills that evidence and highlights where outcomes diverge from Google's own claims, with a focus on what matters for marketers managing budget, structure, and risk.

What SMEC’s Data Reveals About AI Max Performance
SMEC's dataset offers early performance benchmarks for Google Ads AI Max Search campaigns.

AI Max performance benchmarks from SMEC's Google Ads study

Executive snapshot

  • Median conversion value change with AI Max: +13% vs. baseline Search campaigns using standard automation [S1].
  • Median cost per acquisition (CPA) change: +16% higher than baseline, indicating weaker efficiency even as revenue rises [S1].
  • Query expansion mix under AI Max: 80.11% of impressions from Exact Match roots, 19.52% from Phrase Match, 0.38% from Broad Match [S1].
  • Return on ad spend (ROAS) volatility: campaign outcomes ranged from 42% above to 35% below baseline ROAS; only 22% of campaigns stayed close to original ROAS levels [S1].
  • Campaign overlap: roughly 1 in 6 advertisers used AI Max with Dynamic Search Ads (DSA), 1 in 4 with Performance Max, and almost 50% had all three live simultaneously [S1].

Bottom line for marketers: AI Max typically acts as a revenue growth layer that lifts conversion value but often raises CPA and produces highly uneven ROAS, so it requires tight structural controls and clear performance thresholds.

Method and source notes for the AI Max Google Ads study

The data used here comes from an analysis by Mike Ryan, Head of Ecommerce Insights at SMEC, as reported in detail by Search Engine Journal [S1]. Ryan examined Google Ads Search campaigns that adopted AI Max and compared their performance with existing setups to identify early patterns in how the product performs at scale.

Key known parameters from the article:

  • Scope: More than 250 Google Ads Search campaigns running AI Max [S1].
  • Platform: Google Ads Search, with AI Max layered on top of existing keyword structures and Smart Bidding.
  • Data volume: At least 1 million AI Max impressions for the keyword-match analysis [S1].
  • Verticals: The article emphasizes online retail use cases, but specific industries, geographies, and account sizes are not detailed [S1].
  • Campaign context: Many campaigns also ran DSA and Performance Max at the same time, which affects traffic allocation and learning [S1].

Methodological details not specified in the supplied text:

  • Exact date range of the analysis.
  • Whether results are based on strict A/B tests, pre/post comparisons, or observational comparisons across time.
  • How median effects were calculated (for example, per-campaign weighting vs. spend-weighted).
  • Any controls for seasonality, promotions, or macro-level demand shifts.

Limitations

Because the Search Engine Journal article is a summary rather than a full methods paper, there is incomplete visibility into experiment design and controls [S1]. The metrics should be treated as directional performance indicators rather than precise causal estimates across all advertisers.

Source IDs

  • [S1] Brooke Osmundson, "What SMEC's Data Reveals About AI Max Performance," Search Engine Journal, summarizing SMEC's internal analysis of more than 250 AI Max Search campaigns and quoting related Google statements.
  • [S2] SMEC's original AI Max analysis by Mike Ryan, which provides the full analysis and is referenced but not reproduced in full in [S1].

AI Max query expansion and keyword match behaviour

SMEC's dataset helps clarify how AI Max actually expands coverage relative to traditional keyword match types.

Observed query mix under AI Max [S1]:

  • Exact Match roots: 80.11% of AI Max impressions were attached to Exact Match keywords.
  • Phrase Match roots: 19.52% of impressions.
  • Broad Match roots: 0.38% of impressions.

In practice, AI Max behaves far less like a pure Broad Match expansion tool than many advertisers assume. It most often starts from Exact Match seeds and then extends to additional queries that Google's systems deem closely related or relevant based on intent signals [S1].

The article notes that AI Max frequently takes tightly defined keywords and broadens the set of queries considered valid matches, consistent with Google's shift toward intent-based matching [S1]. However, this process is largely opaque, and the study underscores that without active search term review, campaigns may begin serving for queries that sit outside the advertiser's original keyword strategy [S1].

Observed overlap with legacy Broad Match

SMEC also reported accounts where AI Max queries overlapped heavily with existing Broad Match coverage, including cases with 49% and 63% overlap with Broad Match queries [S1]. A major driver appears to be legacy Broad Match Modified (BMM) terms that were auto-converted to Broad Match but still behave closer to Phrase Match in practice [S1]. AI Max then extends those matches further, creating perceived duplication.

AI Max impact on conversion value, CPA, and ROAS

Conversion value and cost efficiency

Across the 250+ campaigns studied:

  • Median conversion value uplift from AI Max: +13% relative to prior Search performance [S1].
  • Median CPA change: +16%, indicating that incremental conversions generally came at higher cost per action [S1].

Google's own communications for AI Max highlight an expected ~14% uplift in conversions or conversion value at similar efficiency levels for non-retail advertisers [S1]. SMEC's observed median conversion value gain of 13% is close to that claim, but the 16% CPA increase indicates that efficiency did not remain flat in this dataset [S1].

The article cites commentary from Google Ads Liaison Ginny Marvin that incremental volume typically follows a law of diminishing returns: once high-intent queries are fully covered, extra conversions tend to originate from less efficient or less predictable queries [S1]. That framing is consistent with SMEC's observed pattern of higher CPAs for incremental conversions.
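The tension between those two medians can be made concrete with a back-of-envelope calculation. The sketch below is hypothetical: it assumes both medians apply to one imaginary campaign with constant value per conversion, which no single campaign in the dataset necessarily matches.

```python
# Back-of-envelope check, hypothetical: apply SMEC's two medians (+13%
# conversion value, +16% CPA) to a single imaginary campaign with a
# constant value per conversion.

def aimax_scenario(base_cost, base_value, value_uplift=0.13, cpa_change=0.16):
    """Return implied (cost, value, relative ROAS change) under the shifts."""
    new_value = base_value * (1 + value_uplift)
    # With value per conversion constant, conversions scale with value,
    # so cost scales with both the conversion count and the CPA change.
    new_cost = base_cost * (1 + value_uplift) * (1 + cpa_change)
    roas_change = (new_value / new_cost) / (base_value / base_cost) - 1
    return new_cost, new_value, roas_change

cost, value, roas_delta = aimax_scenario(base_cost=10_000, base_value=40_000)
print(f"cost: {cost:,.0f}  value: {value:,.0f}  ROAS change: {roas_delta:+.1%}")
```

Under these simplified assumptions, paying 16% more per conversion while value per conversion stays flat implies ROAS falls by roughly 14%; actual per-campaign outcomes in the dataset varied far more widely.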

ROAS volatility across accounts

While the median ROAS effect appears roughly neutral, the spread across individual campaigns is large:

  • Some campaigns saw ROAS 42% above their baseline [S1].
  • Others saw ROAS 35% below baseline [S1].
  • Only 22% of campaigns landed near their original ROAS targets [S1].
  • The remaining 78% either overperformed or underperformed substantially [S1].

This dispersion signals that AI Max outcomes are highly dependent on existing account structure, keyword coverage, conversion tracking quality, and broader configuration choices.
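To see where your own account sits in this dispersion, you can bucket campaigns by ROAS deviation from their pre-AI-Max baseline. A minimal sketch, assuming a ±10% "near baseline" band; the study does not state the threshold it used:

```python
# Hypothetical helper for reproducing SMEC-style dispersion buckets on your
# own data; the +/-10% "near baseline" band is an assumption, not a
# threshold stated in the study.

def roas_dispersion(pairs, band=0.10):
    """pairs: iterable of (baseline_roas, current_roas). Returns bucket counts."""
    buckets = {"above": 0, "near": 0, "below": 0}
    for baseline, current in pairs:
        delta = current / baseline - 1
        if delta > band:
            buckets["above"] += 1
        elif delta < -band:
            buckets["below"] += 1
        else:
            buckets["near"] += 1
    return buckets

# Three invented campaigns: one well above, one near, one well below baseline.
print(roas_dispersion([(4.0, 5.7), (3.0, 3.1), (5.0, 3.2)]))
```

If far fewer than a fifth of your campaigns land in the "near" bucket, your account is seeing even more volatility than the study's median picture suggests.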

Campaign overlap between AI Max, DSA, and Performance Max

SMEC's analysis highlights how AI Max is being deployed inside actual Google Ads accounts, and how that setup affects signal quality and reporting clarity.

Observed campaign combinations [S1]:

  • Roughly 1 in 6 advertisers used AI Max together with Dynamic Search Ads.
  • Roughly 1 in 4 used AI Max and Performance Max at the same time.
  • Nearly 50% of accounts ran AI Max, DSA, and Performance Max concurrently.

All three campaign types are designed to extend reach beyond explicit keyword lists using automation and broad query matching [S1]. When they run in parallel without clear boundaries, they can:

  • Compete for the same or similar queries.
  • Split conversion data across multiple automated systems.
  • Obscure which campaign types are truly driving incremental conversions vs. cannibalizing each other.

Google's official position, as reported in the article, is that advertisers should focus on business outcomes and let ad rank mechanics determine which campaign wins individual auctions [S1]. In practice, SMEC's data suggests this approach often decreases visibility into where performance is coming from and can make Smart Bidding learning patterns harder to interpret [S1].

This overlap is especially relevant for online retailers that already rely heavily on Performance Max for Shopping and remarketing; adding AI Max without rethinking structure risks further fragmentation of demand capture.

Interpretation and implications for AI Max strategy

This section is interpretation based on the data above; it mixes Likely and Tentative conclusions. All factual numbers refer back to [S1].

1. AI Max behaves as a volume expansion layer, not an efficiency engine (Likely)

The combination of +13% median conversion value and +16% median CPA strongly indicates that AI Max primarily adds incremental volume at weaker unit economics, instead of improving the efficiency of existing traffic [S1]. For marketers, this suggests AI Max is closer to a growth lever than a cost-saving tool. Budget allocations should be judged on incremental profit and payback, not on blended CPA alone.

2. ROAS outcomes are highly account-specific (Likely)

The wide ROAS spread (from -35% to +42% vs. baseline, with only 22% near target) points to strong dependence on account hygiene, conversion tracking quality, and existing keyword coverage [S1]. Well-structured accounts with clean data and defined value rules are more likely to supply the signals AI Max needs to find profitable expansion.

3. Exact Match remains the primary control surface (Likely)

With 80.11% of AI Max impressions connected to Exact Match roots, existing Exact Match portfolios still matter greatly [S1]. Advertisers who maintain curated Exact lists effectively set the starting point for AI Max expansion. This favors accounts that continue to audit and refine Exact keyword sets rather than defaulting to Broad Match everywhere.

4. Query monitoring and negative keyword management become more important, not less (Likely)

Because AI Max extends from Exact and Phrase roots into less transparent queries, active search term review remains necessary. Without this, accounts risk incremental volume coming from low-value or off-strategy searches. Applying negatives and category controls is a practical way to stabilize CPA and ROAS while allowing controlled growth.
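As one illustration of that review loop, a script can triage a search term report by flagging queries with meaningful spend and no conversions as negative-keyword candidates. The report rows, column layout, and cost floor below are invented for illustration:

```python
# Minimal triage sketch for the search-term review loop described above:
# flag queries with spend but no conversions as negative-keyword candidates.
# The report rows and the 25-unit cost floor are invented.

def negative_candidates(rows, min_cost=25.0):
    """rows: iterable of (query, cost, conversions). Returns flagged queries."""
    return [query for query, cost, conversions in rows
            if conversions == 0 and cost >= min_cost]

report = [
    ("running shoes sale", 120.0, 4),   # converting: keep
    ("free shoe images", 40.0, 0),      # spend, no conversions: flag
    ("shoe repair near me", 12.0, 0),   # below the cost floor: ignore for now
]
print(negative_candidates(report))
```

In practice the input would come from an exported search term report, and the cost floor should reflect your own CPA tolerance rather than the arbitrary value used here.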

5. Overlapping auto-expansion campaigns need clearer boundaries (Likely)

Running AI Max, DSA, and Performance Max simultaneously, as roughly half the accounts did, spreads signal across multiple automated systems that chase similar intent [S1]. Practical mitigations include:

  • Assigning distinct roles (for example, Performance Max for Shopping and remarketing, AI Max only for specific high-value Search categories).
  • Adjusting budgets and priorities so one expansion layer is primary for prospecting traffic.
  • Auditing search term overlap and consolidating where duplication is high.
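For the overlap audit in the last bullet, a simple starting point is the Jaccard index over each pair of campaigns' search-term sets. The term lists here are invented for illustration:

```python
# One way to quantify search-term overlap between two campaigns: a Jaccard
# index over their search-term sets. The term lists are invented.

def term_overlap(terms_a, terms_b):
    """Jaccard similarity between two search-term collections (0.0 to 1.0)."""
    a, b = set(terms_a), set(terms_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

ai_max_terms = {"running shoes", "trail shoes", "shoe sale"}
pmax_terms = {"running shoes", "shoe sale", "kids sneakers"}
print(f"overlap: {term_overlap(ai_max_terms, pmax_terms):.0%}")
```

A high score between two always-on expansion campaigns is a signal to consolidate or to carve out distinct roles, in line with the bullets above.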

6. Legacy Broad Match structures can distort AI Max results (Tentative)

The reported 49-63% query overlap between AI Max and Broad Match in some accounts, driven by old BMM terms, suggests that historical keyword configurations may cause AI Max to appear less incremental than it truly is [S1]. Cleaning up old match types and clarifying intent categories should make it easier to read AI Max performance.

7. Profit-based evaluation is safer than pure CPA targets (Likely)

Given higher median CPA and volatile ROAS, using contribution margin or gross profit per conversion as the yardstick for AI Max is safer than focusing strictly on CPA. This aligns with the reality that incremental conversions often cost more but can still be attractive if order values or lifetime value justify the spend.
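That profit-first yardstick can be sketched in a few lines: value the conversions at contribution margin, subtract ad cost, and judge the incremental batch on what remains. The order value, margin rate, and CPA figures below are hypothetical:

```python
# Sketch of the profit-first yardstick described above: value conversions at
# contribution margin, subtract ad spend, and judge the incremental batch on
# what remains. Order value, margin rate, and CPA figures are hypothetical.

def incremental_profit(conversions, avg_order_value, margin_rate, cpa):
    """Contribution profit of a batch of conversions after ad spend."""
    contribution = conversions * avg_order_value * margin_rate
    ad_cost = conversions * cpa
    return contribution - ad_cost

# A CPA inflated by 16% (an assumed 50 rising to 58) against a 40% margin
# on an assumed 180 average order value.
profit = incremental_profit(conversions=100, avg_order_value=180.0,
                            margin_rate=0.40, cpa=50.0 * 1.16)
print(f"incremental profit: {profit:,.0f}")
```

In this example the inflated CPA still leaves positive contribution because margin per order comfortably exceeds it; with a thinner margin the same CPA uplift would make the incremental volume loss-making.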

Contradictions, gaps, and data limitations on AI Max

1. Difference between Google's efficiency claim and observed CPA

Google's communication, as relayed in [S1], suggests around 14% more conversions or conversion value at similar efficiency. SMEC's dataset shows a near match on value uplift (13%) but a 16% rise in CPA [S1]. This tension may reflect:

  • Differences between the advertiser mix in Google's internal testing and SMEC's client base (which appears skewed to online retail).
  • Different methodologies (for example, controlled split tests vs. more organic account-level rollouts).
  • Possible unreported factors such as seasonality or concurrent bid strategy changes.

Without Google's full test design and SMEC's complete methods, the gap remains unresolved.

2. Unclear generalizability beyond SMEC's client set

The campaigns in SMEC's study are drawn from its own customer base, which likely features specific budgets, verticals, and maturity levels. Performance for small advertisers, non-retail B2B, or lead-generation-heavy accounts may differ materially, but the article does not provide breakdowns by industry or size [S1].

3. Limited transparency on attribution and measurement

The summary does not specify whether all accounts used data-driven attribution, how offline conversions were handled, or whether value rules were applied. Changes in attribution settings during the test window could significantly affect observed ROAS and CPA without any true change in user behaviour.

4. Lack of time-series context

The report does not provide details on ramp-up time for AI Max, or whether results improve, deteriorate, or stabilize over several weeks. Given the learning-period dynamics of Smart Bidding and new campaign types, this is a material gap for planning.

5. No quantified view of search term quality

The study reports expansion patterns (for example, the dominance of Exact Match roots) and overlap with Broad Match, but it does not quantify whether new queries are higher or lower intent, or how much of the added volume comes from branded vs. non-branded terms. That data would materially change how marketers interpret the +13% conversion value uplift.

Data appendix: key AI Max metrics from SMEC dataset

Table 1: High-level performance shifts under AI Max [S1]

Metric Observed change vs. baseline
Median conversion value +13%
Median CPA +16%
ROAS range across campaigns -35% to +42%
Campaigns near original ROAS target 22%
Campaigns with large ROAS deviation 78%

Table 2: AI Max query expansion by match type (impressions) [S1]

Root match type Share of AI Max impressions
Exact Match 80.11%
Phrase Match 19.52%
Broad Match 0.38%

Table 3: Campaign type overlap patterns [S1]

Campaign setup Approximate share of advertisers
AI Max + Dynamic Search Ads 1 in 6
AI Max + Performance Max 1 in 4
AI Max + DSA + Performance Max (all three live) Nearly 50%

These figures offer a concise reference when calibrating expectations for AI Max tests and assessing whether results in your own account fall within, above, or below the performance range identified in SMEC's analysis.

Author: Etavrian AI
Etavrian AI is developed by Andrii Daniv to produce and optimize content for the etavrian.com website.
Reviewed by: Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e-commerce businesses.