Google AI Search Is Squeezing Expert Sites - And Chunking Is Not The Real Problem

Reviewed by Andrii Daniv · 10 min read · Jan 13, 2026

Google AI search results and garbage SERPs

Google's AI-driven answer layer is less about "chunking" content and more about changing the economics of search traffic. Drawing on SEO analyst Roger Montti's critique of "garbage AI SERPs" (he publishes at Martinibuster.com) and Google's own GEO/AI guidance, this analysis examines how AI search affects expert publishers and query value, and where marketers should reallocate effort.

Thesis: The key question is not whether SEOs should "chunk" content for LLMs, but whether Google's AI search results structurally depress traffic to expert sites while amplifying weaker sources - and what that means for organic and paid strategies.

Google's public line (via Danny Sullivan and John Mueller) is: write for humans, avoid mechanical chunking, and trust that systems improve over time. [S1] Montti's article argues that this misses the real problem: AI long-form answers and query fan-out are throttling referral volume while surfacing questionable pages (Medium posts, LinkedIn articles, retailer blogs) instead of subject-matter experts.

Key takeaways

For marketers, the main implications are:

  • AI answers compress whole topic clusters into one screen: expect fewer total organic clicks per topic, especially for how-to, comparison, and style/advice queries, even if rankings look "okay" in tools.
  • "Write for humans" remains valid, but format now matters at the query-cluster level: pages that map cleanly to 3-5 related questions are more likely to be harvested by AI modules than narrowly optimized single-keyword posts.
  • Authority is not guaranteed to win in AI surfaces: weaker sources can be cited if they rank for sub-queries or match conversational tone. Brands need to build clear topical footprints and brand demand, not just classic E-E-A-T signals.
  • Paid search and Shopping become safety valves: as more informational clicks leak to AI answers, competitive pressure and CPCs are likely to rise on high-intent terms where ads still sit above or beside AI modules.
  • Strategic focus should move from tweaking page-level SEO tricks (like chunking) to portfolio-level planning: diversify traffic sources, invest in direct and brand demand, and treat AI search as a partially zero-click channel when modeling ROI.

Situation snapshot

The current discussion is triggered by:

  • A Search Engine Journal article by Roger Montti summarizing comments from Google's Danny Sullivan and John Mueller on SEO for LLM-based search and "chunking" content. [S1]
  • Sullivan's statement that Google does not want publishers splitting pages into artificial "bite-sized chunks" for LLM consumption and prefers content created for human readers. [S1]
  • Google's historical framing of "next generation search" as moving from string matching to entity understanding (Knowledge Graph, "things not strings"). [S2]
  • Montti's critique that the real issue is not chunking but query fan-out and AI answer modules crowding out expert sources, while Google's AI mode surfaces low-authority examples such as:
    • An abandoned Medium blog with broken images.
    • An article posted on LinkedIn.
    • A sneaker retailer's blog post on sweatshirts.
  • Montti's observation that high-quality expert coverage (for example, GQ, New York Times) appears only under More → News, making it harder for users to reach. [S1]

Undisputed facts from the article and prior Google communications:

  • Google's AI answer surfaces provide multi-paragraph responses that address several related questions per query.
  • Google representatives state that the underlying ranking infrastructure remains Google Search; LLMs change answer formatting and composition, not the basic indexing pipeline.
  • Google publicly discourages tactics that create separate versions of content for LLMs versus conventional search.

Breakdown and mechanics

At a systems level, three forces matter more than "chunking."

1. Query fan-out and answer consolidation

Historically, one specific query produced one SERP, one snippet, and one main click opportunity.

In the AI pattern, one broader query can be broken into a cluster of related sub-questions, with the LLM synthesizing a single long answer that tries to cover all of them.

In simple terms:

User query
→ Internal expansion into 3-5 sub-questions
→ Retrieval of multiple documents per sub-question
→ LLM synthesis into one answer plus a handful of citations.

Working model: if a topic previously generated four separate mid-tail queries (each with its own SERP, impressions, and clicks), AI can collapse many of those into one query plus one AI pane. That single pane may link to zero to five sites, so total potential traffic per topic shrinks.
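To make that working model concrete, here is a minimal Python sketch of the traffic arithmetic. Every number in it (clicks per SERP, pane views, click rate, slot count) is an assumed illustration, not measured data.

```python
# Illustrative only: hypothetical numbers, not Google data.
# Compares expected clicks when four mid-tail queries each have
# their own SERP versus one AI pane covering the whole topic.

classic_queries = 4      # mid-tail queries per topic (assumed)
clicks_per_serp = 120    # avg monthly clicks per classic SERP (assumed)
ai_citation_slots = 5    # links surfaced in the AI pane (0-5 per the text)
ai_click_rate = 0.08     # share of AI-pane viewers who click any citation (assumed)
ai_pane_views = 600      # monthly views of the consolidated AI answer (assumed)

classic_clicks = classic_queries * clicks_per_serp
ai_clicks_total = ai_pane_views * ai_click_rate           # clicks across all slots
ai_clicks_per_site = ai_clicks_total / ai_citation_slots  # if you win one slot

print(f"Classic topic clicks/month: {classic_clicks}")              # 480
print(f"AI pane clicks/month (all sites): {ai_clicks_total:.0f}")   # 48
print(f"Your clicks if cited once: {ai_clicks_per_site:.0f}")       # ~10
```

Even with generous assumptions, the per-topic click pool shrinks by an order of magnitude once the cluster collapses into a single pane.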

2. Source selection for AI answers

Facts:

  • Google has not published exact rules for which URLs are cited in AI answers.
  • Official statements stress similar inputs to classic ranking: relevance, quality, and helpfulness. [S3]

Evidence from Montti's example suggests that AI mode can favor:

  • Platforms with strong generic authority (for example, Medium, LinkedIn).
  • Retailers or niche sites that rank for a narrow fashion term, even if they lack broader editorial authority.

Well-known publishers can be pushed into vertical tabs (News) that are only visible after extra clicks, while AI surfaces lean on more "web-like" sources.

Speculation (clearly marked):

  • The AI module likely optimizes for coverage of sub-questions, semantic diversity, and safety, not just brand prominence.
  • This can favor generic platforms and commercial sites whose pages happen to align with conversational phrasing, even if they are weaker on subject-matter depth.

3. Long-form answers vs. traffic incentives

Google's business model rewards higher engagement on search pages while still driving some traffic, especially via ads.

Long-form AI answers keep users on the SERP longer and can reduce the need to click for basic advice or styling inspiration.

If the AI pane answers "how to style a sweatshirt" plus several follow-ups, then:

  • Fewer follow-up searches are issued.
  • Fewer impressions are created for related long-tail combinations.
  • Expert pages that once captured those long-tail searches now compete for one or two links in the AI module.

The net effect: AI search shifts value from many long-tail query visits to a smaller set of aggregated answer impressions, with limited, sometimes low-quality citation slots.

Impact assessment

Organic search and content strategy

Direction: Negative for many informational queries; moderate-to-high scale over time.

Winners:

  • Big platforms (LinkedIn, Medium, YouTube, Reddit-like communities) that often rank for conversational queries and are "safe" defaults.
  • Large retailers and marketplaces whose content mixes product and light advice.

Losers:

  • Expert publishers, magazines, and niche blogs whose strength is depth, not sheer volume of indexed URLs.
  • Affiliate and review sites whose content overlaps with AI's "conversational explanation" space.

Practical implications:

Topic-level planning, not page-level tricks: Chunking for LLMs is a distraction. The more important move is to structure your site so that:

  • Each key topic has a strong hub page that can reasonably answer the main query plus common follow-ups.
  • Supporting pages cover specialized angles that AI is unlikely to answer fully (original data, strong opinion, tools, calculators, downloads).

Format differentiation: Focus on formats AI cannot easily summarize without losing value:

  • Proprietary data and benchmarks (for example, your own fashion fit tests, user surveys).
  • Interactive experiences (configurators, quizzes, calculators).
  • High-signal visuals and video, which AI often paraphrases poorly.

Brand search and direct traffic: As generic queries become more zero-click, brand-driven searches and direct visits grow in importance.

  • Track the share of organic traffic from branded queries versus generic queries over time (a minimal classification sketch follows this list).
  • Treat loss in generic query volume as partly structural, not just "algorithm volatility."
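A minimal sketch of that branded-versus-generic split, assuming a hypothetical brand ("acme") and a hand-typed query log in place of a real Search Console export:

```python
import re

# Hypothetical brand terms and query log; substitute your own
# exports (e.g., a Search Console query report). Illustrative only.
BRAND_PATTERN = re.compile(r"\b(acme|acmewear)\b", re.IGNORECASE)

queries = {
    "acme sweatshirt sizing": 310,   # clicks per query (assumed)
    "how to style a sweatshirt": 95,
    "best sweatshirts 2026": 60,
    "acmewear returns": 120,
}

branded = sum(c for q, c in queries.items() if BRAND_PATTERN.search(q))
generic = sum(c for q, c in queries.items() if not BRAND_PATTERN.search(q))
share = branded / (branded + generic)

print(f"Branded share of organic clicks: {share:.0%}")  # ~74%
```

A rising branded share with flat total clicks is consistent with generic discovery queries leaking to AI answers rather than with a penalty or ranking loss.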

Paid search and Shopping

Direction: Mixed; more dependence on paid, higher competition on high-intent queries.

  • More value in lower-funnel keywords: AI answers mainly absorb upper and mid-funnel discovery queries. High-intent terms ("buy [product]", "best price on [product]") remain ad-heavy and closer to classic SERPs.
  • CPC pressure: As organic exposure for discovery shrinks, more brands can push budget into paid for those same users. Expect higher competition on product-adjacent queries where AI still appears but ads are visible above or beside the AI module. Monitor impression share shifts for branded terms as competitors hedge against their own generic traffic losses.
  • Creative and landing pages: Ad copy should assume some users already saw a summary in AI. Focus on unique value not present in generic summaries (exclusive collections, guarantees, service level, real styling photography).

Measurement and operations

Direction: Reporting complexity increases; need for new KPIs.

  • Metrics to track:
    • Total impressions versus clicks for key informational queries in Search Console.
    • Changes in click-through rate where your average position is stable but traffic is dropping (a likely sign of AI module cannibalization); see the sketch after this list.
    • Share of traffic by channel: organic, paid, direct, social, referral. Treat organic informational traffic as a partially declining asset in forecasts.
  • Content operations:
    • Build content calendars around topic clusters and "jobs to be done" instead of standalone keywords.
    • Accept that some pieces primarily feed AI visibility and brand familiarity (being cited or mentioned) rather than direct click volume, and model them as such.
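Below is a minimal sketch of that stable-position, falling-CTR check, using pandas and a hand-typed stand-in for a Search Console export. The queries, thresholds, and numbers are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical Search Console export: two periods per query.
# Column names mirror a typical GSC CSV; values are assumed.
df = pd.DataFrame({
    "query":       ["how to style a sweatshirt"] * 2 + ["sweatshirt fit guide"] * 2,
    "period":      ["pre", "post", "pre", "post"],
    "impressions": [10000, 11000, 4000, 4100],
    "clicks":      [800, 430, 300, 290],
    "position":    [3.1, 3.0, 5.2, 5.3],
})

pivot = df.pivot(index="query", columns="period")
ctr_pre = pivot["clicks"]["pre"] / pivot["impressions"]["pre"]
ctr_post = pivot["clicks"]["post"] / pivot["impressions"]["post"]
pos_drift = (pivot["position"]["post"] - pivot["position"]["pre"]).abs()

# Flag queries where position is stable (< 0.5 drift) but CTR fell
# more than 25%: a likely sign of an AI module absorbing clicks.
suspects = ctr_post[(pos_drift < 0.5) & (ctr_post < 0.75 * ctr_pre)]
print(suspects)
```

The thresholds (0.5 positions, 25% CTR loss) are starting points to tune against your own baseline volatility, not established benchmarks.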

Scenarios and probabilities

Base case - AI answers stay central, quality improves, traffic share stabilizes (likely)

  • Google keeps AI answer modules as standard for broad informational queries but continues tuning quality and safety.
  • More authoritative publishers appear in citations over time, yet click-through remains lower than pre-AI.
  • Marketers treat AI search as a semi-zero-click channel and rebalance toward brand building, differentiated formats, and paid reinforcement.

Upside - Google relaxes AI prominence or pushes it behind a tab (possible)

  • Due to user or regulator pressure, Google scales back default AI modules on some queries, or moves full AI experiences behind a click (for example, a dedicated "AI" tab), leaving more classic SERPs as the default.
  • Traffic to expert sites partially recovers on high-intent informational queries.
  • Investments in content depth and authority yield better direct returns again, not just indirect AI citations.

Downside - deeper Gemini-like integration, more zero-click behavior (edge)

  • Google rolls out even more Gemini-style conversational experiences as the default for many categories, pushing web results further down.
  • AI modules absorb not only "how to style a sweatshirt" but also semi-commercial journeys ("which sweatshirt to buy for...") with built-in product recommendations.
  • Organic discovery traffic for advice-heavy brands drops sharply; paid and non-search channels become mandatory to maintain volume.

Approximate probabilities:

  • Base: ~60%
  • Upside: ~20%
  • Downside: ~20%

These are directional estimates, not forecasts based on hard usage data.
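One way to use these directional probabilities in planning is a simple expected-value weighting of organic traffic. The sketch below does that with assumed multipliers per scenario; it inherits all the uncertainty of the estimates above.

```python
# Directional expected-value sketch using the probabilities above.
# Traffic multipliers are assumptions for illustration, not forecasts.
scenarios = {
    "base":     {"p": 0.60, "organic_multiplier": 0.80},  # gradual erosion
    "upside":   {"p": 0.20, "organic_multiplier": 0.95},  # partial recovery
    "downside": {"p": 0.20, "organic_multiplier": 0.55},  # sharp decline
}

current_organic_visits = 100_000  # monthly informational visits (assumed)

expected = sum(s["p"] * s["organic_multiplier"] for s in scenarios.values())
print(f"Probability-weighted organic multiplier: {expected:.2f}")        # 0.78
print(f"Expected monthly visits: {current_organic_visits * expected:,.0f}")
```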

Risks, unknowns, limitations

  • Sample bias: Montti's example uses one styling query. While many SEOs report similar patterns, this is still anecdotal, not a full dataset. [S1]
  • Opaque algorithms: Google does not disclose detailed AI citation logic, so explanations of why Medium, LinkedIn, or retailer pages are chosen remain partly speculative.
  • Rapid product changes: AI search surfaces are evolving; rollout patterns, UI placement, and ranking adjustments can change traffic dynamics faster than historical SEO shifts.
  • Measurement gaps: Search Console currently does not cleanly separate impressions from AI modules versus classic results. This limits the ability to attribute traffic loss precisely to AI answers.
  • Temporal constraint: This analysis relies on the article provided and public information available at the time of writing, so later Google changes are not reflected.

Falsifiers that would change the assessment

  • Strong, broad-based data showing that AI answer surfaces consistently increase click volume to expert publishers versus traditional SERPs.
  • Clear evidence that, across many queries, AI citations are systematically skewed toward high-expertise sites rather than generic platforms.
  • Product moves that relegate AI answers to secondary tabs while keeping classic web results dominant.

Sources

  • [S1] Roger Montti, 2026, article - "Google Downplays GEO - But Let's Talk About Garbage AI SERPs", Search Engine Journal.
  • [S2] Google, 2012, blog post - "Introducing the Knowledge Graph: things, not strings."
  • [S3] Google Search Help, various dates, support docs - "How Search works" and "About AI Overviews in Search."
Author
Etavrian AI, developed by Andrii Daniv to produce and optimize content for the etavrian.com website.

Reviewed by
Andrii Daniv, founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e-commerce businesses.