One fake Google core update exposed a much bigger search problem

Reviewed by: Andrii Daniv
11 min read · Mar 18, 2026

Illustration: a minimalist graphic of an AI answer influencing search results, a fake "update" toggle, and an analytics shield.

A recent SEO case study shows that a single, fabricated "Google core update" can rank on page one and be echoed by AI Overviews and third-party sites with almost no resistance from current systems. This has clear implications for how marketers treat search and AI summaries as information sources, especially around time-sensitive platform changes.

SEO test on Google misinformation: how a fake "March 2026 Core Update" ranked and spread

This report analyses an experiment by SEO practitioner Jon Goodey, who allowed an AI-generated hallucination about a fictional "Google March 2026 Core Update" to remain in a LinkedIn newsletter and then tracked how it propagated across Google Search, AI Overviews, and industry publications. [S1][S2]

The case shows that:

  • A single, unverified LinkedIn article can rank on page one for a timely search term about Google algorithm changes.
  • Google's AI Overview can repeat fabricated details as fact when based on ranking content.
  • Industry and tech sites may repeat such claims without checking primary sources.
  • Google continues to resist structural fact-checking requirements in its ranking systems, based on its correspondence with EU regulators. [S3]

The goal of this report is to surface what is known from this test and related primary statements, separate observation from interpretation, and outline practical implications for SEO, PPC, and content strategy.

Executive snapshot of the misinformation SEO test

This section summarises the main data points from the experiment and related policy statements.

  • One AI hallucination. Goodey's AI workflow introduced a nonexistent "Google March 2026 Core Update" into a LinkedIn newsletter. He usually runs human quality checks but chose to keep this error as a test of how misinformation travels. [S1]
  • Page-one ranking. The LinkedIn article ranked on the first page of Google for "Google March update 2026," visible to anyone searching for recent Google algorithm changes. [S1]
  • AI Overview repetition. Google's AI Overview surfaced the fabricated update and presented it as fact, summarising claims from the LinkedIn article. [S1]
  • Third-party amplification. Multiple websites, including at least one technology publication (TechBytes), published detailed articles treating the fake update as real, adding invented concepts such as "Gemini 4.0 Semantic Filter," "Zero Information Gain" classifications, and a "Discover 2.0 Engine." [S1][S2]
  • Limited reader verification. Goodey reports that only a few commenters questioned the story, suggesting that only a minority of readers checked external sources. [S1]
  • No systemic fact-checking in ranking. In a letter to the European Commission, Google's global affairs president Kent Walker stated that integrating fact-checking results into ranking "isn't appropriate or effective" and that Google would not commit to the EU's Disinformation Code of Practice requirements on this point. [S3]

Implication for marketers: Treat unverified search results and AI summaries about platform updates or technical topics as untrusted until cross-checked against primary, authoritative sources.

Method and source notes for the Google misinformation case study

What was measured

The experiment tracked the propagation of a single piece of deliberately false information, the "Google March 2026 Core Update," from a LinkedIn newsletter into:

  • Google's classic search rankings
  • Google's AI Overview
  • Third-party SEO and technology websites

It also noted observed user behaviour in the form of comments and challenges to the false claims. [S1][S2]

Primary and near-primary sources

  • [S1] Jon Goodey's subsequent LinkedIn post "I created a fake Google update and tracked where it went," describing:
    • His AI content workflow
    • The decision to keep the hallucinated update
    • Subsequent ranking and content propagation
    • Qualitative observations of audience reaction
  • [S2] Roger Montti's Search Engine Journal article "SEO Test Shows It's Trivial To Rank Misinformation On Google," which documents the experiment, quotes Goodey, and shows a screenshot of Google recommending a black-hat "Google stacking" tactic in search results.
  • [S3] An Axios news report citing a letter from Kent Walker (Google) to EU official Renate Nikolay, stating that Google would not integrate fact-checking results into ranking or commit to the EU Disinformation Code of Practice requirements for search and YouTube.

Methodology

  • Natural experiment / case study: one real-world content item tracked over time, with no controlled A/B test.
  • Qualitative, with limited quantitative data (no impressions, clicks, or rankings over time reported).

Key limitations

  • Single case in the SEO and tech niche; not a statistically representative sample.
  • Reliance on Goodey's reporting for timelines and audience behaviour.
  • No detailed visibility into Google's internal systems or ranking signals for this query.
  • The Axios report covers policy stance, not specific ranking outcomes.

Findings on how misinformation ranked in Google Search and AI Overviews

1. Creation and intentional retention of the fake update

Goodey used AI to draft a LinkedIn newsletter. The AI system introduced a hallucinated reference to a "Google March 2026 Core Update." [S1] His standard workflow includes human review to remove inaccuracies, but in this case he noticed the error and chose to keep it to observe what would happen if a fabricated update was published without correction. He framed this as an experiment in how misinformation spreads in the search and SEO ecosystem. [S1]

The newsletter was published on LinkedIn as a long-form article or newsletter post, with enough apparent technical detail to look credible to readers familiar with SEO topics. [S1][S2] At this stage, the only source of the misinformation was Goodey's own content.

2. Google Search ranking of the fabricated update

After publication, Goodey reports that his LinkedIn article began ranking on the first page of Google for "Google March update 2026." [S1] The query targeted a date-specific, news-style search term where there would be high user curiosity but, at that time, no genuine announcements or documentation from Google because the update did not exist. [S1][S2]

For an empty or low-competition query such as a future-dated "core update," the LinkedIn article may have been among the only semi-relevant documents available. Google surfaced it prominently, giving the impression that the claims it contained were grounded in reality. [S1] Montti notes that Google's general approach avoids fact-checking content in search, treating ranking as an exercise in relevance and signals of authority rather than truth verification. [S2]

3. AI Overview repeating misinformation as fact

Goodey reports that Google's AI Overview system pulled in the fabricated claims and presented them in the answer box as if they were factual. [S1] This means:

  • The generative AI model ingested one or more sources (including the LinkedIn article) that mentioned the fake update.
  • The model then produced a summary that reinforced the narrative of a real "March 2026 Core Update," without any disclaimer that the update was unconfirmed or disputed. [S1]

Because AI Overviews sit above organic listings and are visually prominent, this representation effectively upgraded a single LinkedIn post from one search result among many to a highlighted, answer-like statement. This is consistent with Montti's broader observation that using Google for SEO queries "is like playing a slot machine," with accuracy varying significantly and limited checking of the underlying claims. [S2]

4. Third-party SEO and tech sites echoing and embellishing the story

Goodey notes that multiple websites published articles describing the "March 2026 Core Update" as if it were a verified event. [S1] These included:

  • SEO-focused sites that treat core update coverage as a traffic driver and lead-generation tool.
  • At least one technology publication, TechBytes, whose article added its own invented technical elements such as "Gemini 4.0 Semantic Filter," a "Zero Information Gain" system, and a "Discover 2.0 Engine" focused on long technical narratives. [S1][S2]

These derivative articles did not just repeat Goodey's initial claims; they extended them with additional jargon and supposed mechanisms, lending further false authority to the original fabrication. [S1][S2] Montti points out that there is a history in parts of the SEO community of speculating about unconfirmed or nonexistent updates and using that speculation to attract attention and clients. [S2]

5. Limited user-side fact-checking

Goodey reports that only a few commenters on his content raised doubts or challenged the claims about the non-existent update. [S1] The majority either accepted the information or engaged with it without verifying it against primary sources such as:

  • Google's official Search Status Dashboard
  • Google Search Central blog posts
  • Statements from recognised Google search representatives

This pattern aligns with what is known from other contexts: when a story looks plausible and fits expectations, many readers do not routinely check outside references before sharing or acting on it. In this case, that behaviour helped the fake update circulate across social and search-driven channels. [S1][S2]
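
To make that cross-check concrete, here is a minimal Python sketch of the kind of gate a team could run before acting on a claimed update. It assumes the Search Status Dashboard exposes a machine-readable incidents feed at the URL shown, with an external_desc field per entry; verify the exact endpoint and schema before relying on it.

```python
import json
import urllib.request

# Assumed endpoint and schema for the Search Status Dashboard incidents feed;
# confirm both before using this check for anything that matters.
STATUS_FEED = "https://status.search.google.com/incidents.json"

def update_is_confirmed(claim: str) -> bool:
    """Return True only if every word of the claim appears in one of
    Google's own incident descriptions."""
    with urllib.request.urlopen(STATUS_FEED, timeout=10) as resp:
        incidents = json.load(resp)
    terms = claim.lower().split()
    return any(
        all(term in incident.get("external_desc", "").lower() for term in terms)
        for incident in incidents
    )

if __name__ == "__main__":
    claim = "March 2026 core update"
    if update_is_confirmed(claim):
        print(f"Confirmed by Google: {claim}")
    else:
        print(f"No primary-source confirmation for '{claim}' - treat as unverified.")
```

A check along these lines would have flagged the fabricated update immediately: no Google-published incident mentioned a March 2026 core update, because it never existed. [S1]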

6. Google's stance on fact-checking and existing examples of SEO misinformation in search

Montti references a screenshot where Google's search results highlight "Google stacking," a black-hat SEO tactic, in a way that effectively validates the approach for users who are searching for it. [S2] He uses this as an example of how search results can put questionable or policy-violating tactics in front of users, especially when they query such tactics directly.

The article also cites an Axios report describing a letter from Google's Kent Walker to EU official Renate Nikolay. In that letter, Walker states that:

  • Google does not view integrating fact-check results into search rankings and YouTube as appropriate or effective.
  • Google will not commit to the EU's Disinformation Code of Practice requirement to build fact-checking into ranking systems.
  • Google prefers its existing moderation tools and points to its handling of prior election cycles and features such as contextual notes on YouTube as evidence of its approach. [S3]

This policy stance helps explain why the ranking and AI systems treated the fake update like any other piece of content, focusing on relevance and authority signals rather than factual validation.

Interpretation and implications for SEO, PPC, and content decisions

Likely implications

  • Search and AI summaries are not reliability filters. Given Google's explicit rejection of integrated fact-checking and the observed behaviour in this case, it is likely that search and AI Overviews will continue to surface unverified claims that match new or low-competition queries, including in technical and high-impact domains. [S1][S2][S3]
  • Low-competition, news-style queries are high-risk. Terms such as "[platform] [month] update [year]" can be occupied by the first semi-plausible content published. For marketers, reacting to unverified "update" stories surfaced in search or AI Overviews is likely to lead to unnecessary strategy shifts or panic.
  • AI-assisted content workflows need hard validation gates. Goodey's workflow normally includes human review; the test shows what happens when that safeguard is intentionally bypassed. For teams using AI to draft blogs, landing pages, or thought-leadership content, consistent specialist review before publication is likely to be necessary to avoid propagating errors or hallucinations. [S1]
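
As a sketch of one such gate, the snippet below flags any "core update" mention in an AI draft that is not on an editor-maintained allowlist of updates confirmed through Google's own channels. The pattern and allowlist entries are illustrative assumptions, not a complete safeguard against hallucinations.

```python
import re

# Editor-maintained allowlist of updates confirmed via Google's own channels
# (Search Status Dashboard, Search Central blog). Entries are illustrative.
CONFIRMED_UPDATES = {
    "march 2024 core update",
    "august 2024 core update",
}

# Matches phrases like "Google March 2026 core update" or "March 2026 core update".
UPDATE_PATTERN = re.compile(r"\b(?:google\s+)?\w+\s+\d{4}\s+core\s+update\b", re.IGNORECASE)

def unverified_update_claims(draft: str) -> list[str]:
    """Return every 'core update' mention in the draft that is not on the allowlist."""
    claims = {m.group(0).lower() for m in UPDATE_PATTERN.finditer(draft)}
    return sorted(c for c in claims if c.removeprefix("google ") not in CONFIRMED_UPDATES)

draft = "Sites hit by the Google March 2026 core update should expect volatility."
flagged = unverified_update_claims(draft)
if flagged:
    raise ValueError(f"Block publication - unverified update claims: {flagged}")
```

A gate like this does not replace the human review in Goodey's normal workflow; it simply ensures an unrecognised "update" claim is surfaced for review rather than silently published.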

Tentative implications

  • Reputation and trust risk for agencies and publishers. Sites that rapidly publish authoritative-sounding posts on unconfirmed updates risk eroding long-term trust, even if they capture short-term traffic. Over time, marketers may rely more heavily on outlets with a track record of explicitly sourcing Google statements and data. [S2]
  • Need for internal verification standards. Marketing and product teams may benefit from simple internal rules such as "no strategic change based solely on a single third-party article or AI summary about an algorithm update," requiring at least one primary or well-established source.
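
One way to encode such a rule is a trivial check that refuses to approve a strategy change unless at least one cited source is primary. The domain list below is an assumption for illustration, and the negative example uses a hypothetical domain.

```python
# Assumed list of primary-source domains for Google update announcements.
PRIMARY_SOURCES = ("status.search.google.com", "developers.google.com", "blog.google")

def change_is_actionable(cited_domains: list[str]) -> bool:
    """Approve a strategy change only if at least one cited domain is primary."""
    return any(d.endswith(p) for d in cited_domains for p in PRIMARY_SOURCES)

# A LinkedIn post plus a derivative tech article is not enough:
assert not change_is_actionable(["linkedin.com", "techbytes.example"])
# A Search Status Dashboard entry clears the bar:
assert change_is_actionable(["status.search.google.com"])
```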

Speculative implications

  • Growing value of direct, first-party communication channels. If search and AI summaries remain prone to misinformation in emerging topics, brands that publish timely, accurate guidance on their own channels (and optimise it for relevant queries) may gradually become the reference point that displaces weaker sources in search over time.
  • Potential regulatory or product changes. As EU and other regulators focus more on AI-driven misinformation, tests like Goodey's may feed arguments that current search and AI designs are insufficient, potentially leading to future policy or product shifts. The direction and timing of such changes remain uncertain.

Contradictions, open questions, and what this test does not show

  • Single-case limitation. This is one documented case in one vertical, based on a single LinkedIn article and a handful of derivative posts. It does not measure how often similar misinformation attempts fail to rank or are ignored. [S1][S2]
  • No traffic or engagement metrics. The available reports describe ranking presence and qualitative reactions, but not:
    • Impressions, clicks, or dwell time
    • How many users saw or acted on the misinformation
    • How long the page-one ranking or AI Overview inclusion persisted
  • Limited insight into ranking mechanics. We do not know:
    • Which specific signals (LinkedIn domain authority, engagement, freshness, query intent) contributed most to the ranking
    • Whether any later signals (such as user behaviour or additional sources) caused the ranking to change
  • Unclear cross-vertical generalisation. While the mechanics of ranking and AI Overviews are shared across topics, SEO as a niche has a culture of rapid, speculative commentary on "updates." Other sectors, for example regulated health or finance content, might exhibit different propagation patterns due to stronger editorial standards or existing fact-checking layers.
  • Dependence on secondary reporting. Our view of the experiment comes through Goodey's own recounting and Montti's article, plus the Axios summary of Google's EU correspondence. Independent replication or a dataset of similar tests would be needed for stronger general conclusions. [S1][S2][S3]

For now, the evidence supports a cautious takeaway: search rankings and AI Overviews reflect what is published and appears authoritative, not what has been independently verified as true. Marketers and business owners should adjust their information-gathering and decision processes accordingly.

Author: Etavrian AI
Etavrian AI is developed by Andrii Daniv to produce and optimize content for the etavrian.com website.
Reviewed by: Andrii Daniv
Andrii Daniv is the founder and owner of Etavrian, a performance-driven agency specializing in PPC and SEO services for B2B and e-commerce businesses.