Google Search Advocate John Mueller has clarified what the "Page indexed without content" status in Search Console typically means. Responding in a recent technical SEO discussion to a case in which a homepage lost significant rankings, he attributed the issue primarily to servers or CDNs blocking Googlebot.
Google's Mueller Explains 'Page Indexed Without Content' Status - Key Details
Mueller responded to a site owner who reported a sudden ranking drop after this status appeared in Search Console. The post described a homepage falling from position one to position fifteen in Google search results for a key query. The discussion took place in the r/TechSEO subreddit on Reddit.
- The site uses Webflow as its content management system and Cloudflare as its content delivery network.
- The site owner said they had not changed the homepage or templates before the issue started.
- They tested the page with curl using a Googlebot user agent and reported that it returned the expected HTML content.
- They also used Google's Rich Results Test and saw desktop inspection errors reading "Something went wrong," while mobile tests succeeded.
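The site owner's curl check can be reproduced with a short script. A minimal sketch in Python, assuming a hypothetical URL (`example.com`) and the desktop Googlebot user-agent string as Google publishes it (with a placeholder Chrome version); note that, as Mueller's comments suggest, spoofing only the user agent will not surface IP-based blocks:

```python
import urllib.request

# Desktop Googlebot user-agent string as published by Google
# ("W.X.Y.Z" is Google's own placeholder for the Chrome version).
GOOGLEBOT_UA = (
    "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; "
    "compatible; Googlebot/2.1; +http://www.google.com/bot.html) "
    "Chrome/W.X.Y.Z Safari/537.36"
)

def request_as_googlebot(url: str) -> urllib.request.Request:
    """Build a request that mimics Googlebot's user agent.

    This only spoofs the UA header; a server or CDN that blocks
    by Googlebot IP address will still serve this request normally,
    which is why such blocks can look fine in local testing.
    """
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

req = request_as_googlebot("https://example.com/")  # hypothetical URL
# urllib.request.urlopen(req) would then fetch the page with this UA.
print(req.get_header("User-agent"))
```

A successful fetch here, as in the site owner's curl test, therefore does not rule out the low-level blocking Mueller described.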
Mueller wrote that the "Page indexed without content" status usually means the server or CDN is blocking Google from receiving any content from the URL. He stated that the problem is "not related to anything JavaScript" and that the block is often low level, sometimes based on Googlebot IP addresses rather than user agent strings.
He noted that this type of blocking can be impossible to reproduce outside Search Console testing tools. Mueller added that affected pages may start dropping out of Google's index and called the situation urgent for site owners to diagnose and fix.
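Because the block may key on IP addresses rather than user-agent strings, Google's documented way to confirm whether a crawler IP really belongs to Googlebot is reverse DNS followed by a forward lookup. A minimal sketch, with the hostname check separated out so it can be applied to server-log entries (the function names are illustrative, not from Google's tooling):

```python
import socket

def is_googlebot_hostname(hostname: str) -> bool:
    """Genuine Googlebot reverse-DNS names end in googlebot.com or google.com."""
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

def verify_googlebot_ip(ip: str) -> bool:
    """Reverse DNS on the IP, then forward DNS on the resulting name,
    confirming the name resolves back to the same IP (Google's
    documented verification procedure)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
    except OSError:
        return False
    if not is_googlebot_hostname(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]  # forward lookup
    except OSError:
        return False
```

Checking firewall or CDN logs with a test like this can show whether requests from verified Googlebot IPs are being rejected even when user-agent-based tests pass.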
Background Context
In Search Console, "Page indexed without content" appears when Google indexes a URL but cannot retrieve the page content. Google documentation has described the similar "Indexed, without content" status as meaning Google could not read the page content. That documentation also notes that the status does not result from robots.txt blocking.
Google's URL Inspection tool shows how Googlebot retrieved a specific URL during the last crawl, including HTTP response and rendered HTML. The Live URL Test feature can fetch the page in real time from Google's systems. Google Search Central describes both tools as ways to diagnose indexing and crawling problems, including possible server or CDN blocking.
Mueller has previously addressed server and CDN blocking in public replies to site owners. In earlier cases, he associated sudden crawling drops across several domains with shared infrastructure problems and advised site owners to check whether CDNs or servers were blocking Googlebot when crawl activity declined.
Source Citations
The following primary sources support the details in this report. They include Mueller's direct comments and Google's own documentation.
- Reddit - r/TechSEO thread "Strange issue, Page Indexed without Content" featuring comments by John Mueller. Available at reddit.com.
- Google Search Central documentation for the Page indexing report and coverage statuses, including "Indexed, without content".
- Google Search Console help material describing how the URL Inspection tool reports what Googlebot retrieves from a page.