AI search citations: press releases at 0.04% in 4M-citation BuzzStream study
AI assistants reference original news reports and brand-owned newsrooms far more often than syndicated press releases or wire distribution copies, according to a 4 million citation dataset from BuzzStream and Citation Labs [S1].
Executive snapshot
- News content accounted for 14% of all AI citations across ChatGPT, Google AI Mode, Google AI Overviews, and Google Gemini [S1].
- Press releases distributed via syndication channels (for example, Yahoo, MSN) represented just 0.32% of news citations and 0.04% of all citations [S1].
- Direct newswire URLs (for example, PRNewswire) made up 0.21% of the total dataset, peaking at 0.37% of citations for exploratory or informational prompts [S1].
- Original editorial reporting represented 81% of news citations, with affiliate and review content filling the remainder. Affiliate content reached 39% share in evaluative prompts [S1].
- Corporate newsroom content on brand domains comprised 18% of ChatGPT citations but only about 3% on Google AI products [S1].
Implication for marketers: AI visibility appears far more tied to earned editorial coverage and strong brand newsrooms than to wire-distributed press releases.
Method and source notes
- Primary study: BuzzStream report using Citation Labs XOFU citation monitoring tool [S1].
What was measured:
- 4 million citations used by AI systems in answers [S1].
- Platforms: ChatGPT, Google AI Mode, Google AI Overviews, Google Gemini [S1].
Prompts: 3,600 across 10 industries, grouped as:
- Evaluative (for example, "Is Sony better than Bose?").
- Informational.
- Brand awareness (for example, "What is Chase known for?") [S1].
- Timeframe: One-week collection window [S1].
Key classifications:
- News vs non-news content.
- Within news: original editorial, affiliate or review, syndicated articles, syndicated press releases, direct wire URLs, corporate newsrooms [S1].
- Syndicated content flagged via ListIQ author-publication cross-checks and manual review. BuzzStream notes some reposted press releases are hard to detect [S1].
Contextual sources:
- Press release distribution platforms promoting "AI visibility" messaging [S2-S5], including service pages, guides, and playbooks that position press releases as tools for AI search and answer engine visibility.
- BuzzStream analysis of publisher blocking of AI bots [S6].
- Hostinger analysis of 66 billion bot requests and AI or search crawler trends [S7].
- Google Search VP commentary on digital PR and AI recommendations [S8].
- Ask an SEO column on digital PR vs traditional link building [S9].
Limitations:
- Single week of data. AI systems and policies change frequently [S1].
- Query set centered on larger, well-known brands that already attract more editorial coverage [S1].
- Syndicated press releases are not always tagged or labeled clearly, so some may be misclassified [S1].
- Primary study produced by BuzzStream, a company that sells digital PR tools and benefits from findings that favor earned media [S1].
AI search citations and press releases
Across the full dataset, news content represented 14% of all AI citations, meaning most AI answers drew from non-news sources such as reference sites, brand sites, and other web content [S1]. Within that 14%, syndicated press releases and wire copies accounted for only a small fraction of what AI systems chose to cite.
Press releases republished through syndication networks like Yahoo and MSN accounted for 0.32% of all news citations and just 0.04% of the entire 4 million citation corpus [S1]. When AI systems reference news-type material, they rarely pick syndicated press release copies as their source of record.
Direct citations to newswire domains such as PRNewswire constituted 0.21% of all citations captured in the study [S1]. These direct wire URLs appeared most often during exploratory or informational prompts, but even in that context they made up only 0.37% of citations [S1].
Syndicated news content more broadly, including republished articles on networks like MSN and Yahoo, accounted for 6.2% of news citations, or 0.9% of the entire dataset [S1]. That is still a small share next to original reporting on publisher domains, underscoring how rarely AI answers cite syndicated copies rather than original editorial sources.
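The two denominators used above (share of news citations versus share of all citations) can be reconciled with simple arithmetic. A minimal sketch, using only the study's published figures:

```python
# Convert a category's share of news citations into its share of all
# citations. Figures are the published study numbers [S1]; the conversion
# is just multiplication by the overall news share.

NEWS_SHARE = 0.14  # news content as share of all AI citations

def overall_share(share_of_news: float) -> float:
    """Share of news citations -> share of all citations."""
    return share_of_news * NEWS_SHARE

# Syndicated news articles: 6.2% of news citations
print(round(overall_share(0.062) * 100, 2))   # ~0.87, reported as 0.9%

# Syndicated press releases: 0.32% of news citations
print(round(overall_share(0.0032) * 100, 2))  # ~0.04%
```

The small rounding gap (0.87% versus the reported 0.9%) is expected, since the published inputs are themselves rounded.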
Interpretation (likely)
For AI search visibility specifically, wire distribution and syndication networks deliver minimal direct citation value compared with other content types. The primary value of press releases may lie elsewhere, such as regulatory disclosure or prompting journalist outreach, rather than as sources AI systems choose to reference.
How AI assistants treat syndicated news and wire-distributed content
BuzzStream's classification work separated pure press release syndication from broader article syndication [S1]. Among all news citations, syndicated news stories (for example, a reporter's article redistributed via Yahoo News) captured 6.2%, while republished press releases captured only 0.32% [S1]. Both figures are small next to original publisher content, but press releases were especially rare.
To detect syndication, BuzzStream used its ListIQ tool to cross-reference author names and publications, flagging mismatches as potential syndication and then manually confirming cases [S1]. The company notes that some sites repost press release text without clear labels, so a portion of press release content could still appear under other categories [S1]. Even so, the upper bound remains very low relative to editorial citations.
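The cross-checking idea can be illustrated in a few lines. This is a hypothetical sketch only: ListIQ's actual detection logic is proprietary, and the field names and sample records below are invented for illustration.

```python
from collections import defaultdict

def flag_potential_syndication(citations):
    """Flag citations whose (author, headline) pair appears on more than
    one domain -- the basic signal behind an author/publication
    cross-check. Flagged items would then go to manual review.

    citations: iterable of dicts with 'author', 'headline', 'domain' keys
    (hypothetical schema).
    """
    domains_seen = defaultdict(set)
    for c in citations:
        domains_seen[(c["author"], c["headline"])].add(c["domain"])
    return [c for c in citations
            if len(domains_seen[(c["author"], c["headline"])]) > 1]

# Invented sample data: one piece republished on a second domain.
sample = [
    {"author": "J. Doe", "headline": "Sony vs Bose review", "domain": "cnet.com"},
    {"author": "J. Doe", "headline": "Sony vs Bose review", "domain": "msn.com"},
    {"author": "A. Smith", "headline": "Chase earnings", "domain": "reuters.com"},
]
print(len(flag_potential_syndication(sample)))  # 2: both copies of the duplicated piece
```

Note the limitation the study itself flags: a press release reposted with a changed byline or no byline would slip past a check like this, which is why some misclassification is expected.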
Press releases distributed by wire services and then picked up by aggregator sites, commonly promoted as a way to "expand reach", do not appear to be the version AI systems prefer. The syndication copies on portals like Yahoo Finance and MSN almost never surfaced as cited sources in this dataset [S1].
Original editorial coverage and affiliate content in AI search results
The same dataset shows that original editorial reporting dominates AI news citations. Across all four AI platforms, original editorial news accounted for 81% of news citations [S1]. The remaining 19% consisted mainly of affiliate and review content [S1].
Prompt type influenced the mix. Evaluative prompts, for example comparing brands or products, generated the highest share of news citations overall at 18% of all citations in the study [S1]. In that evaluative segment, affiliate and review content was particularly prominent, reaching 39% of cited news sources [S1]. Brand awareness prompts generated only 7% of all citations from news publishers, with informational prompts between those two extremes [S1].
Outlets that appeared frequently in evaluative citations included Reuters, CNBC, and CNET, often in the form of cost analyses, head-to-head comparisons, and performance breakdowns [S1]. This suggests AI systems tend to select sources that contain explicit comparative or analytical material when asked to evaluate or compare options.
Interpretation (likely)
Earning detailed editorial coverage, especially pieces that compare products, outline trade-offs, or break down pricing, appears significantly more influential for AI answers than distributing press releases. Affiliate and review publishers also play a meaningful role specifically for evaluative queries.
Corporate newsrooms and internal press releases in ChatGPT
One of the clearest platform-specific differences surfaced in how ChatGPT handled brand-owned newsroom content versus syndicated press releases. Across ChatGPT responses in the dataset, internal press releases and corporate newsrooms on company domains accounted for 18% of citations [S1]. On Google AI Mode, AI Overviews, and Gemini, that figure dropped to about 3% [S1].
BuzzStream reported concrete examples: when asked about Iberdrola's role in renewable energy, ChatGPT cited Iberdrola's own corporate press room. When asked about Target's products, ChatGPT referenced a 2015 press release on Target's corporate subdomain [S1]. These internal newsroom pages were cited even though equivalent or similar announcements existed via wire distribution or syndication [S1].
Aside from this newsroom effect inside ChatGPT, most other citation patterns looked similar across platforms [S1]. That suggests a general preference for original or primary sources on brand domains in ChatGPT answers when such sources are available and accessible to the model.
Interpretation (tentative)
For brands that maintain active corporate newsrooms, ChatGPT appears more willing than Google's AI products to use those pages as authoritative references. Investing in well-structured newsrooms that clearly document product updates, strategy, and key facts may improve the odds of citation in ChatGPT answers, even if syndicated copies exist elsewhere. Evidence for similar behavior on Google AI products is weaker in this dataset.
AI crawling access, publisher blocks, and effects on citations
BuzzStream's earlier work and independent hosting data provide context on why certain content types might appear more often in AI citations. A related BuzzStream analysis, covered by Search Engine Journal, found that 79% of major news publishers block at least one AI training bot, and 71% block the retrieval bots that power answer engines [S6]. This means large portions of premium news inventory may be partly or fully inaccessible to some AI systems.
Hostinger's study of 66 billion bot requests observed that AI training crawlers were losing relative access while search engine bots expanded their reach [S7]. Another Hostinger analysis reported OpenAI's search crawler achieving more than 55% coverage of sites on its platform, showing that coverage can vary widely between bots and over time [S7].
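Blocking of the kind described above is typically declared in a publisher's robots.txt. A hedged sketch of how one might spot-check a single site, using Python's standard-library parser; the user-agent tokens are the publicly documented names for OpenAI, Google, and Common Crawl bots, but this is an illustration, not the methodology of either cited study:

```python
from urllib import robotparser

# Publicly documented AI crawler user-agent tokens (illustrative subset).
AI_BOTS = ["GPTBot", "OAI-SearchBot", "Google-Extended", "CCBot"]

def blocked_bots(robots_txt: str, path: str = "/") -> list:
    """Return which AI bots a robots.txt body disallows for the given path."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not rp.can_fetch(bot, path)]

# Hypothetical robots.txt: blocks OpenAI's training crawler, allows the rest.
sample = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_bots(sample))  # ['GPTBot']
```

In practice the robots.txt body would be fetched from `https://<publisher>/robots.txt`, and robots.txt is advisory only, so a declared block is a statement of policy rather than proof that a crawler never accessed the content.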
Interpretation (tentative)
If prominent news publishers block specific AI crawlers, AI systems may compensate by leaning more on mid-tier or long-tail publishers still open to bots, corporate sites and brand newsrooms that permit crawling, and affiliate and review sites that often allow broad bot access.
In that environment, syndicated press releases hosted on third-party portals still appear rarely in citations, suggesting that simple accessibility is not sufficient. Content type, original authorship, and perceived authority seem to matter more than syndicated distribution alone [S1, S6-S7].
Implications for PR and marketing strategy
The following points interpret the data for strategy; they are not direct measurements from the cited studies.
Likely implications
- Wire distribution has limited direct AI citation value. With syndicated press releases at 0.04% of total citations and direct wire URLs at 0.21%, AI systems seldom reference press release copies as sources [S1]. That weakens claims from distributors that syndication itself drives AI visibility [S2-S5].
- Earned editorial coverage is a primary AI visibility driver. Original reporting provides 81% of news citations [S1]. Campaigns that generate in-depth coverage in recognized publications are likely to influence AI answers more than broad wire syndication alone [S1, S8-S9].
- Evaluation-friendly content supports high-intent queries. Evaluative prompts both triggered the highest share of news citations (18% of all citations) and showed strong representation from comparative and review-style pieces [S1]. Brands in high-consideration categories benefit when third-party publishers produce detailed comparisons and cost breakdowns.
Tentative implications
- Corporate newsrooms help more on ChatGPT than on Google AI. ChatGPT's 18% newsroom citation share versus about 3% on Google's AI products indicates measurable but platform-specific value [S1]. For ChatGPT exposure, structured press pages and fact sheets on brand domains appear helpful; the same effect on Google remains relatively modest.
- Digital PR outperforms link-only tactics for AI exposure. Google's VP of Search compared AI recommendations to how a person researches, emphasizing mentions and coverage rather than isolated links [S8]. Adam Riemer has argued that digital PR focused on credible stories and placements delivers broader benefits than pure link building [S9]. The AI citation data aligns with that orientation toward substantive coverage over mechanical distribution [S1, S8-S9].
Speculative implications
- Smaller brands may depend more on corporate sites while they build press portfolios. Because the dataset focused on well-known brands, it likely skews toward companies already covered by major publishers [S1]. For less-covered brands, AI systems may rely more heavily on brand-owned sites until press coverage accumulates, especially on platforms like ChatGPT that already show a strong newsroom bias.
- Press releases may serve mainly as inputs, not outputs. Even when AI systems do not cite press releases, they may still use them during training or retrieval as background knowledge. The study measured only citations, not training data composition [S1].
Conflicting evidence, blind spots, and open questions
- Short measurement window. The one-week timeframe means seasonal events, product launches, or news spikes could distort category-level patterns [S1]. AI products are also evolving quickly, so citation behavior may shift.
- Brand and sector bias. The 3,600 prompts targeted 10 industries and leaned toward large brands [S1]. Sectors with weaker press ecosystems or heavy regulatory communication might see different dynamics, especially where trade wires or niche outlets carry more weight.
- Press release misclassification. Because some publishers repost press releases without labeling them, a portion of press release content could be classified as syndicated news or even original news [S1]. This could mean the actual contribution of press release text to AI answers is somewhat higher than the very low numbers reported for labeled press releases and direct wire URLs.
- Publisher blocking patterns. While we know many major outlets block certain AI bots [S6], we do not have a detailed mapping of which AI products have access to which news sources at any given time. That gap makes it harder to separate "AI does not want to cite this" from "AI cannot access this".
- Lack of outcome-level metrics. The current data describes citation shares, not user impact. There is no direct link between a citation and traffic, brand lift, or conversions, so marketers still need separate analytics to judge commercial impact.
Overall, the evidence points to a clear hierarchy for AI citations: original editorial and some affiliate content first, corporate newsrooms as a secondary source (especially on ChatGPT), and syndicated press releases near the bottom. How that hierarchy translates into business results remains an open measurement task.
Data appendix: key figures from the BuzzStream AI citation study
| Metric | Value | Notes |
|---|---|---|
| Total AI citations analyzed | 4,000,000 | Across ChatGPT, Google AI Mode, AI Overviews, Gemini [S1] |
| Total prompts | 3,600 | Across 10 industries [S1] |
| Share of all citations from news publications | 14% | News vs non-news split [S1] |
| Share of news citations from original editorial | 81% | Remaining mostly affiliate or review [S1] |
| Share of news citations from syndicated news articles | 6.2% | For example, articles republished via MSN or Yahoo [S1] |
| Syndicated news as share of all citations | 0.9% | 6.2% of 14% [S1] |
| Share of news citations from syndicated press releases | 0.32% | Identified via syndication channels [S1] |
| Syndicated press releases as share of all citations | 0.04% | Very limited direct presence [S1] |
| Direct wire (for example, PRNewswire) share of all citations | 0.21% | All prompt types [S1] |
| Direct wire share in exploratory or informational prompts | 0.37% | Segment peak [S1] |
| News citation share in evaluative prompts | 18% of all citations | Highest among prompt types [S1] |
| News citation share in brand awareness prompts | 7% of all citations | Lowest among prompt types [S1] |
| Affiliate or review share of evaluative news citations | 39% | Within news category [S1] |
| Corporate newsroom share of ChatGPT citations | 18% | Brand-owned newsrooms or press rooms [S1] |
| Corporate newsroom share on Google AI products | ~3% | AI Mode, AI Overviews, Gemini [S1] |
| Major news publishers blocking at least one AI training bot | 79% | Related BuzzStream study [S6] |
| Major news publishers blocking retrieval or answer engine bots | 71% | Same study [S6] |
| OpenAI search crawler coverage in Hostinger dataset | >55% of sites | Hostinger analysis [S7] |
Sources
- [S1] BuzzStream - AI citations report using Citation Labs XOFU tool (4M citations, 3,600 prompts, 10 industries).
- [S2] ACCESS Newswire - "Press Release Distribution in the AI Era: The New Reality of SEO and AI Visibility."
- [S3] eReleases - "Press Releases to Boost AI Search Visibility: 2025 SEO Guide."
- [S4] Business Wire - "Optimize Press Release SEO & AEO for Answer Engine Discovery."
- [S5] OBAPR - "SEO vs AEO vs GEO: Guide to Press Release Strategy for Maximum AI Discoverability (2026)."
- [S6] Search Engine Journal - Coverage of BuzzStream report on major news publishers blocking AI training and retrieval bots.
- [S7] Search Engine Journal - Coverage of Hostinger analysis of 66 billion bot requests and OpenAI crawler coverage.
- [S8] Search Engine Journal - Interview with Robby Stein, VP of Product for Google Search, on digital PR and AI recommendations.
- [S9] Search Engine Journal - Adam Riemer, "Ask an SEO: Digital PR or Traditional Link Building - Which Is Better?"