Google’s Gary Illyes said AI-generated content is acceptable in Search when it is high quality and human reviewed for accuracy. He made the remarks in an interview with Kenichi Suzuki and also discussed the custom Gemini model behind AI Overviews and AI Mode.

Key statements from Gary Illyes
"Human created" is not precise. "It should be human curated."
- AI content is fine if quality is high and accuracy is verified.
- Publishers should apply editorial oversight before publishing.
- Factual accuracy and originality are core quality factors.
- Extremely similar content should not remain in the index.
- High-quality outputs typically require human review today.
How Google’s AI features work, per Illyes
- AI Overviews and AI Mode use a custom Gemini model. Illyes said he did not know the model’s exact training details.
- For grounding, these features issue multiple queries to Google Search and use web-based data from Google’s index (see the sketch after this list).
- Generation respects Google-Extended controls. If a site disallows Google-Extended, Gemini will not ground on that site’s content.
- Training should avoid AI-generated data, to prevent feedback loops in which models learn from their own outputs.
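
To make the retrieval flow concrete, here is a minimal Python sketch of the fan-out-and-filter pattern described above. Every name in it (`Snippet`, `fan_out`, `search`, `generate`) is a hypothetical stand-in for illustration; this is not Google’s actual implementation.

```python
# Toy sketch of fan-out grounding: one question becomes several search
# queries, results from sites that opted out via Google-Extended are
# dropped, and the rest become grounding context for the model.
# All names are hypothetical stand-ins, not Google internals.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Snippet:
    url: str
    text: str
    allows_google_extended: bool  # site owner's opt-in/opt-out signal


def ground_and_generate(
    question: str,
    fan_out: Callable[[str], list[str]],            # question -> related queries
    search: Callable[[str], list[Snippet]],         # query -> index results
    generate: Callable[[str, list[Snippet]], str],  # question + context -> answer
) -> str:
    # 1. Fan out: issue multiple queries for a single question.
    queries = fan_out(question)
    # 2. Retrieve from the index, deduplicating by URL.
    seen: dict[str, Snippet] = {}
    for query in queries:
        for snippet in search(query):
            seen.setdefault(snippet.url, snippet)
    # 3. Respect Google-Extended: exclude opted-out sites from grounding.
    context = [s for s in seen.values() if s.allows_google_extended]
    # 4. Generate an answer grounded only on permitted snippets.
    return generate(question, context)
```

In a real system the opt-out signal would come from each site’s robots.txt rather than a flag on the snippet; it is inlined here only to keep the sketch self-contained.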
Policy background
Google’s published Search guidance prioritizes helpful, reliable, people-first content and evaluates content quality rather than how it was created. See Google Search Essentials.
Spam policies address low-quality or scaled content abuse and related violations. See the Search spam policies.
Google-Extended is a robots.txt control that lets site owners limit certain generative uses of their content; it does not replace standard Search crawling controls, which remain governed by regular robots directives.
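For reference, Google-Extended is declared in robots.txt like any other user-agent token. A minimal robots.txt that opts an entire site out looks like this:

```text
User-agent: Google-Extended
Disallow: /
```

This rule does not affect regular Googlebot crawling or Search indexing.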
What this means for publishers
- Use AI to assist, then apply human editorial review before publishing.
- Prioritize accuracy and originality; avoid producing extremely similar pages.
- Review Google-Extended settings if you want to limit grounding on your site’s content; a quick way to verify a site’s current setting is sketched below.
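
As a quick check, Python’s standard-library robots.txt parser can report whether a site currently blocks the Google-Extended token (the domain below is illustrative):

```python
# Check a site's robots.txt for the Google-Extended token using only
# the standard library. Replace example.com with the site to check.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

if rp.can_fetch("Google-Extended", "https://example.com/"):
    print("Google-Extended allowed: content may be used for grounding.")
else:
    print("Google-Extended disallowed: content is opted out of grounding.")
```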