In a recent podcast interview, Google VP of Product Robby Stein outlined the next wave of Google Search, centered on three components: AI Overviews, multimodal search via Google Lens, and a conversational AI Mode. He explained how complex queries and visual inputs feed into conversational results.
Google explains the next generation of AI search
Stein said Google is aligning Search around those three parts. AI Mode supports conversational interactions tailored for Search and draws on web content as well as Google's structured knowledge sources.
Key details
- Speaker: Robby Stein, VP of Product at Google, in an interview on Lenny's Podcast.
- AI Overviews: described as the "quick and fast AI" that appears at the top of results.
- Multimodal search: Google Lens is a major entry point, and camera-based searches in the Google app continue to grow.
- AI Mode: "brings it all together" into a conversational search experience powered by web content and structured knowledge.
- Complex, multi-sentence queries can trigger an AI Mode preview in Search. A "Show more" link opens a deeper conversational flow.
- Access: available at google.com/ai and integrated across core Search surfaces.
- Continuity: users can move from AI Overviews into AI Mode with follow-up questions, and Lens queries can route into AI Mode.
- Data sources include the Shopping Graph, Maps, finance information, and broader web context.
- Product direction: Google aims for a "consistent, simple product experience" across entry points so users do not need to choose a separate interface.
Background
Google began testing generative AI in Search in 2023 through Search Labs, initially as the Search Generative Experience (SGE). In May 2024, AI Overviews launched in the United States. Google Lens has supported visual search in the Google app for years. Google also maintains large-scale knowledge assets that feed Search, including product data for Shopping and places data for Maps.