Google VP of Product Robby Stein outlined how AI is evolving Google Search in a recent podcast, highlighting three components - AI Overviews, multimodal search with Google Lens, and a conversational AI Mode - and how they appear across Search and the Google app.
The three components of AI in Google Search
- AI Overviews - quick AI-generated summaries at the top of results to help users get up to speed faster.
- Multimodal search with Google Lens - lets people combine images and text in a single query, including identifying objects, translating text, and layering words onto image-based searches.
- AI Mode - a conversational, turn-based experience built for Search and information discovery that draws on web content and structured knowledge.
How the features work together
According to Stein, AI Overviews and Lens can hand off into AI Mode for follow-up questions. Complex or multi-sentence queries may show an AI Mode preview that opens a conversational experience. The design aims to feel consistent across entry points in Search and the Google app, and typed, spoken, or image-based inputs all connect to the same underlying AI system.
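The idea of several entry points (typed, spoken, image-based) feeding one underlying AI system can be illustrated with a minimal sketch. This is purely hypothetical: the class and function names are invented for illustration, Google's actual architecture is not public, and the word-count threshold here is an arbitrary stand-in for whatever signal decides when a query is "complex."

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of one pipeline serving several Search entry points.
# Nothing here reflects Google's real implementation.

@dataclass
class Query:
    text: Optional[str] = None            # typed input, or transcribed speech
    image_bytes: Optional[bytes] = None   # Lens-style image input

def route(query: Query) -> str:
    """Choose a presentation mode; all modes share the same underlying model."""
    if query.image_bytes is not None:
        return "lens"             # multimodal: image plus optional text
    if query.text and len(query.text.split()) > 12:
        return "ai_mode_preview"  # complex query -> conversational AI Mode
    return "ai_overview"          # quick summary atop standard results

print(route(Query(text="weather today")))    # ai_overview
print(route(Query(image_bytes=b"\x89PNG")))  # lens
```

The point of the sketch is the handoff Stein describes: short queries surface a summary, image queries go through a Lens-style path, and complex questions escalate into the conversational mode, with all three backed by one system.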
Access and availability
People can access the experience through Google AI and within core Google Search. Google continues to publish Search feature news on its official Google Search product updates blog.
Background and rollout
AI Overviews began rolling out widely in 2024, and Google has said its Gemini models power AI features in Search. Developers can find guidance and documentation in Google Search Central. Google introduced multisearch, which combines image and text queries, in 2022, and Lens works across the Google app, Android, iOS, and Chrome. Google's Shopping systems track product listings and pricing updates across retailers, and Google Maps maintains an index of businesses and places that also surfaces in Search.