Google Vice President of Search Liz Reid outlined new large language model (LLM) capabilities for audio, video, and subscription-aware results in a recent interview on the Access Podcast. She described how multimodal, multilingual models are changing what Google Search can process and how results are tailored to individual users.
Key Details on Google's LLM-Based Search Features
Reid said Google's latest LLMs are multimodal and can now interpret audio and video content at a depth that was not possible for the company several years ago.
- Google's multimodal models can understand audio and video beyond basic transcription.
- The models can identify what a video is about and analyze aspects of its style.
- The systems can take information in one language and generate output in another.
- Reid cited India as an example, where many speakers of Hindi and other languages face limited local-language web content.
- She said Google wants Search results to highlight content from sources users already subscribe to.
- As an example, she described surfacing the one interview a subscriber can actually read ahead of many similar paywalled interviews.
Reid contrasted this approach with showing users paywalled links they cannot open, saying Search should prioritize the version of a story that matches a user's existing subscription.
She added that Google has taken only small steps toward this subscription-aware ranking so far, and that the company aims to strengthen connections between audiences and the news sources they already trust.
Reid referenced Google's Preferred Sources setting and features that highlight links from users' paid news subscriptions. She said Google showcases those links in a dedicated carousel in the Gemini app and plans to extend that placement to AI Overviews and AI Mode in Search.
Background Context
Reid's comments build on earlier experiments Google has run around audio search. In 2021, Google and public broadcaster KQED tested making audio programs searchable and found that speech-to-text accuracy fell short, especially for proper nouns and regional terms.
Reid said newer multimodal models now understand audio and video much better than during those early tests. She described gains not only in transcription quality but also in recognizing what a video is broadly about.
The podcast also revisited long-standing challenges for users who search in Hindi and other Indian languages. Reid noted that the web often lacks material in those languages and said LLMs can help bridge content across languages by translating and summarizing information.
She also connected subscription-aware features to Google's broader work with publishers. According to Reid, Google has expanded Preferred Sources globally for English-language users. She cited internal figures indicating that people who choose a preferred publication click through about twice as often.
On the business model side, Reid mentioned micropayments for single articles. She said that approach has been discussed for years but has not gained wide adoption.
Looking ahead to product launches, Reid said Google is "actively building" new features for Search and noted that some features may be finalized shortly before public announcements such as Google I/O.
Source Citations
The following official source supports the information summarized above.
- Access Podcast interview with Liz Reid, Google VP of Search