Generative AI speeds up product discovery and comparisons, but most shoppers still leave assistants to cross-check details on retailer and marketplace pages, in reviews, search results, and forums before deciding. A 2025 mixed-methods study by IAB and Talk Shoppe quantifies how often shoppers verify, where they go, and why trust breaks when links fail or details don't match. This report distills the core numbers and their practical meaning for online retail and brand teams managing product data, content, and channel mix.
AI shopping trust: Executive snapshot
Shoppers report and demonstrate faster, more focused research with AI, but they also add verification steps before they end sessions or commit. The IAB and Talk Shoppe study combines observed behavior and survey responses to show how AI influences mid-journey activity, the jump to retailer pages, and where confidence breaks from query to cart. The topline takeaway for teams is straightforward: assistants spark intent, but trust is won or lost on the pages where details must match what AI suggests. The data below isolates the scale of verification and the specific failure points that send people back into search, forums, and retailer listings.
- Steps in journey: sessions averaged 1.6 steps before using AI vs 3.8 after. 95% of participants took extra steps to feel confident before ending a session.
- Where people verify: 78% visited a retailer or marketplace during the journey. 32% clicked directly from an AI assistant to a retailer page.
- Retailer traffic lift: the share of shoppers visiting retailer sites rose from 20% before AI to 50% after AI exposure in the session.
- Trust level: only 46% of surveyed shoppers fully trusted AI shopping recommendations.
- Friction points: missing links or sources, mismatched specs or pricing, outdated availability, and recommendations that ignored budget or compatibility needs all sent shoppers to retailers, reviews, search, and forums to validate.
Implication for marketers: assistants initiate interest, but consistent product data and content across channels determine whether verification confirms or contradicts what AI suggests.
Method and source notes
Findings come from IAB and Talk Shoppe’s “When AI Guides the Shopping Journey,” which combined more than 450 screen-recorded AI shopping sessions with a U.S. survey of 600 consumers. The work measured pre- and post-AI steps in a session, click-through to retailer or marketplace pages, and stated trust in AI recommendations, along with observed friction points such as broken links and mismatches on price, specs, or availability that prompted external verification.
While the report is U.S.-focused and captures mid-journey behavior rather than conversions, it provides rare observed evidence of how AI changes sequence and destinations in shopping research. Limitations include potential early-adopter bias among AI users, lack of category-specific breakouts, and the fact that “fully trust” is a high bar that may understate partial reliance in lower-risk buys. Timelines and assistant types are not exhaustively detailed in the public summary.
Findings on verification behavior and trust
AI accelerates discovery but extends the path with validation. Observed sessions shifted from 1.6 steps pre-AI to 3.8 steps post-AI, and 95% of shoppers added verification steps before ending their session. These added steps were purposeful, aimed at confirming prices, variants, availability, and reviews on trusted sources before moving forward.
Retailer and marketplace pages are the central verification destination. In the study, 78% of shoppers visited a retailer or marketplace site, with 32% clicking directly from an AI assistant to a retailer page. The share of journeys including retailer visits increased from 20% before AI exposure to 50% after AI, underscoring that assistants feed traffic to product detail pages and comparison content rather than replacing them. Once on retailer pages, shoppers most often validated pricing and deals, available variants, user reviews, and local or online availability.
Trust remains limited, with predictable break points. Only 46% of respondents reported fully trusting AI shopping recommendations. Common triggers for verification or abandonment included missing links or sources in assistant answers, mismatched specs or pricing between AI output and retailer pages, outdated availability, and suggestions that failed budget or compatibility constraints. These breaks typically redirected users back to search engines, retailer listings, review aggregators, and community forums to reconcile details before progressing.
Interpretation and implications for marketing strategy
Likely: AI tools are now a mid-funnel accelerant that produces more retailer-page visits, not fewer. Ensuring consistency between assistant-visible information and what appears on product detail pages is central to keeping verification short and reducing bounce from mismatch errors. This includes price, variant naming, key specs, and availability.
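That consistency check can be automated. Below is a minimal sketch, assuming hypothetical field names, that compares assistant-visible feed data against the live product detail page and flags the fields that disagree; in practice the two inputs would come from a feed export and a page scrape or API.

```python
# Consistency check between assistant-visible feed data and the live
# product detail page (PDP). Field names and sample values are hypothetical.

WATCHED_FIELDS = ("price", "variant_name", "availability", "key_specs")

def find_mismatches(feed_item: dict, pdp_item: dict) -> dict:
    """Return {field: (feed_value, pdp_value)} for every watched field
    where the assistant-visible feed and the PDP disagree."""
    mismatches = {}
    for field in WATCHED_FIELDS:
        feed_val = feed_item.get(field)
        pdp_val = pdp_item.get(field)
        if feed_val != pdp_val:
            mismatches[field] = (feed_val, pdp_val)
    return mismatches

# Illustrative data: the feed still carries a stale price.
feed = {"price": 129.99, "variant_name": "Blue / 64GB",
        "availability": "InStock", "key_specs": {"battery_mah": 5000}}
pdp = {"price": 119.99, "variant_name": "Blue / 64GB",
       "availability": "InStock", "key_specs": {"battery_mah": 5000}}

print(find_mismatches(feed, pdp))  # → {'price': (129.99, 119.99)}
```

Run on a schedule against the full catalog, a report like this surfaces exactly the stale prices and variants that would otherwise greet a shopper arriving from an assistant.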
Likely: Feed hygiene and structured data will influence both AI answers and shopper confidence. Keeping product feeds, schema markup for specs, variants, availability, and reviews, and landing-page content synchronized with retailer listings reduces the risk of broken or outdated details that trigger extra steps or abandonment.
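For the markup piece, schema.org Product structured data is the common vehicle for exposing price, availability, and review signals in machine-readable form. A hedged sketch of generating such a JSON-LD payload, with illustrative values; a real listing must mirror the live PDP exactly:

```python
import json

def product_jsonld(name, sku, price, currency, availability, rating, review_count):
    """Build a schema.org Product JSON-LD payload. All values here are
    illustrative and should be populated from the same source as the PDP."""
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "sku": sku,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
            # schema.org availability values are URLs, e.g. InStock / OutOfStock
            "availability": f"https://schema.org/{availability}",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": str(rating),
            "reviewCount": str(review_count),
        },
    }

payload = product_jsonld("Example Headphones", "EX-123", 119.99, "USD",
                         "InStock", 4.4, 212)
print(json.dumps(payload, indent=2))
```

Generating this payload from the same system of record that renders the PDP, rather than maintaining it by hand, is what keeps the two from drifting apart.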
Tentative: Build and maintain comparison and alternatives content tied to the attributes people prompt assistants with, such as price caps, compatibility, and specific features. When assistant-generated shortlists drive 32% of clicks directly to retailer pages, matching that framing on landing pages can minimize contradictions and speed decisions.
Tentative: Monitor and address objections seen in reviews and forums. Since mismatches send shoppers to these sources, closing those gaps with clear specs, compatibility guides, and up-to-date inventory signals may reduce verification loops and improve conversion on retailer pages.
Speculative: As assistants integrate commerce and product feeds more deeply, retailers and brands that supply stable, linkable references with authoritative specs may see higher inclusion rates and cleaner click-through, but rigorous measurement of assistant-sourced traffic and conversion remains immature.
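Measurement immaturity aside, a starting point is classifying inbound sessions by referrer. The domain list below is an assumption for illustration and would need ongoing maintenance as assistant referrer behavior changes:

```python
from urllib.parse import urlparse

# Hypothetical, hand-maintained list of assistant referrer domains.
ASSISTANT_DOMAINS = {"chatgpt.com", "chat.openai.com", "perplexity.ai",
                     "gemini.google.com", "copilot.microsoft.com"}

def traffic_source(referrer_url: str) -> str:
    """Bucket a session by its referrer: 'assistant', 'none', or 'other'."""
    if not referrer_url:
        return "none"  # direct traffic, or the referrer was stripped
    host = urlparse(referrer_url).netloc.lower()
    # Match the domain itself or any subdomain of it.
    if any(host == d or host.endswith("." + d) for d in ASSISTANT_DOMAINS):
        return "assistant"
    return "other"

print(traffic_source("https://chatgpt.com/"))           # assistant
print(traffic_source("https://www.google.com/search"))  # other
```

Note the caveat built into the sketch: assistants that strip or proxy referrers land in the "none" bucket, so counts derived this way are a floor, not a total.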
Contradictions and gaps in the data
- Conversion impact is not reported. The study shows more verification steps and greater retailer-page visitation after AI exposure, but it does not quantify whether added steps increase or reduce final conversion or average order value.
- Category differences are unclear. High-consideration vs low-consideration categories may show different tolerance for AI-driven suggestions and verification depth. The public summary does not provide granular splits.
- “Fully trust” is a strict threshold. Many shoppers may partially trust AI for shortlisting while still verifying critical details. The 46% figure may undercount practical usefulness in low-risk purchases.
- Assistant variance and recency are not fully detailed. Different AI tools, update cadences, and link policies can alter mismatch risks. The report aggregates across tools and does not break out performance by assistant.
- U.S.-only sample limits generalization to other markets with different retailer landscapes and review ecosystems.
Sources
IAB and Talk Shoppe. “When AI Guides the Shopping Journey.” 2025.