AI Now Owns the Top of the Purchase Funnel
A debate that has been running since 2023 just got settled by data. Similarweb, a digital intelligence and web analytics platform, published its Market Research Panel results from a January survey of US consumers, measuring how people use AI tools versus search engines at each stage of the purchase journey. The results do not show AI supplementing search. They show AI replacing the top of the funnel that search never served particularly well.
At the product discovery stage, 35% of US consumers said they find AI tools most useful, compared to just 13.6% for search engines. At the research and comparison stage, AI leads 30% to 20%. At the narrowing and deciding stage, 31.4% to 15%. At the evaluation and confidence-building stage, 32.9% to 15%. The gap closes only at the final purchase step, where consumers find where to buy and hunt for the best price; there, AI leads narrowly, 24.3% to 22.1%.
AI holds a 2:1 or greater advantage at every stage from discovery through evaluation. Search only approaches parity at the moment someone is ready to open their wallet.
The consumer journey no longer starts in a search bar
For more than two decades, SEO strategy has been built on the assumption that the consumer journey begins with a search query. Someone types a question or a product category into Google, the search engine returns a ranked list of pages, and the consumer clicks through to evaluate options. The entire infrastructure of keyword research, content strategy, and ranking optimization is calibrated to that starting point.
The Similarweb data shows that starting point has moved. More than a third of consumers now begin their discovery process inside an AI tool like ChatGPT, Gemini, or Perplexity, not inside a search engine. They describe what they need in natural language, receive a curated response with specific brand recommendations, and form an initial consideration set before any search query gets typed.
The search query still happens. But by the time it does, the shortlist is already set. A consumer who discovers three skincare brands through a ChatGPT conversation and then searches Google for reviews of those three brands will show up in analytics as a search-driven conversion. The AI conversation that shaped the shortlist receives no attribution. Last-click measurement gives search full credit for a decision that AI made possible.
A brand invisible in AI responses is invisible at discovery
The direct consequence of 35% discovery happening through AI is that a brand absent from AI responses is absent from more than a third of potential customers before any intent signal reaches search. No keyword strategy compensates for that absence, because the consumer has already built a shortlist without ever issuing a query the brand could rank for.
Similarweb’s data also shows that AI visibility does not follow the same patterns as search visibility. The report’s 2026 AI Brand Visibility Index, which measured brand mention share across ChatGPT, Gemini, Copilot, and Perplexity using January data, found that brand scale is no longer a reliable predictor of AI visibility. In Beauty, CeraVe, a dermatologist-recommended skincare brand, leads the AI visibility index despite Ulta, a major beauty retailer, having roughly ten times the branded search volume. In Finance, structured-content platforms like NerdWallet, a personal finance comparison site, and Bankrate, a financial product comparison publisher, appear in the top ten alongside institutional giants like Chase and Fidelity. In News, Reuters leads despite having just 1.5 million monthly branded searches, far below Fox News, which ranks seventh with 42.1 million.
The brands winning AI visibility are not always the ones with the most customers. They are the ones whose content answers questions completely, in a format AI can extract cleanly, backed by citations from third-party sources that models treat as consensus.
Referral traffic is flat because AI was never designed to route
Alongside the funnel data, Similarweb reported a structural pattern in AI referral traffic that reframes how AI visibility should be measured. AI platform visits grew 28.6% between January last year and January of this year (US, desktop and mobile combined). AI referrals to external websites over the same period stayed flat.
Kevin Indig, a growth advisor and SEO strategist, is quoted in the report explaining the dynamic: AI referral traffic amounts to less than 1% of organic search traffic, and Pew Research found that fewer than 1% of users click links in AI Overviews. The platforms are retaining attention, not distributing it. Indig described ChatGPT as closer to TikTok than to Google in this respect.
The referral plateau is not a failure of AI platforms. It reflects the intended design. Generative AI is built to synthesize and answer, not to route users to external sites. When a consumer asks ChatGPT whether CeraVe or La Roche-Posay is better for sensitive skin, they get an answer. They may never need to click anything. The brand that appears in that answer has won something real: influence over a purchase decision, without ever receiving a referral visit. The brand that does not appear has lost that same decision before the consumer reached their website.
The practical implication is a measurement problem. Teams measuring AI performance by referral traffic volume are measuring the wrong output. The referral number will stay low because the architecture is designed to keep it low. The metric that matters is brand mention share: what percentage of relevant AI responses include the brand, and how that percentage compares to competitors in the same category.
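Mention share is straightforward to compute once a team has a sample of AI responses for its category. The sketch below assumes hypothetical response text and a hand-picked brand list; a real pipeline would pull responses from each platform's API and normalize brand-name variants before matching.

```python
from collections import Counter

def mention_share(responses, brands):
    """Fraction of AI responses that mention each brand.

    `responses` is a list of AI answer strings for prompts in one
    category; `brands` is the competitive set to track. Both are
    hypothetical inputs for illustration.
    """
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = len(responses)
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical responses to skincare prompts.
responses = [
    "For sensitive skin, CeraVe and La Roche-Posay are solid picks.",
    "CeraVe's moisturizing cream is widely recommended.",
    "Dermatologists often suggest La Roche-Posay for rosacea.",
]
share = mention_share(responses, ["CeraVe", "La Roche-Posay"])
# Each brand appears in 2 of 3 responses, so each has a share of ~0.67.
```

Tracked over time and compared against competitors on the same prompt set, this percentage is the AI-side analogue of a rank-tracking report.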
Where the traffic does arrive, it converts better
One data point from the report complicates the “AI sends no traffic” narrative in a useful way. When AI does send referral traffic, that traffic performs measurably better than Google referral traffic. Users referred from ChatGPT spend an average of 15 minutes on site versus 8 minutes from Google referrals, generate 12 pageviews per visit versus 9, and convert to transactional sites at a 7% rate versus 5% from Google.
Volume is low, but quality is high. The explanation connects back to the funnel data: by the time someone clicks through from an AI response, they have already done the discovery, comparison, and narrowing inside the AI tool. The click represents a decision that has already been made, not a browsing session where the decision is still forming. AI referral traffic behaves more like branded search traffic than like organic discovery traffic, because the discovery already happened somewhere else.
For brands that do appear in AI responses, the referral traffic they receive punches above its weight. For brands that do not appear, the volume question is irrelevant because the traffic never arrives in the first place.
Last-click attribution is hiding AI’s real contribution
The funnel data exposes a specific attribution failure. A consumer discovers a brand through an AI response at the discovery stage (35% of consumers). They research and compare options through AI at the research stage (30%). They evaluate options and build confidence through AI at the evaluation stage (32.9%). Then they type the brand name into Google, visit the website through a branded search click, and convert. The conversion gets attributed to search.
In this scenario, AI performed all the upper-funnel work, and search received all the lower-funnel credit. The brand’s analytics dashboard shows a strong branded search channel and a negligible AI referral channel, which leads to the conclusion that AI does not drive business. The conclusion is backwards. AI drove the consideration that search later closed.
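The distortion is easy to see in miniature. The sketch below contrasts last-click credit with a simple linear multi-touch model over a hypothetical journey matching the scenario above; the channel names are illustrative, not a real analytics schema.

```python
def last_click(journey):
    """All conversion credit goes to the final touchpoint."""
    return {journey[-1]: 1.0}

def linear(journey):
    """Equal credit to every touchpoint (one simple multi-touch model)."""
    share = 1.0 / len(journey)
    credit = {}
    for channel in journey:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Hypothetical journey: three AI-assisted stages, then a branded
# Google search that closes the purchase.
journey = ["ai_discovery", "ai_comparison", "ai_evaluation", "branded_search"]

last_click(journey)  # branded search gets 100% of the credit
linear(journey)      # AI touchpoints get 75% of the credit
```

Under last-click, the dashboard shows search closing everything; under even a naive multi-touch model, three quarters of the credit shifts to the AI conversation the analytics system never directly observed.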
Similarweb frames this as the Visibility-over-Traffic principle. Influence within AI responses has become more valuable than click volume from AI responses, because the influence shapes decisions at the discovery and evaluation stages where the shortlist gets built. Measuring referral traffic captures the tail end of a journey whose beginning happened inside a conversation the analytics system never saw.
The content that earns AI mentions
The AI Brand Visibility Index data across six sectors (Finance, Travel, Beauty, Electronics, Fashion, News) reveals a consistent pattern in what content earns brand mentions in AI responses. Similarweb identifies three characteristics that distinguish high-visibility brands from those trailing in AI responses.
Structured, specific, and deep content performs better than broad category pages. A third of ChatGPT citations come from pages three folders deep in a site’s URL structure. A product detail page with a structured “About this item” section outperforms a generic category page. A 2,000-word comparison guide outperforms a 400-word overview. Depth and specificity beat breadth, which runs counter to the consolidation instinct most SEO strategies follow.
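The folder-depth claim is simple to audit against a site's own cited URLs. A minimal sketch, assuming depth is counted as the number of path folders above the page itself (the counting convention is an assumption, since the report does not define it):

```python
from urllib.parse import urlparse

def folder_depth(url):
    """Count the folders in a URL path, excluding the page itself."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    return max(len(segments) - 1, 0)

# Hypothetical URLs for illustration.
folder_depth("https://example.com/guides/skincare/sensitive-skin/cerave-review")
# three folders deep: guides / skincare / sensitive-skin
folder_depth("https://example.com/")  # homepage, depth 0
```

Running this over a log of AI-cited URLs would show whether a site's deep, specific pages, rather than its top-level category pages, are the ones earning citations.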
Third-party citation presence matters independently of on-site content quality. LLMs weigh consensus across sources. Bankrate in Finance, Travelmath in Travel, and WhoWhatWear in Fashion are all cited heavily in third-party editorial content, not just their own domains. A brand that appears only on its own site looks like a single source. A brand that appears across publishers, review platforms, and community forums looks like a consensus, and consensus is the signal LLMs are designed to surface.
That third-party presence is the operational output of link building and digital PR. Every placement on a credible publisher, every editorial mention earned through outreach, every guest post on a domain with editorial standards contributes to the citation pool that AI systems draw from during retrieval. The brands leading the AI Brand Visibility Index are not winning on budget or domain authority alone. They are winning because their names appear across enough trusted sources that the model treats them as consensus answers.
Ethan Smith, CEO of Graphite, a content and SEO agency, is quoted in the report reinforcing the same pattern. He notes that appearing at number one in Google for a head keyword is no longer sufficient, because winning AI citations requires appearing multiple times across trusted offsite sources so that AI models recognize the brand as a consensus answer. For long-tail keywords, he notes, the opportunity is even greater.
Accessibility determines whether AI can find the content at all
The News sector data in the report illustrates a fourth factor that sits underneath content quality and citation presence: whether AI systems can access the content in the first place.
Reuters leads the News AI visibility index with a perfect score of 100, despite having just 1.5 million monthly branded searches. The Guardian ranks second at 60, and AP News third at 57. The New York Times ranks eighth at 25, and the Wall Street Journal ranks ninth at 26. The pattern maps directly to access policies. Reuters, The Guardian, The Independent, and AP News all have open or partially open access and do not block AI crawlers. The Times and the Journal operate paywalls and restrict AI crawler access.
The report frames this as a strategic fork rather than a technical one. Blocking AI crawlers preserves short-term content control but surrenders long-term influence over how journalism shapes public knowledge and brand perception. The brands most exposed to paywall policies and AI crawler restrictions are losing AI visibility momentum fastest.
For any brand weighing whether to gate content, the News sector data provides a concrete case study. A page behind a paywall or blocked from AI crawlers cannot be retrieved, cited, or recommended. The content might be the best in its category, but if the model cannot reach it, the model cannot surface it. Ensuring that the most citable, authoritative pages on a domain remain accessible to AI crawlers is now a distribution decision with direct visibility consequences.
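Whether a page is reachable by AI crawlers is, in the simplest case, a robots.txt question, and it can be checked with the standard library. The sketch below tests a hypothetical robots.txt policy against a few known AI crawler user agents (GPTBot is OpenAI's crawler, Google-Extended gates Gemini's use of content; the list is not exhaustive, and paywalls or IP-level blocks would not show up here).

```python
from urllib.robotparser import RobotFileParser

# A non-exhaustive set of AI crawler user agents.
AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def ai_access_report(robots_txt, page_path="/"):
    """Which AI crawlers a robots.txt policy permits on a given path."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, page_path) for agent in AI_AGENTS}

# Hypothetical policy: block GPTBot sitewide, allow all other crawlers.
robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
report = ai_access_report(robots, "/guides/best-moisturizers")
# GPTBot is blocked; the other agents fall through to the open default.
```

A periodic run of this check over a site's most citable pages turns the access question from a policy abstraction into a concrete audit.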
The funnel has a new top, and it runs on citations
The Similarweb data does not predict a future where AI replaces search. The funnel data shows the two coexisting, with AI dominating the upper stages and search retaining a role at the final purchase step. The more precise reading is that the consumer journey now has a new entry point, and that entry point runs on a different set of signals than the one SEO was built around.
Search rewards pages that match query intent, earn backlinks, and carry domain authority. AI rewards brands that appear as consensus answers across trusted sources, produce content structured for extraction, and maintain accessible pages that retrieval systems can reach. The overlap between those two signal sets is large but not complete. A strong SEO program builds many of the same assets AI visibility requires, but it does not automatically build all of them, and the gap is widest at the citation-presence layer.
Link insertions into already-indexed authoritative content accelerate the timeline for building that citation presence, attaching a brand to pages that AI systems already trust rather than waiting for new content to earn its way into the retrieval pool. In an environment where 35% of consumers are forming their brand shortlists inside AI conversations, the speed at which a brand enters the citation pool has a direct relationship to how quickly it becomes discoverable at the top of the funnel.
The purchase funnel did not lose its top. The top moved to a place where different signals determine who gets seen, and the data now exists to show exactly how far it moved.
