FAQ Schema May Matter More for AI Than for Search
Google’s May 7, 2026 announcement that FAQ rich results are no longer appearing in Search was widely covered as the end of a SERP feature. That framing misses what the documentation actually says. Google removed the visible rich result. It explicitly committed to continuing to use FAQ structured data to better understand pages. The visible payoff is gone. The underlying function is not.
The interesting question for anyone running an AI visibility program is whether the FAQ format has more value for AI retrieval today than it ever had for the rich result that just got retired. The answer appears to be yes, and the reasoning has nothing to do with Google specifically. It has to do with how language models retrieve and cite content across ChatGPT, Gemini, Perplexity, and every other AI search surface.
Google removed the feature but kept the function
The deprecation notice draws a clean line between two things that often get conflated. Schema markup tells a search engine what a page is about in machine-readable form. Rich results are a display feature that uses some of that data to render visual SERP elements. Removing the visual feature is a product decision. Continuing to use the data is a technology decision.
For FAQ specifically, the schema describes a page as containing question and answer pairs, with each question explicitly paired with its corresponding answer in a structure a machine can parse without ambiguity. That structure remains useful for any system trying to understand the page, including the systems that decide which content to retrieve and cite in generative responses.
Google made the distinction explicit. Other AI platforms have not commented directly on FAQ schema, but the way their retrieval systems work suggests they value Q&A content for reasons that have little to do with whether Google displays a rich result.
How AI systems decompose user questions
The 1.4 million ChatGPT prompt study from Ahrefs that we covered earlier this year revealed something that changes how to think about content structure for AI visibility. When a user asks ChatGPT a question, the model does not search the web for that exact query. It generates a set of narrower sub-questions internally (sometimes called fanout queries) and searches for pages relevant to each one separately.
A user asking “what is the best CRM for small businesses” might trigger internal sub-questions like “CRM pricing comparison for small teams,” “CRM features for sales pipeline management,” and “CRM integrations with accounting software.” ChatGPT retrieves pages for each sub-question independently and assembles the final answer from the combined results.
In the Ahrefs study, cited pages averaged a cosine similarity of 0.656 between their titles and the fanout queries, while non-cited pages averaged 0.484. The gap was large enough that title alignment with sub-questions emerged as one of the strongest predictors of whether a page got cited in a ChatGPT response.
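Cosine similarity here is just the normalized dot product of two embedding vectors: identical directions score 1.0, unrelated directions score near 0. A minimal sketch, using toy four-dimensional vectors in place of real embedding-model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: the dot product of two vectors divided by the
    product of their magnitudes. Returns a value in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for real model output; a production
# system would embed the page title and the fanout query with the
# same embedding model before comparing them.
title_vec = [0.8, 0.1, 0.3, 0.5]
fanout_vec = [0.7, 0.2, 0.4, 0.4]
print(round(cosine_similarity(title_vec, fanout_vec), 3))  # prints 0.981
```

The study's 0.656 vs. 0.484 figures are averages of exactly this kind of score, computed over real embeddings rather than toy vectors.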
This decomposition pattern is not specific to ChatGPT. Gemini, Perplexity, and other AI retrieval systems all break user prompts into narrower internal queries before searching, with implementations that vary in detail but follow the same underlying logic. A user prompt rarely matches a page title directly, so the system breaks the prompt into more granular questions that are more likely to align with how content actually gets written.
Q&A structure as a map of AI retrieval queries
A page structured as explicit question and answer pairs is, by design, a list of narrowly scoped questions with corresponding answers. That structure maps almost directly to the fanout queries an AI retrieval system generates from a broader user prompt.
A prose article about CRM software might cover pricing, features, and integrations across paragraphs that flow into each other without explicit question markers. A page with FAQ markup covers the same topics but presents them as discrete questions: “How much does CRM software cost for a 10-person team?” “What CRM features support sales pipeline management?” “Does this CRM integrate with QuickBooks?” Each question is paired with a direct answer that a retrieval system can extract cleanly.
The first version is harder for an AI to match against a fanout query. The retrieval system has to infer where the answer to a specific sub-question lives within the prose. The second version is easier. The questions are explicit, the answers are bounded, and the alignment between sub-question and content is direct.
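To make that matching advantage concrete, here is a toy retrieval sketch. It ranks a page's explicit FAQ questions against one internal sub-query using word overlap, which is a crude stand-in for the embedding similarity a real retrieval system would use; the questions and the scoring function are illustrative, not any platform's actual implementation:

```python
def overlap_score(query: str, question: str) -> float:
    """Jaccard overlap between lowercase word sets -- a crude proxy for
    the embedding similarity a real retrieval system would compute."""
    q = set(query.lower().replace("?", "").split())
    p = set(question.lower().replace("?", "").split())
    return len(q & p) / len(q | p)

# Explicit question headers from a hypothetical FAQ-structured page.
faq_questions = [
    "How much does CRM software cost for a 10-person team?",
    "What CRM features support sales pipeline management?",
    "Does this CRM integrate with QuickBooks?",
]

# One of the internal fanout queries from the earlier CRM example.
fanout_query = "CRM features for sales pipeline management"

# Rank the page's explicit questions against the sub-query.
ranked = sorted(faq_questions,
                key=lambda q: overlap_score(fanout_query, q),
                reverse=True)
print(ranked[0])  # prints "What CRM features support sales pipeline management?"
```

Against a prose article, there are no discrete question strings to rank; the system has to locate the relevant passage inside flowing paragraphs before it can extract anything.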
The implication is that FAQ-structured pages have a structural advantage in AI citation, separate from any direct benefit FAQ schema provides as a retrieval signal. Even setting the schema markup aside, content organized as explicit Q&A pairs maps more cleanly to how AI retrieval systems search for information.
FAQ schema as a comprehension signal for AI
The schema markup itself adds a second layer of value on top of the structural advantage. When a page includes FAQPage schema with properly marked Question and Answer entities, the markup tells any system parsing the page exactly which strings represent questions and which represent answers. There is no inference required. The structure is explicit, the entities are typed, and the relationships between them are unambiguous.
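The FAQPage type and its Question and Answer entities are defined by Schema.org. A sketch of what that markup looks like, assembled in Python and serialized as JSON-LD; the question and answer text is illustrative:

```python
import json

# Schema.org FAQPage structure: each Question entity carries its text
# in "name" and its typed Answer in "acceptedAnswer". The Q&A content
# below is hypothetical example text.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does this CRM integrate with QuickBooks?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. The integration syncs invoices and "
                        "contacts in both directions.",
            },
        },
    ],
}

# Serialized, this belongs in a <script type="application/ld+json">
# tag in the page's HTML.
print(json.dumps(faq_schema, indent=2))
```

Note how little inference the format leaves to the consumer: the question string, the answer string, and the relationship between them are all explicitly typed.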
Whether AI retrieval systems use Schema.org markup directly or simply benefit from the cleaner content structure that schema usage tends to correlate with is a question that lacks public confirmation from OpenAI, Google’s Gemini team, or any other major AI platform. What is clear is that schema markup signals an authoring decision: someone deliberately structured this content as Q&A pairs, which usually means the content actually works as Q&A pairs rather than being repurposed prose.
Google’s own statement that it will continue to use FAQ data to better understand pages includes Gemini and AI Overviews by extension, since both rely on Google’s content understanding layer. Even if competing AI systems do not parse Schema.org directly, the cleaner content structure that schema usage tends to indicate likely makes the page easier for any retrieval system to extract answers from.
Genuine Q&A content versus FAQ markup decoration
The argument for FAQ schema as an AI visibility signal only holds if the underlying content actually works as Q&A. Google’s content guidelines for FAQ markup, which remain in place even after the rich result deprecation, require that the questions and answers appear as visible content on the page, that the questions are written by the site rather than user-submitted, and that the answers are not promotional or repetitive.
These guidelines existed to prevent abuse of the rich result feature. They now serve a different purpose. A page with FAQ schema that follows the guidelines presents genuine Q&A content that AI retrieval systems can extract from. A page with FAQ schema that violates the guidelines (artificial questions, padded answers, content added solely for SERP real estate) does not provide that benefit, because the underlying Q&A content does not actually answer any real user question.
The sites that benefited most from the 2023 rich result restriction were the ones whose FAQ content was actually useful. The same logic applies now. Sites with real Q&A content that aligns with questions users actually ask have content that serves AI retrieval well, with or without the schema markup. Sites with artificial FAQ sections do not benefit from the structure, because the structure does not contain answers that retrieval systems would want to surface.
Sites that should expand FAQ content, not abandon it
For sites currently using FAQ markup, the May 2026 announcement is not a signal to strip the schema. It is a signal to reconsider whether the FAQ content itself is doing useful work. Sites with genuine, well-organized Q&A content should keep the markup and consider expanding FAQ sections to cover more of the specific questions users actually ask about their product, service, or topic.
For sites without FAQ content, the announcement is also not a reason to avoid creating it. The rich result that originally motivated many FAQ pages is gone. The AI retrieval benefit, the content comprehension benefit, and the user experience benefit of having common questions answered clearly all remain. Adding a well-structured FAQ section to a product page, a service page, or a category page now serves AI visibility and user experience, even if it no longer earns SERP dropdown real estate.
The brands building the strongest AI visibility positions tend to share a content pattern: clear questions in headers, direct answers in paragraphs, and topic coverage that aligns with what users actually want to know. FAQ markup is one specific implementation of that pattern, and one that Google has explicitly committed to continuing to use as a comprehension signal. Link building and digital PR build the authority signals that determine whether AI systems trust a page enough to cite it. Q&A content structure determines whether those same systems can extract clean answers from the page once they decide to retrieve it. Both layers feed the citation pipeline.
The May 2026 announcement marked the end of FAQ rich results. The same announcement confirmed that FAQ structured data remains a useful signal for content comprehension. Sites that treat the two facts as one and remove their FAQ markup are making a SERP-feature decision in an environment where the SERP feature was never the most valuable thing the markup was doing.
