SEO, Industry News

ChatGPT Stopped Showing Query Fan-Out Data Since GPT-5.3. Here’s What Still Works.

Rasit

Mar 16, 2026 · 7 min read

The method that SEOs and GEO practitioners have been using to extract fan-out queries from ChatGPT conversations no longer works since the rollout of GPT-5.3. The query data that was previously visible in the conversation JSON through Chrome DevTools is gone, and the tools and browser extensions that relied on scraping that data from the web interface have stopped returning results.

For anyone tracking how ChatGPT searches the web on behalf of users, or trying to understand which sub-queries drive citations and source selection, the change is significant. Fan-out queries are the backbone of how ChatGPT retrieves information, and losing visibility into them affects both manual research and the GEO tools built around that data.

The good news is that alternatives exist. They’re less convenient, less precise, and come with their own limitations, but the data isn’t completely gone.

What Query Fan-Out Is and Why It’s Worth Tracking

When ChatGPT receives a prompt that benefits from web information, it doesn’t just search for the raw user question. It decomposes the prompt into multiple sub-queries, searches the web for each one, collects the results, and synthesizes a response from the combined findings. Google coined the term “query fan-out” for this same technique in AI Overviews, and ChatGPT uses an equivalent process.

The Writesonic study covered in a recent NO-BS post showed that GPT-5.4 generates an average of 8.5 fan-out queries per prompt, using domain restrictions and site: operators to target specific brand websites and validation platforms. GPT-5.3, by comparison, sends roughly one query per prompt. The fan-out architecture is what determines which sources get pulled, which brands get cited, and what information ends up in the response.

Being able to see those sub-queries has been valuable for SEO and content strategy. The queries reveal what ChatGPT actually searches for (as opposed to what the user typed), which domains and page types the model targets, what modifiers it adds (years, “best,” “pricing,” “vs”), and how it clusters information into categories. Losing visibility into that data removes one of the few windows into how AI search actually works behind the interface.
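The decomposition step can be sketched as a toy function. The expansion rules below are illustrative guesses based on the modifiers described above; ChatGPT's actual decomposition logic is not public:

```python
def fan_out(prompt: str, year: int = 2026) -> list[str]:
    """Toy expansion mimicking the kinds of sub-queries described above.
    The real model's logic is not public; these rules are illustrative only."""
    base = prompt.strip().lower()
    return [
        base,                   # the raw intent
        f"best {base}",         # "best" modifier
        f"{base} pricing",      # commercial modifier
        f"{base} {year}",       # recency modifier
        f"site:g2.com {base}",  # domain-restricted query to a review platform
    ]

for query in fan_out("CRM for small business"):
    print(query)
```

Each sub-query then gets searched independently, and the combined results feed the final answer.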

What Changed with GPT-5.3

Previously, extracting fan-out queries from ChatGPT was straightforward. Open Chrome DevTools, navigate to the Network tab, filter for the conversation endpoint, and look for the search_query or search_model_queries field in the JSON response. Several Chrome extensions and bookmarklets automated the process, including tools from The SEO Pub, Quolity, and Keywords Everywhere.
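Before GPT-5.3, a small script could replicate what those extensions did: walk a saved conversation JSON and collect any search_query or search_model_queries fields. The payload below is a simplified stand-in, since the real structure varied across releases:

```python
def find_query_fields(node, found=None):
    """Recursively collect the query fields named in the article,
    wherever they appear in a conversation payload."""
    keys = ("search_query", "search_model_queries")
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key in keys:
                found.extend(value if isinstance(value, list) else [value])
            else:
                find_query_fields(value, found)
    elif isinstance(node, list):
        for item in node:
            find_query_fields(item, found)
    return found

# Simplified stand-in for a pre-GPT-5.3 conversation payload
payload = {
    "mapping": {
        "msg-1": {
            "metadata": {
                "search_model_queries": ["best crm 2026", "crm pricing comparison"]
            }
        }
    }
}

print(find_query_fields(payload))  # ['best crm 2026', 'crm pricing comparison']
```

For GPT-5.3 conversations, the same walk now comes back empty: the fields simply aren't in the payload anymore.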

Since GPT-5.3 became the default model, the query data no longer appears in the conversation JSON visible through the web interface. The ChatGPT Conversation Analyzer, a widely used tool for extracting this data, now shows an empty “Queries” column for GPT-5.3 conversations; on GPT-5.2, the data is still there.

SEO Südwest confirmed the change and noted that the ChatGPT Chromium Inspector output now lacks the query fields entirely for GPT-5.3 sessions. The data hasn’t been removed from OpenAI’s systems, but it’s no longer exposed through the web interface’s conversation payload.

The API Alternative

Fan-out query data is still accessible through OpenAI’s API. Chris Long published a Python script that queries the API directly using the Responses endpoint and extracts the fan-out data from the response. Jérôme Salomon independently confirmed the same approach works.

The script uses the OpenAI Python client, sends a prompt to the GPT-5.4 model with web search tools enabled, and parses the response for search queries, cited sources, and UTM data. The output shows every sub-query the model generated, the domains it targeted, and the citations it included.
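A sketch of the parsing step looks something like the following. This is not Chris Long's actual script; the item types and field names (web_search_call, action.query, url_citation) follow the general shape of OpenAI's Responses API output, but exact names vary across API versions and should be checked against a live response:

```python
from urllib.parse import parse_qs, urlparse

def extract_fanout(output_items):
    """Pull sub-queries and cited sources out of a Responses-API-style output
    list. Field names here are assumptions; verify against a real response."""
    queries, citations = [], []
    for item in output_items:
        if item.get("type") == "web_search_call":
            query = item.get("action", {}).get("query")
            if query:
                queries.append(query)
        elif item.get("type") == "message":
            for part in item.get("content", []):
                for ann in part.get("annotations", []):
                    if ann.get("type") == "url_citation":
                        url = ann["url"]
                        citations.append({
                            "url": url,
                            "domain": urlparse(url).netloc,
                            "utm_source": parse_qs(urlparse(url).query).get("utm_source", [None])[0],
                        })
    return queries, citations

# Mocked output list standing in for a real API response's output items
sample = [
    {"type": "web_search_call", "action": {"query": "best crm small business 2026"}},
    {"type": "web_search_call", "action": {"query": "site:g2.com crm reviews"}},
    {"type": "message", "content": [{"annotations": [
        {"type": "url_citation", "url": "https://example.com/crm?utm_source=chatgpt.com"},
    ]}]},
]

queries, citations = extract_fanout(sample)
print(queries)
print(citations[0]["domain"], citations[0]["utm_source"])
```

In a live script, the mocked list would be replaced by the output of a Responses API call with web search tools enabled.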

There’s a meaningful limitation to this approach, though. The responses generated through the API don’t necessarily match the ones generated through the ChatGPT web interface. Different system prompts apply in each environment, and those system prompts influence how the model searches, what it prioritizes, and how it structures its answers. The API method gives access to fan-out data, but the fan-out queries for the same prompt may differ between the API and the web interface.

For research purposes and general pattern analysis, the API approach is still useful. For precise tracking of what specific ChatGPT web users see when they ask a given question, the API data is only an approximation.

CDN-Level Tracking as a Second Option

Every time ChatGPT cites a website, it generates a ping to that site. If a brand has CDN-level logging enabled, these citation events can be captured as they happen. The approach doesn’t reveal the exact fan-out queries that ChatGPT used, but it does show which pages on a site are being cited, when the citations happen, and (through the utm_source=chatgpt.com parameter) that the traffic originated from ChatGPT.
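In practice, that capture can be as simple as filtering access logs for the parameter. A minimal sketch, assuming combined-log-format lines (the field index would need adjusting for a given CDN's actual log format):

```python
from collections import Counter
from urllib.parse import parse_qs, urlparse

def chatgpt_citations(log_lines):
    """Count requests per path whose query string carries utm_source=chatgpt.com."""
    hits = Counter()
    for line in log_lines:
        # In combined log format the request URL is the 7th whitespace-separated
        # field; adjust for your CDN's log layout.
        url = line.split()[6]
        parsed = urlparse(url)
        if parse_qs(parsed.query).get("utm_source") == ["chatgpt.com"]:
            hits[parsed.path] += 1
    return hits

logs = [
    '203.0.113.5 - - [16/Mar/2026:10:01:22 +0000] "GET /pricing?utm_source=chatgpt.com HTTP/1.1" 200 5123',
    '203.0.113.9 - - [16/Mar/2026:10:02:10 +0000] "GET /blog/guide HTTP/1.1" 200 8120',
    '198.51.100.2 - - [16/Mar/2026:10:05:40 +0000] "GET /pricing?utm_source=chatgpt.com HTTP/1.1" 200 5123',
]

print(chatgpt_citations(logs))  # Counter({'/pricing': 2})
```

The hit counts map directly to "which of my pages is ChatGPT citing, and how often," without any visibility into the queries behind them.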

For brands focused on tracking their own citation visibility rather than reverse-engineering the full fan-out process, CDN tracking provides usable signal. It answers “are my pages getting cited by ChatGPT” even if it can’t answer “what queries led to those citations.”

The Impact on GEO Tools

The change creates a problem for the growing category of Generative Engine Optimization tools that built their tracking around scraping fan-out data from ChatGPT’s web interface. Any tool that extracted query data from conversation payloads, whether through browser extensions, headless browser scripts, or DevTools automation, is now working with incomplete data for GPT-5.3 and newer models.

SEO Südwest’s assessment is direct: many GEO tools, particularly hastily built ones, will need to adapt their tracking methods. And even after switching to API-based data collection, the gap between what the API returns and what users actually see in the ChatGPT web interface remains a fundamental accuracy issue.

Tools that already use the OpenAI API for fan-out extraction are less affected, but they still face the system prompt discrepancy. The API environment and the web interface environment produce different outputs for the same prompts because they operate under different system instructions.

What Still Works for Understanding ChatGPT’s Search Behavior

Despite the reduced visibility, several approaches still provide useful data about how ChatGPT searches and cites.

The OpenAI API with web search tools enabled returns fan-out queries, web results, and citations. The data may not perfectly match the web interface, but the patterns (which types of queries trigger which types of sub-queries, which domains get targeted, which page types get cited) are still informative for content strategy.

CDN-level citation tracking shows which pages are being cited by ChatGPT in real time, even without visibility into the queries that triggered those citations.

The utm_source=chatgpt.com parameter on citation URLs allows GA4 tracking of ChatGPT referral traffic, broken down by landing page, which shows which content is earning clicks from ChatGPT citations.

Manual prompt testing across both GPT-5.3 and GPT-5.4 still reveals citation patterns, source preferences, and the types of content each model favors, even without access to the underlying fan-out queries. Running the same prompt across both models and comparing the cited sources (as Writesonic did in their 50-prompt study) provides strategic insight regardless of whether the sub-queries are visible.
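The comparison itself is just a set operation over cited domains. A minimal sketch, with the domain lists below standing in for whatever each model actually cites in a test run:

```python
def compare_citations(domains_a, domains_b):
    """Diff the domains two models cited for the same prompt."""
    a, b = set(domains_a), set(domains_b)
    return {
        "both": sorted(a & b),
        "only_a": sorted(a - b),
        "only_b": sorted(b - a),
    }

# Hypothetical citation lists for one prompt run on each model
gpt_53 = ["g2.com", "capterra.com", "vendor.com"]
gpt_54 = ["g2.com", "reddit.com", "vendor.com"]

print(compare_citations(gpt_53, gpt_54))
```

Run across a prompt set, the "only" buckets surface which sources each model favors, which is the pattern the Writesonic study was built on.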

What to Take Away from the Change

The fan-out visibility loss is a reminder that building strategy around the internals of a third-party platform carries inherent risk. OpenAI didn’t announce the change or explain the reasoning. The data simply stopped appearing. Any workflow or tool that depended on scraping conversation payloads from the web interface broke without warning.

For link-building and digital PR strategy, the practical implications are more about monitoring than about the underlying approach. The content principles that drive AI citations haven’t changed: clear product information, transparent pricing, authoritative third-party coverage, and strong review platform profiles still determine whether a brand gets cited. What’s changed is the ability to see exactly which sub-queries led to a specific citation, which makes attribution harder but doesn’t change the playbook.

The API route remains open for now. Whether OpenAI keeps fan-out data accessible through the API long-term is an open question. For anyone building workflows around this data, the lesson from GPT-5.3 is to avoid single-source dependencies and build tracking that works across multiple signals rather than relying entirely on one extraction method.