Chrome Skills Moves AI Visibility Into the Browser
On April 14, Google launched Skills in Chrome, a feature inside Gemini in Chrome that lets users save their most-used AI prompts and re-run them with one click. The rollout begins on Mac, Windows, and ChromeOS for Chrome users whose browser language is set to U.S. English, with saved Skills syncing across signed-in desktops. Hafsah Ismail, a Product Manager on Chrome, announced the feature on Google’s Keyword blog.
The news framing centers on productivity. Skills lets users save a prompt like “scan this for ingredient substitutions to make the recipe vegan” and re-run it across any recipe page without retyping. For anyone working on link building, digital PR, or AI visibility, the productivity framing misses the more consequential story. A new surface just opened where AI decides which pages get read, and the decision happens inside the browser after the user has already clicked through.
What Skills does, mechanically
A Skill has three parts: a saved prompt, a trigger, and an execution scope.
The saved prompt holds an instruction the user has already written in Gemini and decided to reuse. Skills can be saved from chat history with one click, and Gemini automatically prompts users to save frequently used prompts. The trigger is a forward slash or plus sign typed inside the Gemini side panel, which opens a menu of saved Skills. The execution scope covers the active browser tab plus any additional tabs the user adds with the plus button.
When a Skill runs, Gemini reads the content of the selected tabs, applies the prompt against that content, and returns a synthesized output inside the Gemini panel. The user never needs to scroll any of the underlying pages. The output can take the shape of a summary, a comparison, a rewritten version, a filtered extraction, or whatever the prompt instructs. For actions with consequences outside the browser, such as calendar events or email sends, Skills asks for confirmation and inherits the layered protections and safeguards Gemini in Chrome already applies to standard prompts.
Skills replace the 2025 Extensions architecture, which was limited to Google’s own properties like Gmail and Drive. Skills run on any website, which means a product comparison Skill works on independent e-commerce sites, a PDF summarization Skill works on any PDF opened in the browser, and a recipe transformation Skill works on independent food blogs. The universality matters for anyone producing web content, because any page a Skill can reach becomes content the AI layer can consume.
Google’s Skills Library at chrome://skills/browse ships with prebuilt Skills across Learning, Research, Shopping, Writing, and Health & Wellness. Users can save any of these with one click, customize the underlying prompt, or build their own from scratch. The library functions as editorial infrastructure: Google is telling users what to automate first.
Underneath the user-facing mechanics, Skills integrates with Agent Mode in Gemini 3.1 Pro, which means a Skill can be called autonomously by an agent completing a multi-step goal. A user asking Gemini to “plan a weekend trip” might never click “run comparison Skill” directly; Agent Mode selects and runs Skills based on the broader goal. Content consumed by an autonomous agent never reaches the user’s eyes directly.
The attention economy inside the browser
Traditional SEO works on a pipeline: user issues query, search engine returns pages, user clicks, user reads, user converts. AI Overviews already compressed that pipeline by answering queries before the click. Skills compresses what happens after the click.
When a user runs a comparison Skill across five product tabs, each of those five pages gets fetched and parsed. The analytics system counts five page views, and every page contributed to the output the user ended up acting on, but the user read none of them, scrolled past no CTAs, saw no related content modules, and clicked no internal links. The pages did real work and got zero credit for it.
The pattern has been building since AI Overviews rolled out last year. Impression-based measurement keeps registering activity, engagement-based measurement keeps showing that activity drifting away from the page, and conversion-based measurement keeps producing flat results from pages that used to convert. The explanation comes down to the AI layer sitting between the page and the user on both sides of the click. Users interact with the layer, not the page, and metrics calibrated to page-level interaction register the absence without explaining it.
How the AI layer picks which source to trust
When a Skill runs across multiple tabs, Gemini has to decide how to weight content from each tab. Google has not published ranking signals for cross-tab synthesis, but the observable behavior suggests several inputs at work.
Page authority, as measured by the signals Google Search already uses, remains one input. A Skill running across three product pages from different merchants weights authoritative publishers differently from random blog posts. The quality signals that determine SERP placement influence which content the AI leans on when pages get synthesized.
Entity recognition matters independently. Gemini’s knowledge graph knows which brands, products, and authors are real entities. Content from recognized entities carries more weight than content from unrecognized ones. A brand that is not a known entity to Gemini starts from a disadvantage regardless of how well the page is written.
Recency matters for queries where up-to-date information determines the answer. A recently updated product page with current specs beats an outdated one, and fresh editorial coverage beats coverage from three years ago when the topic has moved on.
Structured data gets read by the AI layer the same way it gets read by search crawlers. Product schema identifies specifications cleanly, Recipe schema identifies ingredients, FAQ schema identifies question-answer pairs. A page with well-implemented schema is easier to extract from than a page where all information sits inside unstructured prose.
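To make the extraction gap concrete, here is a minimal sketch of how a programmatic reader pulls values from a page. The page, product name, and numbers are invented for illustration; the point is that the JSON-LD block yields exact values while the prose sentence leaves the model guessing.

```python
import json
import re

# A toy page: vague prose plus a precise JSON-LD block (values are invented).
HTML = """
<html><body>
<p>Our flagship mixer is powerful and weighs very little for its class.</p>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Acme Stand Mixer",
 "weight": {"@type": "QuantitativeValue", "value": 4.2, "unitCode": "KGM"}}
</script>
</body></html>
"""

def extract_jsonld(html: str) -> list:
    """Pull every JSON-LD block out of a page, the way a parser can."""
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, html, re.DOTALL)]

products = extract_jsonld(HTML)
print(products[0]["name"])             # exact product name, no guessing
print(products[0]["weight"]["value"])  # exact weight; the prose never states it
```

The prose paragraph contains no recoverable weight at all; the structured block hands it over losslessly, which is the whole argument for schema in an extraction-driven environment.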
External validation comes into play when the AI has to choose between competing claims. A brand cited in authoritative publications, backed by reviews from credible sources, and linked to by industry media carries more weight than one without those signals. Gemini was trained on the open web, and the publications that signal authority to a search engine signal authority to a language model for the same underlying reasons.
What the Skills Library tells us about user intent
The prebuilt Skills in the library point at specific high-intent user task categories, and each category maps to a content strategy question.
Learning Skills automate concept explanation, which rewards educational content structured cleanly enough for a model to extract a correct explanation rather than a confused one. Research Skills handle source comparison and fact-checking, favoring pages that cite primary sources and structure claims with explicit attribution. In Shopping, where users compare specs across multiple tabs, structured product data outperforms prose marketing copy. Writing Skills pull from source material to generate drafts, which means content written in a brand’s authentic voice has a narrow window to get quoted directly before the user receives a generated version. Health & Wellness Skills extract nutritional and medical information, a category where credibility signals and authoritative publication matter more than clever copy.
The Library also works as a product signal. Google is telling users, through the defaults it ships, that these categories are where AI automation will concentrate first. Content teams working in any of these categories should assume their pages will be accessed through Skills before they are accessed through traditional organic search within two or three product cycles.
Why link building and digital PR matter more, not less
A common reading of the AI-layer transition concludes that SEO is dead, links do not matter, and brands should give up on traditional tactics. The reading misses how language models actually decide which sources to trust.
Gemini, like every other major LLM, was trained on the open web and continues to rely on web data during inference through retrieval-augmented generation. The signals that determine what the web says about a brand (backlinks, mentions in reputable publications, editorial coverage) become the signals that determine what AI answers say about that brand. Every authoritative mention of a brand in a trusted publication adds weight to that brand’s entity recognition score in the underlying knowledge base.
Link building, in the narrow sense of acquiring followed links on authoritative domains, still produces the same search visibility benefits it always has. It now also produces a second-order benefit: seeding the training and grounding data that AI answers draw from. Placements on indexable domains with strong editorial standards contribute to the pool of citations that Gemini, ChatGPT, Perplexity, and Claude all lean on when asked to assemble an answer.
Digital PR does similar work at a different frequency. Earning coverage in a tier-one publication produces a citation that gets indexed, crawled, included in training updates, and retrieved by grounding systems during live queries. A single mention in the Wall Street Journal, TechCrunch, or a relevant industry trade publication has multi-year compounding value now in a way it did not when search was one product. The compounding happens because the citation gets reused across every AI layer that touches related queries, often for years after publication.
Guest posting on reputable domains does a third thing: it seeds specific claims and framings into publications that models treat as source material. The content of a guest post becomes extractable material, not just a backlink. When a model summarizes a topic, the framings present in authoritative source pages influence the summary directly. Brands producing guest content on credible publications shape how AI systems describe their category, not just how AI systems rank their domain.
Link insertions into existing authoritative content attach a brand to pages that have already earned trust, rather than waiting for new content to earn it. In an environment where AI layers weight established pages more heavily than fresh ones, inserting relevant brand references into pages that already rank and get cited compresses the time required to build visibility.
Content structure for pages that get parsed
Even with strong external signals, the content on the page itself determines what a Skill extracts. A page cited by every major publication will still lose to a page with cleaner structure if the Skill is extracting specific facts rather than evaluating general authority.
Entity consistency across every page a brand owns does more work than it used to. An AI layer assembling an answer about a company needs to match information on the page to a known entity, and inconsistent naming conventions, varying author attributions, or missing structured data leave room for misattribution. A page referring to the brand as “Acme Inc” in one place, “ACME” in another, and “Acme Corporation” in a third looks like three different entities to a model reading programmatically.
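A quick way to see the misattribution risk is to audit a site for naming variants the way a programmatic reader would. This sketch uses invented page texts and brand variants; a real audit would crawl the live site.

```python
import re
from collections import Counter

# Hypothetical page texts; a real audit would crawl the site.
pages = {
    "/about":    "Acme Inc builds industrial mixers.",
    "/products": "Every ACME mixer ships with a five-year warranty.",
    "/press":    "Acme Corporation announced a new plant today.",
}

# Surface forms a model would treat as distinct strings.
variant_pattern = re.compile(r"\bACME\b|\bAcme (?:Inc|Corporation)\b|\bAcme\b")

counts = Counter()
for url, text in pages.items():
    for match in variant_pattern.findall(text):
        counts[match] += 1

print(counts)  # three different surface forms across three pages
```

Three pages, three surface forms: exactly the situation where an entity matcher has to guess, and guessing is where misattribution starts.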
Claims placed near the top of a section with supporting detail below get extracted more cleanly than claims buried in paragraph three. The extraction behavior favors pages that follow journalistic inverted-pyramid structure: key fact first, elaboration after. Pages written with marketing-style build-up (background, context, setup, reveal) get summarized rather than quoted, because the model has to guess which element was the key point.
Structured comparisons using tables get parsed as comparisons. The same information in prose gets summarized into a paragraph rather than presented as the side-by-side the user asked for. Product pages that use clean specification tables beat product pages describing features in marketing copy when a Shopping Skill is running.
Schema markup (Product, Recipe, Article, FAQ, HowTo, Review) does machine-readable work that prose cannot. A Shopping Skill extracting features from a product page with Product schema gets exact values. The same Skill on a page without Product schema has to parse the HTML and make best guesses, which means more information loss between the page and the output.
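As a sketch of what that machine-readable work looks like in practice, here is a Product JSON-LD block built and serialized for a page head. The product, price, and specification values are invented; the property names (`offers`, `additionalProperty`, and so on) follow the schema.org vocabulary.

```python
import json

# Illustrative values; a real page would pull these from the product catalog.
product_schema = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Stand Mixer 500",
    "brand": {"@type": "Brand", "name": "Acme"},
    "offers": {
        "@type": "Offer",
        "price": "249.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "Motor power", "value": "500 W"},
        {"@type": "PropertyValue", "name": "Bowl capacity", "value": "4.8 L"},
    ],
}

# Embed in the page head as a JSON-LD script block.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(product_schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

Every specification a Shopping Skill might want is an exact key-value pair here; the equivalent marketing paragraph would force the extractor to parse prose and lose precision along the way.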
Internal linking with consistent anchor text signals topical authority to crawlers and, by extension, to the knowledge structures models build from web data. Generic anchor text like “learn more,” “click here,” or “this page” wastes that signal, while anchor text aligned with the target page’s topic reinforces the association between URL and topic in the model’s internal representation of the site.
What to measure when pages are inputs, not destinations
Traditional page analytics degrade in an AI-layer world. Time on page shortens because users spend their time in the Gemini panel; bounce rate rises because users open tabs, run Skills, and close tabs without interacting with any on-page element; conversion rate flattens because users act on synthesized output rather than on the page CTA. The on-page metrics keep working the way they always did, while the on-page behavior the metrics are calibrated to has moved elsewhere.
New measurement approaches track different signals. Brand-mention monitoring across AI answer engines (Perplexity, ChatGPT, Gemini, Claude) reveals whether a brand gets surfaced in generative responses. Citation tracking, which checks which sources get linked from AI answer pages, reveals which content assets earn their way into grounding data. Entity presence checking, which tests whether a brand returns correct information when queried directly in an LLM, reveals whether the brand has achieved entity status in the underlying model.
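A minimal sketch of the first two signals, using invented answer texts standing in for logged responses from each engine (in practice these would come from a monitoring tool or saved query runs):

```python
import re

# Hypothetical answer texts captured from different AI answer engines.
answers = {
    "perplexity": "Acme's mixer leads the category (source: example.com/review).",
    "chatgpt":    "Popular options include KitchenPro and Acme.",
    "gemini":     "The best-reviewed stand mixer this year is the KitchenPro 9.",
}

BRAND = "Acme"

# Brand-mention monitoring: does each engine surface the brand at all?
mentions = {engine: bool(re.search(rf"\b{BRAND}\b", text))
            for engine, text in answers.items()}

# Citation tracking: which URLs get cited across the captured answers?
cited_urls = re.findall(r"[a-z0-9.-]+\.[a-z]{2,}/[^\s)]+",
                        " ".join(answers.values()))

print(mentions)
print(cited_urls)
```

Run over a standing set of category queries, the mention map shows where a brand is invisible and the citation list shows which assets are actually earning their way into answers.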
Traffic quality assessment now has to account for the portion of page views coming from AI layers fetching content on behalf of users. Bot detection systems may or may not classify these as bots, and the definitions are still unsettled. A high bounce rate from an AI referer may mean the page performed its function correctly inside an AI workflow, rather than the user disliking the page. The measurement stack needs new categories for traffic that is neither clearly human nor clearly automated in the traditional sense.
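One starting point is bucketing requests by known AI crawler user-agent tokens. The list below is illustrative and goes stale quickly (check each vendor’s published crawler documentation), and it only catches server-side fetches; content read by Gemini in Chrome inside the user’s own browser carries a normal browser user agent and will not show up here at all.

```python
# Known AI crawler user-agent substrings (a partial, fast-aging list;
# consult each vendor's crawler documentation for current tokens).
AI_AGENT_TOKENS = ("GPTBot", "OAI-SearchBot", "ClaudeBot",
                   "PerplexityBot", "CCBot")

def classify_hit(user_agent: str) -> str:
    """Bucket a request as ai_agent, browser, or unknown for reporting."""
    # Check AI tokens first: crawler UAs often start with "Mozilla/" too.
    if any(token in user_agent for token in AI_AGENT_TOKENS):
        return "ai_agent"
    if "Mozilla/" in user_agent:
        return "browser"
    return "unknown"

print(classify_hit("Mozilla/5.0 (compatible; GPTBot/1.1; "
                   "+https://openai.com/gptbot)"))          # ai_agent
print(classify_hit("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "Chrome/125.0"))                          # browser
print(classify_hit("curl/8.4.0"))                            # unknown
```

The `ai_agent` bucket then gets its own reporting lane, where a high bounce rate reads as successful extraction rather than a failed visit.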
The direction of travel
Chrome still routes a large share of web traffic, and every new AI feature inside it moves reading further away from the page. Atlas from OpenAI, Comet from Perplexity, Dia from The Browser Company, and the other AI-native browsers will each add their own version of cross-tab execution, and they will converge on similar user behaviors because the underlying product logic is the same: users want AI to handle the reading, and the browser is the place where that happens.
Content strategy built around the assumption that users scroll, read, and click is building for a shrinking share of traffic. Content strategy built around the assumption that AI layers will extract, synthesize, and cite is building for the share that is growing. The practical work is recognizing that the same pages often need to perform in both environments, and that the signals supporting performance in the AI layer (authority, structure, entity consistency, machine-readable data) are not in tension with the signals supporting performance for human readers. Clean structure, entity consistency, and authoritative coverage help both audiences equally.
Attention is moving from the page to the AI layer, and the movement does not read as a temporary product behavior Google might roll back. It matches the direction of every adjacent product Google has released in the past two years, and it matches the independent product decisions made by every browser competitor.
Link building and digital PR retain their value because they produce signals the AI layer reads the same way search engines read them, content structure retains its value because well-structured pages get parsed more cleanly, and entity consistency gains value as models need to know who a brand is before they cite it. The tactics that hold up are the ones that earn an authoritative place in the pool of content AI layers treat as trustworthy, and the pages that hold up are the ones that survive the scrutiny of both human readers and programmatic extraction.
Skills represents one implementation of that pattern. Others will follow, and the pattern will keep showing up in different products through the rest of the decade. The audience for your page now includes models, and the practical work is making sure the page serves both audiences without compromising either.
