The AEO Practitioner’s Playbook for 2026: What 30 Million Citations Reveal About Getting Your Brand Into AI Answers

On April 8, BrightEdge unveiled data that should instantly grab every marketer’s attention. AI agent requests have reached 88% of human organic search volume, and by the end of this year, AI-driven queries are projected to surpass human-driven search entirely. Meanwhile, AI Overviews now appear on 48% of all Google search queries, up 58% from December 2025. Google’s AI Mode has crossed 75 million daily active users across more than 200 countries. ChatGPT is processing 2.5 billion prompts a day.

Here’s what’s quietly devastating: only 21% of companies have a strategy for AI search agents. The other 79% are still optimizing for a version of search that’s shrinking by the quarter. They’re pouring budget into ranking for blue links while the audience is increasingly getting answers from machines that never click.

I’ve spent the last several months buried in the data on this shift, from Rand Fishkin’s 2,961-run study on AI brand recommendations to Ahrefs’ analysis of 75,000 brands to Kevin Indig’s audit of 1.2 million ChatGPT responses. The picture that emerges is clear, specific, and, for most brands, urgently actionable. This isn’t a think-piece about where things are headed; it’s a practitioner’s guide to what’s working right now, grounded in data that’s mostly from the last 90 days.


The Great Decoupling: Why Your Google Rankings Don’t Predict AI Visibility

If you’ve been in SEO for any length of time, your instinct is to assume that ranking well in Google means you’ll show up in AI answers too. The data says otherwise, and the gap is wider than most people realize.

According to Ahrefs, 80% of URLs cited by large language models don’t rank in Google’s top 100 for the query that prompted the citation. Only 12% of the URLs cited by ChatGPT, Perplexity, and Microsoft Copilot rank in Google’s top 10. Profound’s analysis of 30 million citations found that only 12% of cited sources overlap across ChatGPT, Perplexity, and Google AI Overviews. Traditional SEO ranking factors explain just 4–7% of AI citation outcomes.

I call this the Great Decoupling because it describes exactly what’s happening. The engine that decides who gets cited in AI answers is running on different fuel than the engine that decides who ranks on page one of Google. If your entire visibility strategy is built around organic rankings, you’re optimizing for one game while the other one quietly eats your market share.

[Figure: two trend lines, one red and one green, showing AI search query volume growing past declining organic search]

The traffic data underscores this. Seer Interactive found that when an AI Overview appears on a search result, organic click-through rates drop 61%. Paid CTR drops 68%. And 93% of searches that enter Google’s AI Mode end with zero clicks to any external website. For publishers, the picture is grim: Chartbeat data shows Google search traffic to 2,500+ publisher sites decreased 33% globally between November 2024 and November 2025.

But here’s the part most people miss. AI traffic, while still a small fraction of total volume, converts at dramatically higher rates. Ahrefs reported that AI visitors made up 0.5% of their total traffic but drove 12.1% of signups. That’s a 23x conversion advantage. Semrush found AI visitors worth 4.4x traditional organic, and Adobe Analytics reported AI conversions running 31% higher during the 2025 holiday season. Getting fewer visitors who convert at many times the rate isn’t a loss; it’s a channel shift you should be leaning into.


What Actually Drives AI Citations: The Data-Backed Hierarchy

So if traditional SEO signals aren’t the primary driver, what is? This is where the research gets genuinely useful for practitioners. Multiple large-scale studies converge on a hierarchy that looks quite different from the ranking factors we’ve optimized around for two decades.

1. Domain Authority and Off-Site Presence Still Matter Most

SE Ranking’s study of 2.3 million pages found that domain authority is the single strongest predictor of AI citations, with a SHAP value of 0.63 (SHAP scores measure how much each factor contributes to a model’s prediction). Sites with 32,000+ referring domains are 3.5x more likely to be cited by ChatGPT, and sites with 350,000+ referring domains average 8.4 citations per query. Domain authority is still built through links, but the way it translates into AI visibility is less about individual page rankings and more about overall brand credibility signals.

2. YouTube Mentions Beat Backlinks

This finding surprised me. Ahrefs’ study of 75,000 brands found that YouTube mentions show the strongest correlation with AI visibility at roughly 0.737, ahead of branded web mentions (0.66–0.71) and 3.4x stronger than backlinks (0.218). It’s not that backlinks don’t matter; it’s that the signal for AI systems is shifting from "who links to your content" to "who’s talking about your brand," and YouTube is the single biggest platform for that signal.

3. Reddit, Quora, and Review Platform Presence

SE Ranking found that domains with 10 million or more Reddit mentions average 7 citations versus 1.8 for those with minimal activity. Reddit ranks number one by citation count across all AI responses. Domains listed on Trustpilot, G2, Capterra, Sitejabber, and Yelp have 3x higher citation chances. The community-driven, user-generated web that Google has increasingly surfaced in traditional search is the same web that AI systems lean on hardest for brand validation.

4. Earned Media Distribution

Stacker’s research found that earned media stories see a 239% median increase in AI citations compared to brand-owned content. That’s consistent with a broader pattern: 85% of AI citations come from third-party pages, not brand-owned domains. Brands are 6.5x more likely to be cited through third-party sources than through their own sites. Your PR strategy just became a search strategy, whether your PR team realizes it or not.


Content Structure Patterns That Get Cited

Beyond the domain-level signals, the research is increasingly clear about what AI systems look for at the page and content level. Kevin Indig’s analysis of 1.2 million ChatGPT responses (published in his Growth Memo newsletter, January 2026) provides the most granular picture I’ve seen.

Lead with the direct answer. 44.2% of all LLM citations come from the first 30% of a page’s text. That means you need the direct, extractable answer in your first 40–60 words. Not a throat-clearing introduction, not a definition of terms. The answer. AI systems are scanning your content top-down and pulling from the front.

Use conversational Q&A structure. Cited content is 2x more likely to include a question mark, and 78.4% of question-tied citations came from headings. AI treats H2 tags as prompts and the following paragraph as the answer. Structure your content accordingly: the H2 is the question a user would type into ChatGPT, and the first paragraph beneath it is the answer you want cited.
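To illustrate the H2-as-question idea, here’s a rough heuristic check of my own (not from Indig’s study) that flags markdown H2 headings not phrased as questions:

```python
import re

def audit_h2_questions(markdown_text):
    """Flag H2 headings that aren't phrased as questions.

    Heuristic only: any '## ' heading that doesn't end in '?' is a
    candidate for rewriting into the question a user would actually
    type into ChatGPT.
    """
    findings = []
    for line in markdown_text.splitlines():
        match = re.match(r"^##\s+(.*)$", line)
        if match:
            heading = match.group(1).strip()
            findings.append((heading, heading.endswith("?")))
    return findings

page = """## What is answer engine optimization?
AEO is the practice of structuring content so AI systems can cite it.

## Our company history
We were founded in 2004.
"""

for heading, is_question in audit_h2_questions(page):
    status = "OK" if is_question else "consider rephrasing as a question"
    print(f"{heading!r}: {status}")
```

The paragraph immediately under each question-style H2 is what you want extracted, so keep it declarative and self-contained.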

Pack in statistics and named entities. The Princeton GEO study found that content with statistics improves AI visibility by up to 40%. Entity-rich text averages 20.6% proper nouns versus 5–8% in typical text. Replace vague qualifiers like "significantly more" with actual numbers. Replace "a major company" with the company’s actual name. The specificity is the signal.
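As a crude way to see entity density in practice, here’s a heuristic sketch (capitalized mid-sentence words and numeric tokens as a stand-in for real named-entity recognition) comparing a vague sentence to a specific one:

```python
import re

def entity_density(text):
    """Rough proxy for entity richness: share of tokens that are
    mid-sentence capitalized words (a crude proper-noun heuristic)
    or numeric tokens. Real NER would be more accurate."""
    token_pattern = r"[A-Za-z][\w'-]*|\d[\d,.%]*"
    tokens = re.findall(token_pattern, text)
    if not tokens:
        return 0.0
    entity_like = 0
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = re.findall(token_pattern, sentence)
        for i, word in enumerate(words):
            if word[0].isdigit():
                entity_like += 1          # statistics, dates, percentages
            elif i > 0 and word[0].isupper():
                entity_like += 1          # skip sentence-initial capitals
    return entity_like / len(tokens)

vague = "A major company saw significantly more engagement last year."
specific = "Walmart saw ChatGPT drive 20% of referral traffic in August 2025."
print(f"vague: {entity_density(vague):.2f}, specific: {entity_density(specific):.2f}")
```

The vague sentence scores zero; the specific one scores well above it, which is exactly the rewrite direction the research points to.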

Write declaratively. Declarative opening sentences ("X is Y") drive a 14% increase in citation rates according to Indig’s research. Pages with hedging language see lower rates. AI systems are looking for content that states things with confidence; if your content reads like it’s apologizing for having an opinion, it gets passed over.

Go deep. Pages with 10–15 H2 sections and 5,000–7,500 words correlate with higher citation rates. Pages above 20,000 characters get 4.3x more citations. This isn’t about word count for its own sake; it’s about comprehensive coverage. AI systems prefer sources that address a topic from multiple angles because those sources are more useful for synthesizing answers across a range of related queries.

Keep it fresh. ChatGPT cites URLs that are on average 393 days newer than the pages Google ranks for the same queries. 50% of Perplexity’s citations come from 2025 content alone. Companies with high AI visibility tend to update cornerstone content on a quarterly cadence. If your best content is more than 18 months old and hasn’t been refreshed, it’s aging out of AI answers regardless of how well it ranks on Google.


A Quick Word on What’s Not Working

Two tactics that keep getting recommended in AI search guides deserve scrutiny.

llms.txt is a non-factor. SE Ranking analyzed 300,000 domains and found no statistically significant correlation between llms.txt implementation and AI citations. Eight of nine test sites saw no measurable traffic change. Google’s Gary Illyes confirmed Google doesn’t support it. It costs almost nothing to implement, so there’s no harm in adding one, but don’t treat it as a lever. It’s wallpaper.

Keyword stuffing actually hurts. The Princeton GEO study found that keyword stuffing decreased AI visibility by 10%. Meanwhile, adding citations to your content boosted visibility by 30–40%, expert quotes lifted it 41%, and statistics raised it 30%. The dominant SEO playbook of the last 20 years literally inverts in AI search. Density doesn’t help; substance does.


The Rand Fishkin Wrinkle: AI Recommendations Are Volatile, But Frequency Is Real

Before you build a tracking dashboard and declare victory, you need to understand SparkToro’s January 2026 study. Rand Fishkin and his team had 600 volunteers run 12 prompts through ChatGPT, Claude, and Google AI Overviews a combined 2,961 times. They found less than a 1-in-100 chance that ChatGPT will recommend the same list of brands twice for the same prompt. The chance of getting the same list in the same order was roughly 1 in 1,000.

AI tools recommended defunct businesses. They surfaced non-existent TikTok accounts. Fishkin’s conclusion was blunt: rank tracking in AI search isn’t a useful concept. There’s no stable "position" to track.

But the study’s second finding matters just as much. Frequency of appearance across many runs IS statistically meaningful. The brands that show up most often across hundreds of prompts represent a real, measurable signal. You can’t track your AI "rank" because there isn’t one. But you can track how often your brand appears relative to competitors, and that metric is both reliable and actionable.

This is the nuance that separates useful AI visibility strategy from snake oil. Anyone selling you a single "AI ranking" number is misunderstanding the medium. Anyone measuring share-of-voice across statistically significant prompt sets is doing something real. The tools that get this right include Ahrefs Brand Radar (which tracks brand visibility across six AI platforms), Otterly.AI (monitoring AI Share of Voice for over 10,000 users), and Semrush’s AI Visibility Toolkit. For a free starting point, HubSpot’s AI Search Grader will give you a baseline score across ChatGPT, Perplexity, and Gemini in about 30 seconds.
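To make frequency tracking concrete, here’s a minimal sketch, assuming you’ve already collected brand lists from repeated runs of the same prompt (the brand names below are invented):

```python
from collections import Counter

def share_of_voice(runs, brand):
    """Fraction of prompt runs in which `brand` appears at all.

    `runs` is a list of brand lists, one per prompt execution.
    Order within a run is ignored: per SparkToro's finding, exact
    lists and orderings are unstable, but appearance frequency
    across many runs is a meaningful signal.
    """
    appearances = Counter(b for run in runs for b in set(run))
    return appearances[brand] / len(runs)

# Hypothetical results from repeating one prompt four times.
runs = [
    ["Acme", "Globex", "Initech"],
    ["Globex", "Acme"],
    ["Initech", "Umbrella", "Acme"],
    ["Globex", "Umbrella"],
]
for b in ["Acme", "Globex", "Umbrella"]:
    print(f"{b}: appears in {share_of_voice(runs, b):.0%} of runs")
```

In practice you’d run hundreds of executions per prompt and compare your appearance rate against competitors, which is essentially what the dedicated tools automate.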


Real Results: Who’s Making This Work

The theory is only useful if it translates into outcomes. Here’s what the case studies show.

Go Fish Digital applied a GEO-focused strategy over three months using prompt mapping, fact-dense cornerstone pages, and query fan-out expansion. They saw a 43% increase in monthly AI-driven traffic from ChatGPT, an 83% lift in monthly conversions from AI referrals, and a 25x higher conversion rate from AI-driven leads versus traditional search.

A K-12 edtech company called CodingName worked with GenOptima to deploy structured comparison tables with ItemList schema, knowledge graph injection, and mobile-optimized "answer capsules." Over five months, they went from $24,000 to $280,000 in monthly revenue, a 1,041% increase driven almost entirely by AI search visibility.
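The case study mentions ItemList schema on comparison pages. As a rough, hypothetical sketch (product names and URLs invented; the @type, itemListElement, and position keys are standard schema.org vocabulary), that kind of markup looks roughly like this when built as JSON-LD:

```python
import json

# Hypothetical ItemList structured data for a comparison page.
item_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "name": "Best Kids' Coding Platforms",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Platform A",
         "url": "https://example.com/platform-a"},
        {"@type": "ListItem", "position": 2, "name": "Platform B",
         "url": "https://example.com/platform-b"},
    ],
}
print(json.dumps(item_list, indent=2))
```

The serialized JSON would be embedded in the page inside a `<script type="application/ld+json">` tag alongside the human-readable comparison table.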

And it’s not just smaller companies. Walmart saw ChatGPT account for 20% of total referral traffic between June and August 2025. NerdWallet’s revenue rose 35% despite a 20% decline in traditional traffic, because they’d diversified into AI-visible content early. Ahrefs’ own data showed that while AI search represents just 0.5% of their visits, it drives 12.1% of their signups.

The flip side is just as instructive. Chegg saw a 49% decline in non-subscriber traffic and filed an antitrust lawsuit against Google. Business Insider lost 55% of its organic traffic and cut 21% of staff. Forbes dropped 50% year-over-year. These weren’t companies that failed at SEO; they were companies that succeeded at SEO in a market that was shifting beneath them.


The Practitioner’s To-Do List

If I were sitting across from you at a coffee shop and you asked what to do about all of this come Monday morning, here’s what I’d say.

First, get a baseline. Run your brand through HubSpot’s AI Search Grader and set up tracking in at least one dedicated tool (Otterly, Ahrefs Brand Radar, or Semrush’s AI Visibility Toolkit). Measure your share of AI voice against your top three competitors. You can’t improve what you can’t see, and most companies have never even looked.

Second, audit your content for extractability. Take your top 10 performing pages and ask yourself whether each one leads with a direct, clear answer in the first 60 words. Check whether your H2s are phrased as questions people would actually type into ChatGPT. Look for statistics, named entities, and declarative language. If your content reads like it was written to rank rather than to answer, it needs a rewrite.

Third, prioritize earned media. Given that 85% of AI citations come from third-party sources, your PR and thought leadership efforts are now directly tied to search visibility in a way they never were before. Getting mentioned in Forbes, on relevant YouTube channels, and in community platforms like Reddit isn’t just brand awareness anymore. It’s how AI systems decide you’re worth citing.

Fourth, update your cornerstone content quarterly. AI systems favor fresh content over stale content, even if the stale version ranks well on Google. Build a cadence for refreshing your most important pages with new data, new examples, and updated publication dates. This is the easiest lever most companies aren’t pulling.

Fifth, check Bing Webmaster Tools. Microsoft’s new AI Performance dashboard (launched February 2026) is the first free tool that shows how your content is being cited in AI-generated answers across Copilot and Bing AI. It includes citation counts, page-level performance, and grounding queries. Google has no equivalent. Use it.

Sixth, stop thinking about this as one channel. Brand visibility can differ by 615x across AI platforms. Only 11% of domains are cited by both ChatGPT and Perplexity. There is no monolithic "AI search" to optimize for, just as there’s no monolithic "social media." You need platform-specific awareness, even if your core strategy stays consistent across all of them.


The Shift Is Already Here

I keep hearing people frame this as something that’s "coming." It’s not coming; it’s here. AI agents are processing queries at 88% of human search volume today. ChatGPT’s 900 million weekly users aren’t waiting for the industry to figure out its terminology (AEO, GEO, LLMO, take your pick). They’re asking questions and acting on answers, and the brands that show up in those answers are capturing attention, trust, and revenue that the invisible brands never will.

The good news is that the fundamentals here aren’t alien. Build genuine authority. Create content that actually answers questions. Make your brand something people talk about, link to, and mention across the web. Get specific, stay current, and write for clarity over cleverness. As Lily Ray put it at Affiliate Summit earlier this year, most of what’s working in AEO and GEO is really just updated SEO best practices executed with more intentionality. The floor hasn’t changed. The ceiling has.

If you want to go deeper on the mechanics of how AI systems select, cite, and recommend brands, that’s the entire premise of my book Explainable: Why AI Recommends Some Brands & Ignores Others. It covers the full framework for understanding and influencing how AI answer engines decide what to surface. This post is the practitioner’s playbook for today; the book is the operating manual for the next several years.


About the Author

Jarred Smith is the author of Explainable: Why AI Recommends Some Brands & Ignores Others, an Amazon bestseller on AEO, GEO, and SEO. He’s a marketing leader with nearly 20 years of experience across healthcare, public media, retail, and environmental services. Find him at jarredsmith.com.

