AI SEO in 2026: What's Actually Working vs What's Hype
AI SEO is real, it matters, and most of the advice about it is wrong.
The confusion comes from conflating two separate problems: ranking in traditional search (Google, Bing) and being cited by AI systems (ChatGPT, Perplexity, Google's AI Overviews). These are related but not the same, and treating them as identical leads to bad decisions.
Here's what's actually moving the needle across my sites in 2026, and what's noise.
The Two-Track Reality
Traditional search still drives the majority of organic traffic for most sites. Google hasn't been replaced — but the nature of clicks has shifted. Broad informational queries ("what is X", "how does Y work") increasingly get answered directly by AI Overviews. Users get the answer without clicking through.
Specific, technical, and transactional queries still drive clicks. "How to set up Supabase RLS for a multi-tenant app" is not something an AI will answer completely — it'll summarize and link out. That traffic still lands.
The practical implication: the content worth investing in is specific and technical, not broad and introductory. Writing "what is Next.js" in 2026 is almost certainly a waste of time. Writing "why your Next.js middleware is running twice in development" targets a specific problem that real users search when they're stuck, and that AI systems will cite when they can't fully resolve it themselves.
What's Actually Working
Answer-shaped content. AI systems prefer content that directly and cleanly answers a question. This isn't new advice dressed up as AI SEO — it's just good writing. What has changed: the bar is higher. A paragraph of hedging before the actual answer used to be fine. Now, if your H2 asks a question and the paragraph below doesn't answer it immediately, an AI will skip your content and cite whoever does.
Practically: lead with the answer, then explain. If your section header is "How do I handle tag invalidation in Next.js?", the first sentence under it should answer that directly, not set up context for three more paragraphs.
Technical content with real code. Technical posts with working code examples get cited by LLMs significantly more often than posts without. That matches what I see in my own analytics: posts with substantial code blocks attract more inbound links and AI citations than prose-only posts on similar topics. The reason is simple: code is verifiable and specific, which signals expertise.
Long-tail, problem-specific queries. "SEO" is getting harder. "How to generate hreflang tags for a Next.js App Router site with 13 languages" is a query an AI can't fully resolve from its training data alone. If you have a post that answers it precisely, you'll get the traffic. The long tail is expanding, not shrinking.
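To make the hreflang example concrete, here is a minimal sketch of generating alternate-language link tags for a post. The URL pattern, locale codes, and function name are my assumptions, not a prescribed implementation; adapt them to your routing scheme.

```typescript
// Hypothetical helper: build hreflang <link> tags for a post that exists
// in several locales. URL structure is an assumption for illustration.
function hreflangLinks(
  baseUrl: string,
  slug: string,
  locales: string[]
): string[] {
  const links = locales.map(
    (locale) =>
      `<link rel="alternate" hreflang="${locale}" href="${baseUrl}/${locale}/blog/${slug}" />`
  );
  // x-default tells crawlers which version to use when no locale matches.
  links.push(
    `<link rel="alternate" hreflang="x-default" href="${baseUrl}/en/blog/${slug}" />`
  );
  return links;
}
```

In a Next.js App Router project you would typically express the same mapping through the metadata API's `alternates.languages` field rather than emitting raw tags, but the data you need (one canonical URL per locale) is identical.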
Multilingual content. This is where I see the most underutilized opportunity. The vast majority of web content is in English. AI-driven search in Korean, Japanese, Spanish, or Vietnamese has far less competition. A technically sound post translated and adapted for non-English speakers can outperform its English equivalent purely because the competition is thin. This isn't a hack — it requires real localization, not machine-translated garbage — but the return is real.
Structured data and clean HTML. Schema markup still works. AI crawlers read JSON-LD. If you have a how-to post, mark it up as HowTo. If you have FAQ sections (real ones, not manufactured ones), mark them up. This isn't magic, but it lowers the friction for AI systems to extract and cite your content.
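As a concrete sketch, here is one way to build FAQPage JSON-LD from real question-and-answer pairs. The `Faq` shape and function name are my own assumptions; the `@context`, `FAQPage`, `Question`, and `Answer` types are standard schema.org vocabulary.

```typescript
// Assumed shape for the page's real FAQ content.
interface Faq {
  question: string;
  answer: string;
}

// Build a schema.org FAQPage object and serialize it for embedding.
function buildFaqJsonLd(faqs: Faq[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((faq) => ({
      "@type": "Question",
      name: faq.question,
      acceptedAnswer: { "@type": "Answer", text: faq.answer },
    })),
  });
}
```

The resulting string goes in a `<script type="application/ld+json">` tag in the page head, where both traditional crawlers and AI crawlers can parse it without touching your rendered markup.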
Brand mentions across sources. LLMs build a model of who is authoritative on a topic based on where a name appears. If your name or site is referenced across GitHub, Hacker News, other blogs, and forums — not just your own content — that signal accumulates. You can't manufacture this directly, but publishing work worth referencing (open source projects, genuinely useful posts) creates the conditions for it.
What's Hype or Actively Misleading
"Optimize for AI Overviews" as a distinct strategy. There is no separate optimization path for Google's AI Overviews. Google pulls from the same index, evaluates the same quality signals, and ranks the same content. The advice to "structure content for AIO" is just restating "write clearly and answer questions directly" in trendier language.
AI content at scale without a unique angle. Publishing 500 AI-generated posts on generic topics is producing noise, not signal. AI systems are trained to recognize thin content, and so are the humans who might otherwise link to it. Volume without perspective is not an SEO strategy in 2026 — it's a way to get ignored by both algorithms and readers.
E-E-A-T as a lever you can pull. Experience, Expertise, Authoritativeness, and Trust are real ranking factors, but they're outcomes, not inputs. You can't add an author bio and "demonstrate experience" — you demonstrate experience by writing things only someone with experience would write. The signal is in the content, not the metadata around it.
AI SEO tools that charge for basic advice. There's a category of tool that packages standard on-page SEO checks, calls them "AI SEO audits," and charges accordingly. The underlying advice (fix your title tags, improve your page speed, get more backlinks) is the same as it's always been. The framing is new. The value is not.
What I Actually Do
Across seven sites, my approach hasn't changed dramatically from solid traditional SEO. What has changed is the emphasis:
I write about specific problems from real production experience. Not "how to use Supabase" — "how I handle data isolation across seven tenants in a single Supabase project." Specific, experiential, verifiable.
I maintain a llms.txt file that updates automatically as I publish new content, so AI crawlers always have a fresh sitemap of what's on the site. (I wrote about this separately — the static version becomes stale immediately, which defeats the purpose.)
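The auto-updating part can be as simple as rendering the file from the post list at build time. A minimal sketch, assuming a `Post` shape and URL pattern that are my own illustration rather than the actual implementation:

```typescript
// Assumed post metadata shape, e.g. from a CMS or content directory.
interface Post {
  title: string;
  slug: string;
  summary: string;
}

// Render an llms.txt body from the current post list so the file is
// regenerated on every build and never goes stale.
function renderLlmsTxt(siteName: string, siteUrl: string, posts: Post[]): string {
  const header = `# ${siteName}\n\n> Technical blog. Posts listed below, newest first.\n\n## Posts\n`;
  const entries = posts
    .map((p) => `- [${p.title}](${siteUrl}/blog/${p.slug}): ${p.summary}`)
    .join("\n");
  return header + entries + "\n";
}
```

Wire this into whatever generates your sitemap and the file stays in sync with zero manual upkeep.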
I publish in multiple languages. The multilingual infrastructure is already built; publishing translated versions of posts that perform well in English has a high return for minimal marginal effort.
I keep page speed tight and use structured data on every post — Article schema, BreadcrumbList, FAQPage where the content warrants it.
The Honest Summary
Traditional SEO fundamentals — fast pages, clean structure, specific content, inbound links — still work and still matter. The sites winning AI citations are not doing something exotic. They're the ones with genuine depth on specific topics, clean technical implementation, and a track record of publishing things worth referencing.
The shift in 2026 is not that the rules changed. It's that the margin for mediocre content has collapsed. Broad, shallow, well-optimized posts used to rank. Now they get summarized by an AI and never clicked. The posts that still drive traffic are the ones that go deep enough to be useful after the AI has given the overview.
Write those. Everything else is noise.