What AI Search Actually Looks Like at a Fortune 500

After running real tests — crawl data, vendor POCs, citation audits — here's what's actually useful vs. noise in AI search optimization.

Tags: SEO, AI search, GEO, AEO



There’s a lot of content right now about “GEO optimization” and “AI search visibility.” Most of it is written by people who are theorizing.

I’m not theorizing. I manage SEO at a large financial services firm, and for the past several months I’ve been running real tests — pulling AI crawler traffic data, running vendor POCs, doing citation audits across ChatGPT, Perplexity, and Gemini. This is what it actually looks like from the inside.


The panic is real, but it’s also premature

Every enterprise SEO team is getting pressure from leadership right now. The question is always some version of: “Are we showing up in ChatGPT?”

The honest answer for most companies is: sometimes, inconsistently, and it’s hard to measure. That’s not a satisfying answer, which is why the vendor market has exploded with people selling certainty.

The reality is that AI search citation is still early and somewhat unpredictable. The signals that drive it overlap significantly with what’s always driven good SEO — authority, clarity, relevance. The companies panicking and doing a full site rewrite are mostly wasting time.


What the crawl data actually shows

One of the first things we did was pull our AI crawler traffic as a baseline. GPTBot, ClaudeBot, PerplexityBot — these are real crawlers with real volume, and they behave differently from Googlebot in ways that matter.
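
If you want to replicate that baseline pull, here's a minimal sketch in Python. It assumes standard combined-format access logs; the file path and the user-agent substrings are illustrative, so check each vendor's published bot documentation for the exact names to match.

```python
import re
from collections import Counter, defaultdict

# Which AI crawlers to look for; verify current bot names in each vendor's docs.
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot")

# Illustrative path; point this at your real access log (combined format assumed).
LOG_PATH = "access.log"

# In combined log format the request line is quoted: "GET /path HTTP/1.1"
REQUEST_RE = re.compile(r'"(?:GET|HEAD|POST) (\S+)')

hits = Counter()
paths = defaultdict(Counter)

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        bot = next((b for b in AI_BOTS if b in line), None)
        if bot is None:
            continue
        hits[bot] += 1
        match = REQUEST_RE.search(line)
        if match:
            paths[bot][match.group(1)] += 1

# Requests per crawler, plus the pages each one hits most often.
for bot, count in hits.most_common():
    print(f"{bot}: {count} requests")
    for path, n in paths[bot].most_common(10):
        print(f"  {n:>6}  {path}")
```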

A few things stood out:

Crawl frequency is lower but more targeted. AI crawlers don’t hammer your site the way aggressive SEO bots do. They tend to crawl deeper into content pages and spend more time on long-form pieces.

Bot timeouts are a real problem. We saw GPTBot timing out on certain page types — likely due to JavaScript rendering delays. If your site is heavily JS-dependent and a crawler can’t render the page, that content effectively doesn’t exist to that system.

Bad request rates vary by crawler. Not all AI crawlers handle redirects and URL patterns the same way. Some are more forgiving than others.

The takeaway: your technical SEO foundation matters for AI crawlability, just as it does for Google. Slow pages, broken redirects, and rendering issues all create friction.
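
One way to put numbers on the bad-request point is to break down HTTP status codes per crawler. Here's a rough sketch under the same combined-log assumption as above, with the path again illustrative; if your log format also records response times, this same loop is a natural place to flag slow pages that are at risk of bot timeouts.

```python
import re
from collections import Counter, defaultdict

AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot")
LOG_PATH = "access.log"  # illustrative path, same combined-format assumption

# In combined log format the status code is the field right after the quoted request.
STATUS_RE = re.compile(r'" (\d{3}) ')

statuses = defaultdict(Counter)

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        bot = next((b for b in AI_BOTS if b in line), None)
        if bot is None:
            continue
        match = STATUS_RE.search(line)
        if match:
            statuses[bot][match.group(1)] += 1

# Status mix per crawler: a high share of 4xx or 3xx for one bot but not
# another is the "some are more forgiving than others" pattern.
for bot, counts in statuses.items():
    total = sum(counts.values())
    print(bot)
    for code, n in counts.most_common():
        print(f"  {code}: {n} ({n / total:.1%})")
```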


What vendors are actually selling

We ran a POC with an external vendor focused specifically on AI search visibility. Here’s what the actual work looked like, stripped of the marketing language:

  1. Citation audits — testing target queries across AI platforms to see who gets cited and why
  2. Content restructuring — rewriting key pages to lead with direct answers to specific questions
  3. Schema and structured data — making sure content signals are explicit, not implied (a minimal example follows this list)
  4. E-E-A-T reinforcement — author bios, sourcing, credentials, all the signals that tell an AI system “this is trustworthy”
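
To make the structured-data item concrete, here's a minimal sketch of what an explicit content signal can look like, expressed as schema.org FAQPage markup and built in Python purely for illustration. The question, the answer, and the choice of FAQPage itself are placeholders; which schema.org types actually influence AI citation is still an open question.

```python
import json

# Illustrative FAQPage markup built as a Python dict and emitted as JSON-LD.
# The question and answer text are placeholders, not recommendations.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is a 401(k) rollover?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "A 401(k) rollover moves retirement savings from a former "
                    "employer's plan into an IRA or a new employer's plan, "
                    "usually without triggering taxes if done directly."
                ),
            },
        }
    ],
}

# Drop the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

The point isn't this particular type; it's that the question-and-answer relationship is stated in markup rather than left for a crawler to infer from layout.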

None of this is wrong. All of it is useful. But none of it is magic, and most of it overlaps with things a good SEO team should already be doing.

The honest version of the vendor pitch is: “We’ll help you do good content and technical SEO, but framed around AI systems instead of just Google.” That’s valuable, especially if your team needs a framework to prioritize, but it’s not a brand-new discipline you have to build from scratch.


What actually drives AI citation

Based on everything we’ve tested, here’s what I believe moves the needle:

1. Being the clearest answer to a specific question. AI systems are essentially answer engines. If your content answers a question more directly, completely, and accessibly than anyone else, you have a real shot at being cited. Vague thought leadership pieces don’t get cited. Specific, opinionated, well-structured answers do.

2. Domain authority still matters. The models behind these systems are trained on web data, and the systems continue to use web signals. A high-authority domain with a strong backlink profile gets more trust. This isn’t going away.

3. Content that’s easy to extract from. Think about how an LLM reads a page: it’s looking for coherent, quotable chunks of information. Short paragraphs. Clear headings. Concrete claims. If your content is dense, jargon-heavy, or structured like a legal document, it’s harder for an AI to pull a clean answer from it. (A rough way to check this is sketched after this list.)

4. Freshness on fast-moving topics. For queries where recency matters, AI systems seem to favor recently updated content. If you’re covering a topic that evolves, keeping content current is more important than ever.
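
For the extractability point above, here's a rough self-check in Python using requests and BeautifulSoup. The URL and the 120-word threshold are placeholders I picked for illustration, and this is only a heuristic for spotting walls of text, not a model of how any AI system actually parses a page.

```python
import requests
from bs4 import BeautifulSoup

# Rough extractability check: paragraph length and heading density.
URL = "https://example.com/some-article"  # placeholder

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

paragraphs = [p.get_text(" ", strip=True) for p in soup.find_all("p")]
paragraphs = [p for p in paragraphs if p]
headings = soup.find_all(["h2", "h3"])

word_counts = [len(p.split()) for p in paragraphs]
avg_words = sum(word_counts) / len(word_counts) if word_counts else 0
long_paras = sum(1 for w in word_counts if w > 120)  # arbitrary threshold

print(f"Paragraphs: {len(paragraphs)}, average length: {avg_words:.0f} words")
print(f"Paragraphs over 120 words: {long_paras}")
print(f"H2/H3 headings: {len(headings)}")
```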


The one test you should run this week

Before spending money on any tool or vendor, do this:

Open ChatGPT, Perplexity, and Gemini. Type the 3–5 questions your ideal customer asks before making a decision in your space. See who gets cited.

If it’s you — great. Understand why and do more of it.

If it’s a competitor — look at their content. What are they doing that you aren’t? Is it more specific? Better structured? More authoritative?

If it’s nobody in your industry — that’s actually an opportunity. Be the first to create content that answers those questions clearly, and you have a real shot at owning that citation.

No tools required. Just honesty about where your content stands.


What I’m doing next

I’ll keep writing about this as we learn more. The field is moving fast and I’d rather share real findings than wait until I have a tidy conclusion.

If you’re navigating similar questions at your company — or if you’re a smaller brand trying to figure out where to start — feel free to reach out. I’m also taking on a small number of AI search audits for companies that want a clearer picture of where they stand.