Your website may look healthy on Google today, but that no longer guarantees you’ll be found tomorrow. A growing share of customers are skipping traditional search entirely and turning to AI tools for recommendations, comparisons and answers.
If those systems can’t see or understand your site, you’re effectively invisible at the very moment buying decisions are being shaped. For businesses that want to remain competitive online, adapting to how AI-powered search works isn’t a future concern; it’s an immediate one.
Your site is on page one. Organic traffic looks healthy. Job done.
But new research from Bain & Company reveals that 60% of internet users now turn to AI assistants as their first port of call for search. A quarter start in ChatGPT or Perplexity before they ever open Google. And key to all of this is that most AI crawlers – the bots that gather the material these large language models (LLMs) feed on – can’t read your website.
Where your customers are searching
AI-powered search isn’t a niche behaviour anymore. Cloudflare’s 2025 data shows that user-triggered AI crawling – requests from the bots that fetch pages when someone asks ChatGPT a question – grew more than 15-fold over the course of this year alone. That’s not training data collection; that’s real people asking AI for recommendations, comparisons, and answers in real time.
If your website can’t be read by these systems, the answer they get won’t include you.
Why AI crawlers can’t read most websites
Google has spent over 25 years building infrastructure to handle messy websites. Googlebot can execute code, wait for content to load, and process modern frameworks. It learned to cope with whatever developers threw at it.
AI crawlers haven’t had that time. Vercel and MERJ, partner companies known for their joint research into how search engines crawl and index content, tracked over 500 million requests from OpenAI’s GPTBot and found zero evidence of code execution. The same applies to Anthropic’s ClaudeBot and PerplexityBot – none of the major AI crawlers currently render JavaScript.
What does that mean in practice? If your website loads its main content dynamically – which most modern websites do – AI crawlers see an empty page. They’re reading your site the way browsers did 20 years ago, before the technology that powers most of today’s web even existed.
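As a rough illustration (the page and file names here are invented), this is the kind of HTML a non-rendering crawler receives from a typical client-rendered site. The visible content only exists once the browser downloads and runs the script, so a crawler that never executes JavaScript sees little more than an empty shell:

```html
<!-- Illustrative only: a client-rendered page as a non-rendering crawler receives it. -->
<!doctype html>
<html>
  <head>
    <title>Example Company</title>
  </head>
  <body>
    <!-- The crawler sees this empty container. The real content is injected
         later by the bundled JavaScript, which AI crawlers never run. -->
    <div id="root"></div>
    <script src="/assets/app.bundle.js"></script>
  </body>
</html>
```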
What happens when you’re invisible
When AI can’t read your content, it doesn’t return an empty answer. It fills the gap with whatever it can find.
Sometimes that means citing a competitor. Your potential customer asks for a recommendation, and someone else gets mentioned because their site was easier to read.
Sometimes it’s worse – AI tools present made-up information about your products, services, or company as fact. Without your actual content to ground the response, there’s nothing stopping hallucination.
Either way, the information reaching your customer comes from somewhere else, or nowhere at all. You’ve lost control of your own narrative in a channel that’s growing fast.
What to ask your team for
None of this requires chasing a new tactic or gaming another algorithm. The fixes are foundational – things your developers may already know how to do, but haven’t prioritised for this use case.
Content that doesn’t need code to display. Ask whether your key pages work without JavaScript. If your main content only appears after the browser runs code, AI crawlers won’t see it. The technical solutions (server-side rendering, static generation) are well-established. Your dev team will know what you mean.
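To make the idea concrete for anyone curious, here’s a very rough sketch of server-side rendering using Node.js and Express – your site’s stack will almost certainly differ, and the route and copy are hypothetical. The principle is simply that the server sends finished HTML, so the content is there before any JavaScript runs:

```ts
// Minimal sketch, assuming a Node.js/Express setup (route and copy are hypothetical).
import express from "express";

const app = express();

app.get("/services", (_req, res) => {
  // The full text is in the HTML response itself, so a crawler that
  // never runs JavaScript still receives the content.
  res.send(`<!doctype html>
    <html>
      <head><title>Our services</title></head>
      <body>
        <h1>Our services</h1>
        <p>Everything a crawler needs is already in this response,
           with no client-side rendering required.</p>
      </body>
    </html>`);
});

app.listen(3000);
```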
A site structure that’s easy to navigate. Redirect chains, broken links and convoluted URL structures waste AI crawlers’ limited patience. Ask for a flat, logical structure with clear internal linking – good for users, good for Google and now essential for AI visibility too.
Context for machines, not just humans. Schema markup (also called structured data) helps AI systems understand what your content means. It clarifies whether “Apple” refers to the fruit or the company. It tells AI which page describes your services and how your content connects to broader topics. Ask whether your key pages have valid schema in place.
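For illustration, a minimal piece of schema markup might look like the snippet below – a JSON-LD block describing a local business. The organisation details are invented, and the right schema types for your pages depend on your actual content:

```html
<!-- Illustrative JSON-LD snippet, placed in the page's <head> or <body>. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Digital Agency",
  "url": "https://www.example.com",
  "description": "Technical SEO, web development and structured data audits.",
  "areaServed": "United Kingdom"
}
</script>
```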
A deliberate decision on crawler access. Some sites accidentally block AI crawlers entirely. Others haven’t considered whether they should allow access at all. Ask your team to check your robots.txt file and make a conscious choice about which crawlers can see your content.
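For reference, a robots.txt that deliberately allows the main AI crawlers might look something like the sketch below. Whether you allow or block them is a business decision, not a technical default:

```
# Example robots.txt directives – adjust to your own policy.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Blocking a crawler instead would look like:
# User-agent: GPTBot
# Disallow: /
```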
Three checks you can run yourself
1. View your site with JavaScript disabled. In Chrome, open Developer Tools, press Cmd+Shift+P (Mac) or Ctrl+Shift+P (Windows), type “Disable JavaScript” and select it. What you see now is roughly what AI crawlers see. If your main content disappears, you have a visibility problem. (For a command-line version of this check, see the sketch after this list.)
2. Look at your robots.txt file. Visit yoursite.com/robots.txt and search for GPTBot, ClaudeBot, or PerplexityBot. If you see “Disallow: /” under any of those user agents, your site is blocking that crawler entirely. Decide if that’s intentional.
3. Test your structured data. Google’s Rich Results Test shows whether your schema markup is valid and present. Missing or broken markup means AI has less context to work with.
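If you’re comfortable with a terminal, a quick complement to the first check is to fetch a page’s raw HTML with curl – roughly what a non-rendering crawler receives before any JavaScript runs. The URL below is a placeholder:

```
# Fetch the raw HTML for a page, identifying as OpenAI's crawler.
# If your main content isn't in this output, a crawler that doesn't
# run JavaScript won't see it either.
curl -A "GPTBot" https://www.example.com/your-key-page
```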
If any of those checks flag something – or you’re not sure what the results mean – we’re happy to help. And if you don’t have a dev team to ask, get in touch to find out how we can help you understand whether you’ve been impacted.