Before shipping Stackra's GEO audit, we ran the scan against our own site. The results gave us a concrete list of gaps. This post documents each one, what we changed, and why. The four changes are practical enough to apply to any site. Our stack is React and Express, but the decisions translate to any technology.

What the audit found

Three gaps in the initial scan:

  • No explicit AI bot rules in robots.txt: no allow or disallow for GPTBot, OAI-SearchBot, ClaudeBot, or Google-Extended.
  • No Organization schema: AI tools had no structured signal identifying who runs the site or what the business does.
  • Accordion DOM content invisible to crawlers: the About page had nine FAQ questions and answers, but they were inside a Radix UI accordion that unmounts closed content from the DOM. Bots saw the question headings and nothing else.

Change 1: AI bot access in robots.txt

The simplest fix. We added explicit allow rules for each major AI crawler using the standard user-agent and allow pattern:

  User-agent: GPTBot
  Allow: /

  User-agent: OAI-SearchBot
  Allow: /

  User-agent: ClaudeBot
  Allow: /

  User-agent: Google-Extended
  Allow: /

  User-agent: PerplexityBot
  Allow: /

We include PerplexityBot in the allow list for intent clarity, with one practical caveat: blocking PerplexityBot in robots.txt is not reliably enforced. Cloudflare has documented that PerplexityBot does not consistently honor disallow rules. Allowing it is the correct posture. But unlike the other four bots, the rule is a statement of intent rather than a technical control.

Change 2: Organization schema

We added an Organization schema block to the site, present on every page, covering business name, URL, description, and logo. This is the primary entity signal AI tools use to identify and categorize a site in their knowledge representations. Without it, the site is structurally anonymous to any AI system that relies on structured data for entity recognition.

One thing worth noting: Google restricted FAQPage rich results to healthcare and government sites in August 2023. For a SaaS product like Stackra, FAQPage generates no rich result benefit and is not counted in our citabilityTypeCount. If you run a medical practice or government service, the situation is different — FAQPage is still a first-class citability signal for those site types.
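The block takes the standard schema.org JSON-LD shape. A minimal sketch, with placeholder values rather than Stackra's actual name, URL, or logo:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "description": "One sentence describing what the business does.",
  "logo": "https://www.example.com/logo.png"
}
</script>
```

Because it is plain JSON-LD in the page head, it survives in the raw HTML that non-rendering bots fetch, which is the whole point.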

Change 3: Accordion DOM content

The FAQ answers lived inside a shadcn/ui Accordion component, which uses Radix UI primitives under the hood. Radix completely removes closed AccordionContent from the DOM as a performance optimization. The result: any bot fetching the About page HTML would see nine question headings and zero answers. Google eventually renders JavaScript and can recover the content; AI fetchers like GPTBot and ClaudeBot never execute JavaScript at all.

The fix was to replace the Radix accordion with native HTML details and summary elements. The browser's details element always keeps its content in the DOM, regardless of whether the item is open or closed. CSS and the Tailwind group modifier handle the visual expand and collapse behavior. The FAQ answers are now present in the raw HTML that every bot reads, including AI fetchers, Googlebot, and any other tool that reads page HTML without rendering JavaScript.
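A sketch of the replacement markup, in JSX since our frontend is React. The question, answer, class names, and chevron character are illustrative, not Stackra's actual component:

```jsx
{/* One FAQ item: the answer stays in the DOM even while closed. */}
<details className="group border-b py-4">
  <summary className="flex cursor-pointer list-none items-center justify-between font-medium">
    What does the GEO audit check?
    {/* Tailwind group modifier: rotate the chevron when the item is open. */}
    <span className="transition-transform group-open:rotate-180">⌄</span>
  </summary>
  <p className="mt-2 text-sm">
    The answer text lives here in the raw HTML, readable by bots that never
    execute JavaScript.
  </p>
</details>
```

Unlike the Radix accordion, nothing here unmounts on close: the browser hides the content visually, but it is always serialized into the page source.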

Change 4: Prerendering for AI crawlers

AI fetchers do not execute JavaScript. Stackra's frontend is a React app, which means every page starts as a near-empty HTML shell until JavaScript runs and renders the content. We already had a prerendering pipeline in place: Playwright visits each page at build time, captures the fully rendered HTML, and the production server serves that HTML to any bot that requests it. The GEO fix was confirming the pipeline covered the pages that existed at the time and that the prerendered HTML contained the schema blocks added in Change 2. As new marketing pages are added later (platform pages, the GEO landing page, legal pages), each one must be registered in the prerender route list to avoid shipping an empty SPA shell to crawlers.
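The serve-to-bots decision reduces to a user-agent check. A simplified sketch — the pattern list, function names, and middleware shape are illustrative, not Stackra's actual implementation:

```javascript
// Illustrative list of crawler user-agent substrings; extend as needed.
const BOT_PATTERNS = [
  /GPTBot/i,
  /OAI-SearchBot/i,
  /ClaudeBot/i,
  /Google-Extended/i,
  /PerplexityBot/i,
  /Googlebot/i,
];

// Decide whether a request should receive the prerendered HTML snapshot.
function isCrawler(userAgent = "") {
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}

// Hypothetical Express middleware: serve the build-time snapshot to crawlers,
// fall through to the SPA shell for everyone else.
function prerenderMiddleware(snapshots) {
  return (req, res, next) => {
    const snapshot = snapshots[req.path];
    if (snapshot && isCrawler(req.get("User-Agent"))) {
      res.type("html").send(snapshot);
    } else {
      next();
    }
  };
}
```

The failure mode the last paragraph warns about lives in `snapshots`: a route missing from that map silently falls through to the empty SPA shell, which is why new pages must be registered in the prerender list.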

What changed in the GEO scan results

After these changes, every GEO signal group moved from absent or unknown to confirmed:

  • AI bot access: all five crawlers explicitly allowed.
  • Schema readiness: Organization present, BreadcrumbList already present from prior structured data work, Article schema on blog pages.
  • Entity clarity: business name detected from Organization schema, confidence rated high.
  • Supporting signals: sitemap reachable, robots.txt reachable.

The accordion fix means that FAQ content which was previously invisible to every AI crawler is now present in the raw HTML.