Before shipping Stackra's GEO audit, we ran the scan against our own site. The results gave us a concrete list of gaps. This post documents each one, what we changed, and why. The four changes are practical enough to apply to any site. Our stack is React and Express, but the decisions translate to any technology.
What the audit found
Four gaps in the initial scan:
- No explicit AI bot rules in robots.txt: no allow or disallow for GPTBot, OAI-SearchBot, ClaudeBot, or Google-Extended.
- No Organization schema: AI tools had no structured signal identifying who runs the site or what the business does.
- No FAQPage schema: the About page had nine FAQ questions and answers in content but nothing in structured data.
- Accordion DOM content invisible to crawlers: the FAQ answers were in a Radix UI accordion that unmounts closed content from the DOM, making them absent from the raw HTML any bot reads.
Change 1: AI bot access in robots.txt
The simplest fix. We added explicit allow rules for each major AI crawler using the standard user-agent and allow pattern:
```
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /
```
We include PerplexityBot in the allow list to make our intent explicit, with one practical caveat: robots.txt rules for PerplexityBot are not reliably enforced. Cloudflare has documented that PerplexityBot does not consistently honor disallow rules. Allowing it is still the correct posture, but unlike the rules for the other four bots, this one is a statement of intent rather than a technical control.
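A rule set like the one above is easy to sanity-check programmatically. Here is a minimal sketch of such a check; the bot list and the parsing are simplified assumptions, not the audit's actual implementation:

```typescript
// Hypothetical checker: confirms each AI crawler has an explicit
// User-agent group in a robots.txt body. Simplified on purpose:
// it only looks for the group header, not the rules inside it.
const AI_BOTS = [
  "GPTBot",
  "OAI-SearchBot",
  "ClaudeBot",
  "Google-Extended",
  "PerplexityBot",
];

function missingBotRules(robotsTxt: string): string[] {
  const groups = robotsTxt
    .split(/\r?\n/)
    .filter((line) => line.toLowerCase().startsWith("user-agent:"))
    .map((line) => line.slice("user-agent:".length).trim());
  return AI_BOTS.filter((bot) => !groups.includes(bot));
}

// Example: a robots.txt that only covers two of the five crawlers.
const robots = "User-agent: GPTBot\nAllow: /\n\nUser-agent: ClaudeBot\nAllow: /";
console.log(missingBotRules(robots)); // bots with no explicit group
```

A check like this fits naturally into CI, so a robots.txt regression fails the build instead of silently dropping a crawler.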
Change 2: Organization and FAQPage schema
We added two JSON-LD schema blocks to the site. The first is an Organization schema present on every page, covering business name, URL, description, and logo. This is the primary entity signal AI tools use to identify and categorize a site in their knowledge representations. Without it, the site is structurally anonymous to any AI system that relies on structured data for entity recognition.
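The shape of the Organization block looks like the following; the name, URL, and logo values here are illustrative placeholders, not Stackra's actual markup:

```typescript
// Illustrative Organization JSON-LD. In a React app the serialized
// object is rendered into a <script type="application/ld+json"> tag
// on every page.
const organizationSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Example Co", // placeholder business name
  url: "https://example.com",
  description: "What the business does, in one sentence.",
  logo: "https://example.com/logo.png",
};

console.log(JSON.stringify(organizationSchema, null, 2));
```

Because the block is static, it can be defined once and injected into the shared layout rather than per page.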
The second is a FAQPage schema on the About page, with each question and answer pair encoded in the acceptedAnswer format. FAQPage and HowTo carry the highest AI citation rates across Perplexity, ChatGPT, and Google AI Overviews among all schema types. Adding FAQPage to a page that already had the content in natural language required no content changes, only the structured data layer on top of existing copy.
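Since the questions and answers already existed as copy, the structured data layer is just a mapping. A sketch of such a helper (the function name and Q&A text are hypothetical; the acceptedAnswer shape follows schema.org's FAQPage format):

```typescript
// Hypothetical helper: wraps existing Q&A copy in FAQPage
// structured data without touching the visible content.
type Faq = { question: string; answer: string };

function faqPageSchema(faqs: Faq[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}

// Placeholder Q&A pair standing in for the nine real ones.
const schema = faqPageSchema([
  {
    question: "What does the audit check?",
    answer: "GEO signals like AI bot access and schema coverage.",
  },
]);
console.log(JSON.stringify(schema));
```

Driving the schema and the rendered FAQ from the same array keeps the two from drifting apart as questions are added or edited.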
Change 3: Accordion DOM content
The FAQ answers lived inside a shadcn/ui Accordion component, which uses Radix UI primitives under the hood. Radix completely removes closed AccordionContent from the DOM as a performance optimization. The result: any bot fetching the About page HTML would see nine question headings and zero answers. Google eventually renders JavaScript and can recover the content; AI fetchers like GPTBot and ClaudeBot never execute JavaScript at all.
The fix was to replace the Radix accordion with native HTML details and summary elements. The browser's details element always keeps its content in the DOM, regardless of whether the item is open or closed. CSS and the Tailwind group modifier handle the visual expand and collapse behavior. The FAQ answers are now present in the raw HTML that every bot reads, including AI fetchers, Googlebot, and any other tool that reads page HTML without rendering JavaScript.
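The replacement markup looks roughly like this; the class names are illustrative Tailwind utilities, not the exact ones shipped:

```html
<!-- The answer stays in the DOM whether the item is open or closed -->
<details class="group border-b">
  <summary class="cursor-pointer list-none py-4 font-medium">
    How does the audit work?
    <span class="float-right transition-transform group-open:rotate-180">▾</span>
  </summary>
  <p class="pb-4 text-sm">
    The answer text is always present in the raw HTML, so bots that
    never execute JavaScript still see it.
  </p>
</details>
```

The `group` class on details plus the `group-open:` variant is what lets Tailwind style the chevron based on open state without any JavaScript.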
Change 4: Prerendering for AI crawlers
AI fetchers do not execute JavaScript. Stackra's frontend is a React app, which means every page starts as a near-empty HTML shell until JavaScript runs and renders the content. We already had a prerendering pipeline in place: Playwright visits each page at build time, captures the fully-rendered HTML, and the production server serves that HTML to any bot that requests it. The GEO fix was confirming the pipeline covered all pages where GEO signals live and that the prerendered HTML contained the schema blocks added in Change 2.
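The serving side of such a pipeline can be as simple as a user-agent check in front of the SPA handler. A sketch under assumed names (the bot pattern, file paths, and middleware shape are illustrative, not Stackra's actual code):

```typescript
// Hypothetical Express-style middleware: serve prerendered HTML to
// known bots, fall through to the normal SPA shell for everyone else.
const BOT_PATTERN =
  /GPTBot|OAI-SearchBot|ClaudeBot|Google-Extended|PerplexityBot|Googlebot/i;

export function isKnownBot(userAgent: string | undefined): boolean {
  return userAgent !== undefined && BOT_PATTERN.test(userAgent);
}

// Types are loosened to keep the sketch self-contained; real code
// would use express.Request / express.Response.
export function prerenderMiddleware(req: any, res: any, next: any) {
  if (isKnownBot(req.headers["user-agent"])) {
    // Files captured by Playwright at build time, one per route.
    res.sendFile(`${process.cwd()}/prerendered${req.path}/index.html`);
    return;
  }
  next();
}
```

The key property is that the bot path serves static files, so a slow headless render never happens at request time.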
What changed in the GEO scan results
After all four changes, every GEO signal group moved from absent or unknown to confirmed. AI bot access: all five crawlers explicitly allowed. Schema readiness: Organization present, FAQPage present, BreadcrumbList already present from prior structured data work. Entity clarity: business name detected from Organization schema, confidence rated high. Supporting signals: sitemap reachable, robots.txt reachable. The citabilityTypeCount moved from zero to two, the threshold at which the audit reports meaningful schema coverage.