React single-page applications serve a near-empty HTML shell on first load. A div with an id, a script tag, and nothing else. The browser downloads JavaScript, executes it, and renders the page. Humans see the finished result. Crawlers and AI fetchers typically do not execute JavaScript. They see the shell.

Static prerendering fixes this by capturing each page's fully rendered HTML at build time and serving that to everyone. But the prerender pipeline only knows about the routes it is told to render. When you add a new page to the app, the router picks it up automatically. The prerender does not.

What happened at Stackra

Stackra's prerender pipeline originally covered the homepage, about page, benchmarks, blog index, and every blog article. Over time, seven new marketing pages were added to the app:

  • GEO landing page (/geo)
  • Platform pages (/platforms, /platforms/wordpress, /platforms/shopify, /platforms/wix)
  • Privacy policy (/privacy)
  • Terms of service (/terms)

Each page was added to the React router, styled, tested in the browser, and deployed. Everything looked correct to visitors because their browsers executed the JavaScript. But none of the seven were added to the prerender script's route list. When a crawler or AI fetcher requested any of them, the server had no prerendered file to serve. It fell back to the SPA shell: an empty div.

Why it went unnoticed

Three things let this slip through without any warning:

  • Silent failure: the prerender step was wrapped in a try/catch that swallowed all errors. If a route was missing from the list, the build succeeded without comment.
  • No output validation: the pipeline wrote whatever HTML it captured to disk without checking whether it contained meaningful content. A 200-byte empty shell was treated the same as a 40KB fully-rendered page.
  • Browser-based testing: manual QA and PageSpeed Insights both execute JavaScript. The pages looked complete in every test because the test environment ran the SPA normally.
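The first failure mode can be sketched in a few lines. This is hypothetical code, not the actual build script, but it shows the shape of the problem: any error raised during prerendering vanished inside the catch, and the build reported success regardless.

```typescript
// Sketch of the original silent-failure pattern (hypothetical, not the real script).
// A missing route, a crashed headless browser, a render exception -- all swallowed.
function buildWithSilentPrerender(prerender: () => void): "ok" {
  try {
    prerender();
  } catch {
    // Error swallowed: nothing surfaces in CI, the build keeps going.
  }
  return "ok"; // the build "succeeds" either way
}
```

A prerender step that can fail without failing the build is indistinguishable from one that never ran.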

A second problem: scroll-reveal animations

Even the pages that were in the prerender route list had a subtler issue. Many marketing pages use scroll-reveal animations powered by IntersectionObserver. Content below the fold starts with opacity set to zero and animates in as the user scrolls. The prerender used a headless browser, but nothing in a headless capture ever scrolls, so IntersectionObserver never fired for off-screen elements. The HTML was captured with every below-fold section present in the DOM but styled as invisible.

The DOM contained the content. The CSS hid it. From a crawler's perspective, the page had a hero section and then nothing. Every testimonial, feature explanation, FAQ, and call to action below the fold was invisible in the prerendered output.

The fix: three layers

The solution addressed route coverage, scroll-reveal visibility, and validation in a single pass.

Layer 1: FakeIntersectionObserver

Before any page loads, a script injected at the browser-context level replaces the native IntersectionObserver with a fake version. The fake immediately calls every registered callback with isIntersecting set to true, so every scroll-reveal element fires its animation instantly and is visible before the page is captured. As a safety net, any remaining elements with the scroll-reveal class that are still not marked visible get force-set after the page settles.
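A minimal sketch of what such a replacement looks like, assuming a Puppeteer-style API for the injection (the class names and entry shape here are illustrative, not the pipeline's actual code):

```typescript
// Shape of the entries the fake hands to callbacks (simplified).
type FakeEntry = { isIntersecting: boolean; intersectionRatio: number; target: unknown };
type FakeCallback = (entries: FakeEntry[], observer: FakeIntersectionObserver) => void;

class FakeIntersectionObserver {
  constructor(private callback: FakeCallback) {}

  // Fire immediately: every observed element "intersects" at once,
  // so scroll-reveal animations trigger without any scrolling.
  observe(target: unknown): void {
    this.callback([{ isIntersecting: true, intersectionRatio: 1, target }], this);
  }
  unobserve(_target: unknown): void {}
  disconnect(): void {}
}

// In the headless browser, this would be installed before any page script runs,
// e.g. with Puppeteer:
//   await page.evaluateOnNewDocument(() => {
//     (globalThis as any).IntersectionObserver = FakeIntersectionObserver;
//   });
```

Because the replacement happens before the page's own scripts execute, the app's animation code never knows it is talking to a fake.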

Layer 2: Route registration

All marketing routes now live in a single MARKETING_ROUTES array in the prerender script. Adding a new page to the app requires adding it to this list. Blog articles are handled separately: the script reads every slug from the BLOG_ARTICLES array and generates routes automatically. No manual registration needed for blog posts.
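The split between hand-registered marketing routes and derived blog routes might look like this (a sketch with made-up entries; the real arrays live in scripts/prerender.ts):

```typescript
// Hand-maintained list: adding a marketing page means adding a line here.
const MARKETING_ROUTES = ["/", "/about", "/geo", "/platforms", "/privacy", "/terms"];

// Blog articles carry their own slugs; entries here are illustrative.
const BLOG_ARTICLES = [{ slug: "first-post" }, { slug: "second-post" }];

// Blog routes are derived, never hand-registered.
const blogRoutes = BLOG_ARTICLES.map((article) => `/blog/${article.slug}`);

// The prerender iterates over the union of both lists.
const allRoutes = [...MARKETING_ROUTES, ...blogRoutes];
```

Deriving blog routes from the article data means a new post can never fall out of prerender coverage; only the hand-maintained marketing list can, which is exactly what the validation layer guards.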

Layer 3: Post-render validation

After every route is rendered and saved to disk, a validation pass checks each expected file:

  • File exists on disk
  • File size meets a minimum threshold (5KB for marketing pages, 3KB for blog articles)
  • HTML contains an h1 element
  • HTML contains the site brand name

If any check fails for any route, the build exits with an error. No silent deployment of empty shells.
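The four checks above are cheap string and filesystem tests. A sketch of one such validator, assuming the thresholds stated (function name and error format are hypothetical):

```typescript
import { existsSync, statSync, readFileSync } from "node:fs";

// Returns a list of failures for one expected prerendered file; empty means pass.
// minBytes would be 5000 for marketing pages, 3000 for blog articles.
function validatePrerenderedFile(path: string, minBytes: number, brand: string): string[] {
  if (!existsSync(path)) return [`missing: ${path}`];

  const errors: string[] = [];
  if (statSync(path).size < minBytes) errors.push(`too small: ${path}`);

  const html = readFileSync(path, "utf8");
  if (!/<h1[\s>]/i.test(html)) errors.push(`no <h1>: ${path}`);
  if (!html.includes(brand)) errors.push(`brand name absent: ${path}`);
  return errors;
}

// The build would collect failures across all routes and abort on any:
//   if (failures.length > 0) { console.error(failures.join("\n")); process.exit(1); }
```

The size threshold alone would have caught the original incident: a 200-byte shell fails a 5KB minimum by a wide margin.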

The checklist for adding a new page

This is the process that prevents the gap from reopening. Every new marketing page needs these steps before deployment:

  • Add the route to MARKETING_ROUTES in scripts/prerender.ts
  • Add structured data to PAGE_SCHEMAS in the same file (WebPage schema at minimum)
  • Add the URL to client/public/sitemap.xml with the correct lastmod date
  • Run a local build and verify the prerendered file exists in dist/prerendered/ with expected content
  • After deployment, fetch the page URL without JavaScript (curl or similar) and confirm the HTML contains the page content
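The last step is the moment of truth: does the raw HTML, fetched without any JavaScript execution, carry real content? One way to automate the judgment call is a small shell detector, sketched here with a hypothetical text-length threshold:

```typescript
// Heuristic: strip scripts and tags, then see how much visible text remains.
// An SPA shell has essentially none; a prerendered page has paragraphs of it.
function isSpaShell(html: string): boolean {
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop inline/external script tags
    .replace(/<[^>]+>/g, "") // drop all remaining markup
    .trim();
  return text.length < 50; // hypothetical threshold; tune for your pages
}

// Post-deploy usage, equivalent to curl-ing the URL (Node 18+ global fetch):
//   const html = await (await fetch("https://example.com/geo")).text();
//   if (isSpaShell(html)) throw new Error("deployed page is still an empty shell");
```

Run against the live URL, this is the same test a crawler effectively performs on every visit.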

What this is not

This is not a story about a broken pipeline. The pipeline worked correctly for every route it knew about. The gap was organizational: no step in the page-creation process required registering the new page with the prerender. The build did not enforce it. The tests did not check for it. The gap existed in the space between "the page works in a browser" and "the page is visible to everything that reads HTML." Making that gap impossible to miss is what the validation layer does.