AImpactScanner launched with a straightforward goal: scan any website and evaluate its AI search visibility across the MASTERY-AI Framework. It worked well — for sites built with traditional server-rendered HTML. But a growing percentage of scans were returning artificially low scores, and the reason was clear: JavaScript.
Sites built with React, Vue, Angular, and other single-page application (SPA) frameworks render their content in the browser, not on the server. When AImpactScanner fetched these pages, it saw the initial HTML shell — an empty `<div>` and a script tag — not the actual content. The scores reflected a blank page, not the site users actually see.
That's now fixed. AImpactScanner has rebuilt its entire backend with full JavaScript rendering and expanded its evaluation from 19 factors to 27.
The Problem with JavaScript-Heavy Sites
Single-page applications are everywhere. React, Vue, Angular, Next.js, Nuxt, SvelteKit — these frameworks power a significant portion of modern websites. They deliver fast, interactive experiences for users. But they create a problem for any tool that needs to read the page programmatically.
A traditional server-rendered page sends complete HTML to the browser. Every heading, paragraph, and image is in the source code. A JavaScript SPA sends a minimal HTML shell and a bundle of JavaScript that builds the page after it loads.
Most web analysis tools — including the previous version of AImpactScanner — only see the initial HTML. For a React app, that's typically just:
```html
<div id="root"></div>
<script src="/bundle.js"></script>
```
No headings. No structured data. No content. The scan would report missing schema, no author signals, thin content — not because the site lacked these things, but because the scanner couldn't see them.
Full JavaScript Rendering
The fix required rebuilding the scanning infrastructure from the ground up. AImpactScanner migrated from serverless edge functions to a dedicated backend running Puppeteer — a headless Chrome browser that executes JavaScript exactly as a real browser does.
The new scanning process works in three steps:
- Detection — The scanner identifies whether the page is server-rendered or a JavaScript SPA by analysing the initial HTML response
- Rendering — For SPAs, Puppeteer loads the page, waits for JavaScript to execute, and captures the fully rendered DOM — the same content a real user sees
- Analysis — The 27-factor evaluation runs against the rendered page, not the raw HTML shell
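The detection step can be approximated with a simple heuristic over the initial HTML response. The sketch below is illustrative, not AImpactScanner's actual implementation — the function name, mount-point ids, and text-length threshold are all assumptions:

```javascript
// Heuristic SPA detection over the initial (pre-JavaScript) HTML response.
// The mount-point ids and the 200-character threshold are illustrative
// assumptions, not the scanner's actual rules.
function looksLikeSpa(initialHtml) {
  // Strip scripts, then tags, to estimate how much visible text the
  // server actually sent.
  const withoutScripts = initialHtml.replace(/<script[\s\S]*?<\/script>/gi, "");
  const visibleText = withoutScripts.replace(/<[^>]+>/g, " ").trim();

  // Common client-side mount points: <div id="root">, <div id="app">, etc.
  const hasEmptyMount = /<div[^>]*id=["'](root|app|__next)["'][^>]*>\s*<\/div>/i
    .test(initialHtml);

  // An empty mount point plus almost no server-sent text strongly
  // suggests the page is rendered client-side.
  return hasEmptyMount && visibleText.length < 200;
}
```

A real detector would also weigh framework fingerprints (bundle names, `data-reactroot` attributes) and compare raw versus rendered content length, but the core signal is the same: an empty shell with nothing to read.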
This means a React marketing site, a Vue e-commerce store, or an Angular SaaS dashboard now receives the same thorough analysis as a static HTML site. The score reflects actual content and structure, not a blank shell.
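The rendering step itself is conceptually small. A minimal sketch, assuming the `puppeteer` npm package is installed; the wait strategy and timeout are illustrative choices, not the scanner's exact configuration:

```javascript
// Render a JavaScript-heavy page with headless Chrome and return the
// fully built DOM as HTML. Assumes `npm install puppeteer`.
async function renderPage(url) {
  const puppeteer = require("puppeteer"); // headless Chrome driver
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // "networkidle0" waits until network activity settles, giving
    // client-side frameworks time to finish building the DOM.
    await page.goto(url, { waitUntil: "networkidle0", timeout: 30000 });
    return await page.content(); // rendered HTML, not the empty shell
  } finally {
    await browser.close(); // always release the browser session
  }
}
```

Each call spins up a real browser session, which is why SPA rendering carries meaningful infrastructure cost compared with a plain HTTP fetch.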
From 19 Factors to 27
The original AImpactScanner evaluated 19 factors across the MASTERY-AI Framework — all focused on AI-specific visibility signals. During development, a pattern became clear: sites with poor traditional SEO almost always had poor AI visibility too.
AI SEO is fundamentally limited by basic SEO. If search engines can't index your site, AI systems can't cite it. If your pages load slowly, AI crawlers may time out. If you have broken links, AI can't follow your content architecture.
The expanded evaluation now covers 27 factors across 9 pillars, adding 8 traditional SEO checks that directly impact AI discoverability.
The 8 New SEO Factors
Each new factor addresses a foundational issue that, when broken, undermines every AI-specific optimisation you've implemented:
- Indexability Status — Can search engines actually index your pages? Noindex tags, canonicalisation errors, and crawl restrictions can make your content invisible to both search engines and AI.
- Mobile-Friendliness — Google's mobile-first indexing means the mobile version of your site is the version that matters. AI systems inherit these signals.
- Page Speed — Slow pages get crawled less frequently and may time out during AI retrieval. Core Web Vitals directly affect how often your content is available for AI citation.
- Broken Links — Broken internal and external links signal neglect and reduce crawl efficiency. AI systems use link structure to understand content relationships.
- Sitemap Presence — A well-structured sitemap helps both search engines and AI crawlers discover your full content library, especially pages that aren't well-linked internally.
- Canonical Tags — Duplicate content without proper canonicalisation splits authority signals. AI systems may cite the wrong version of your page — or avoid citing either.
- Internal Linking — Strong internal link structure helps AI systems understand your content hierarchy and topic relationships. Orphaned pages rarely get cited.
- Duplicate Versions — www vs. non-www, HTTP vs. HTTPS — if both versions exist without redirects, you're splitting every signal in half.
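Some of these checks are easy to approximate from the page's HTML. Two illustrative sketches — the function names and patterns are assumptions for demonstration; production checks would also consult HTTP headers, robots.txt, and redirects:

```javascript
// Indexability: a <meta name="robots"> tag containing "noindex" makes the
// page invisible to search engines and, by extension, to AI retrieval.
function hasNoindex(html) {
  const meta = html.match(/<meta[^>]*name=["']robots["'][^>]*>/i);
  return meta !== null && /noindex/i.test(meta[0]);
}

// Canonical tags: return the declared canonical URL, or null when the page
// leaves duplicate-content resolution to the crawler's guesswork.
// (Simplified: assumes rel appears before href in the tag.)
function canonicalUrl(html) {
  const link = html.match(
    /<link[^>]*rel=["']canonical["'][^>]*href=["']([^"']+)["'][^>]*>/i
  );
  return link ? link[1] : null;
}
```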
These aren't glamorous factors. They're plumbing. But broken plumbing undermines everything built on top of it.
Built-In llms.txt Generation
Growth and Scale tier users can now generate llms.txt files directly from their scan results. After AImpactScanner analyses your site, it uses the same data to identify which pages have the highest AI citation potential and generates a prioritised llms.txt file.
This connects two previously separate workflows: diagnosing AI visibility issues and optimising your site's AI discoverability through llms.txt. The generated file reflects actual content quality, not just a URL dump — because the scanner has already evaluated every page against the full MASTERY-AI Framework.
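For context, an llms.txt file follows the proposed llms.txt convention: a markdown document with a site title, a short summary, and prioritised link sections. A hypothetical example of what a generated file might look like (the site, URLs, and descriptions are invented for illustration):

```markdown
# Example Co

> Example Co builds inventory software for small retailers.

## Key Pages

- [Pricing](https://example.com/pricing): Plans and feature comparison
- [Product Guide](https://example.com/guide): How inventory sync works

## Optional

- [Blog archive](https://example.com/blog): Older announcement posts
```

Ordering pages by evaluated citation potential, rather than listing every URL, is what distinguishes a scan-driven file from a sitemap dump.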
Updated Pricing Tiers
The expanded capabilities come with updated pricing that reflects the infrastructure cost of JavaScript rendering:
| Tier | Price | Monthly Scans | llms.txt | SPA Rendering | History |
|---|---|---|---|---|---|
| Free | $0 | 3 | — | — | — |
| Solo | $4.95/mo | 10 | — | — | 30 days |
| Growth | $14.95/mo | 40 | 25/mo | — | 90 days |
| Scale | $29.95/mo | 100 | Unlimited | 100/mo | Unlimited |
The free tier still provides 3 scans per month — enough to evaluate your most important pages and understand where you stand. SPA rendering is available on the Scale tier, where the infrastructure cost of headless browser sessions is offset by the subscription.
What This Means for Your Site
If you've scanned your site before and received a score that felt lower than expected, it's worth scanning again. JavaScript rendering means the scanner now sees what your users see, and the 8 new SEO factors provide a more complete picture of your AI readiness.
If your site is built with React, Vue, Angular, or any SPA framework, this update is directly relevant. Previous scans would have evaluated an empty page. The new scan evaluates your actual content.
The expanded factor set also means your score may shift — in either direction. Sites with strong traditional SEO foundations may see their scores increase as those signals are now counted. Sites with underlying SEO issues (broken links, missing sitemaps, duplicate versions) may see areas for improvement that weren't previously flagged.
Either way, you now have a more accurate picture of how AI systems perceive your site. And that's the starting point for everything else.
Your last scan might have missed the full picture
Re-scan with full JavaScript rendering and 27-factor analysis — free.