AImpactScanner started with a simple question: what does AI actually see when it looks at your website? Not what Google sees — that's been well-understood for twenty years. What does ChatGPT see? What does Perplexity see? When an AI system evaluates your site for potential citation, what signals does it find, and which ones are missing?
The answer, for most sites, was: not much. And there was no tool that could tell them specifically what to fix. That's the gap AImpactScanner was built to fill.
The Gap We Set Out to Fill
When we started building AImpactScanner, the AI search optimisation landscape had a specific problem: diagnosis tools existed for traditional SEO (Google Search Console, Ahrefs, Semrush), but nothing existed for AI-specific visibility factors.
You could check whether Google could index your pages. You couldn't check whether ChatGPT could find, understand, and cite them. The factors that matter for AI citation — structured data quality, entity recognition, authorship signals, content architecture, llms.txt presence — weren't being evaluated by any mainstream tool.
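The kind of check involved can be sketched in a few lines. This is illustrative only: the field names and the specific thresholds are assumptions, not the scanner's actual data model.

```typescript
// Hypothetical sketch: flag which AI-visibility signals a parsed page
// is missing. Field names are illustrative, not the scanner's schema.
interface PageSignals {
  hasStructuredData: boolean; // JSON-LD / schema.org markup present
  hasAuthor: boolean;         // byline or author metadata found
  hasLlmsTxt: boolean;        // site serves /llms.txt
  headingDepth: number;       // deepest heading level used (h1..h6)
}

function missingSignals(page: PageSignals): string[] {
  const gaps: string[] = [];
  if (!page.hasStructuredData) gaps.push("structured data");
  if (!page.hasAuthor) gaps.push("authorship signals");
  if (!page.hasLlmsTxt) gaps.push("llms.txt");
  if (page.headingDepth < 2) gaps.push("content architecture (flat headings)");
  return gaps;
}
```

The point is that each signal is individually checkable, which is what makes an automated diagnostic feasible in the first place.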
The MASTERY-AI Framework identified 27 factors across 8 pillars that influence AI citation likelihood. The question was whether these factors could be automatically evaluated in a way that gave businesses actionable guidance — not just a score, but a specific roadmap for improvement.
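Turning 27 factor results into one actionable number is, at its core, a weighted aggregation. The framework's real weights are not public, so the sketch below uses placeholder pillars and weights purely to show the shape of the calculation.

```typescript
// Illustrative only: real MASTERY-AI factor weights are assumptions here.
type FactorResult = { pillar: string; weight: number; score: number }; // score in 0..1

// Weighted average across all evaluated factors, scaled to 0..100.
function overallScore(factors: FactorResult[]): number {
  const totalWeight = factors.reduce((sum, f) => sum + f.weight, 0);
  const weighted = factors.reduce((sum, f) => sum + f.weight * f.score, 0);
  return totalWeight === 0 ? 0 : Math.round((weighted / totalWeight) * 100);
}
```

The score is only half the output, though; the per-factor results are what drive the "specific roadmap" rather than a bare number.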
Connecting Analysis to Action: llms.txt Integration
The first major milestone beyond basic scanning was llms.txt integration. After running an analysis, Growth and Scale tier users can now generate optimised llms.txt files directly from their scan results.
This connects two previously separate workflows. Before integration, the process was: scan your site with one tool, identify issues, then use a separate tool to generate your llms.txt file. The problem was that the llms.txt generator didn't know anything about your scan results. It generated files without context about which pages scored well for AI readability and which didn't.
With integrated generation, the llms.txt file reflects actual content quality. Pages that scored highly for AI citation potential are prioritised. Pages with structural issues are noted. The file becomes a curated guide rather than a URL dump — because the scanner has already evaluated every page against the full MASTERY-AI Framework.
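A minimal sketch of what "scan-informed" generation means in practice: sort pages by citation score and drop the low scorers, rather than dumping every URL. The threshold, format details, and function names here are assumptions loosely following the public llms.txt convention, not the product's actual rules.

```typescript
// Hypothetical scan-informed llms.txt builder. The minScore cutoff
// and markdown layout are illustrative assumptions.
interface ScannedPage { url: string; title: string; score: number } // score 0..100

function buildLlmsTxt(site: string, pages: ScannedPage[], minScore = 60): string {
  const curated = pages
    .filter(p => p.score >= minScore)        // drop pages not ready to be cited
    .sort((a, b) => b.score - a.score)       // lead with the strongest pages
    .map(p => `- [${p.title}](${p.url})`);
  return `# ${site}\n\n## Pages\n\n${curated.join("\n")}\n`;
}
```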
The integration was built on LLM.txt Mastery's API, with custom Edge Functions handling the analysis, generation, and download workflow. Paid-tier users see a dashboard panel with usage statistics, progress tracking, and direct download — gated by subscription tier to reflect the infrastructure cost of generation.
Why Tiered Access Matters
The pricing model reflects a deliberate decision: keep the diagnostic accessible while gating the more resource-intensive features.
- Free tier — 3 scans per month. Enough to evaluate your most important pages and understand where you stand. No llms.txt generation, no SPA rendering, no history.
- Solo ($4.95/mo) — 10 scans, 30 days of history. For freelancers and solopreneurs who need regular checks on their own site.
- Growth ($14.95/mo) — 40 scans, 25 llms.txt generations, 90 days of history. For businesses actively optimising their AI visibility and tracking improvement over time.
- Scale ($29.95/mo) — 100 scans, unlimited llms.txt, SPA rendering, unlimited history. For agencies and businesses managing multiple sites or running ongoing optimisation programmes.
The free tier is permanent — not a trial. The goal is that every business can at least diagnose their AI visibility without paying anything. The paid tiers unlock the tools to actually fix what the diagnosis reveals.
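The gating itself reduces to a lookup against per-tier limits. The limit values below come straight from the pricing table above; the shape of the check is an assumption about how such gating might be implemented.

```typescript
// Tier-gating helper mirroring the published scan limits.
type Tier = "free" | "solo" | "growth" | "scale";

const SCAN_LIMITS: Record<Tier, number> = {
  free: 3,
  solo: 10,
  growth: 40,
  scale: 100,
};

function canScan(tier: Tier, scansThisMonth: number): boolean {
  return scansThisMonth < SCAN_LIMITS[tier];
}
```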
Technical Decisions Along the Way
Building a tool that scans websites at scale involves decisions that aren't obvious from the outside but have a significant impact on the quality of results:
Edge Functions vs. Dedicated Backend
The initial architecture used serverless Edge Functions for scanning. This worked for basic analysis but hit limitations quickly — specifically a 60-second timeout that wasn't enough for JavaScript-heavy sites or thorough analysis. The migration to a dedicated backend with a proper async job queue eliminated this constraint and enabled the full 27-factor evaluation.
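The async-job pattern that replaces a long-held request can be sketched with an in-memory queue: the client enqueues a scan and gets a job id back immediately, then polls for status while a worker does the slow work. The real backend presumably persists jobs in a database; this class and its method names are illustrative only.

```typescript
// Minimal in-memory sketch of the enqueue-and-poll pattern that avoids
// holding a request open past a serverless timeout.
type JobStatus = "queued" | "running" | "done";
interface Job { id: string; url: string; status: JobStatus; result?: number }

class ScanQueue {
  private jobs = new Map<string, Job>();
  private nextId = 1;

  // Returns instantly; no request waits on the scan itself.
  enqueue(url: string): string {
    const id = `job-${this.nextId++}`;
    this.jobs.set(id, { id, url, status: "queued" });
    return id;
  }

  // A background worker records the finished result.
  complete(id: string, score: number): void {
    const job = this.jobs.get(id);
    if (job) Object.assign(job, { status: "done", result: score });
  }

  status(id: string): Job | undefined {
    return this.jobs.get(id);
  }
}
```

With this shape, a scan can take as long as it needs; only the polling requests are short-lived.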
Authentication Architecture
User tier detection sounds trivial but proved tricky in practice. The initial implementation had a bug where user tiers consistently displayed as "free" — the dashboard component wasn't receiving the user's subscription data. The root cause was a missing prop in the top-level component, causing tier information to default to the free tier. A small fix with significant UX implications: users on paid plans were seeing free-tier limitations and generating support requests.
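The class of bug is easy to reproduce in miniature: a tier resolver that silently falls back to "free" when subscription data is absent. The fallback is correct for genuinely free users, but it also masks a missing prop, so paid users render with free-tier limits. The names below are hypothetical, not the actual component code.

```typescript
// Hypothetical tier resolver illustrating the silent-fallback bug.
type Tier = "free" | "solo" | "growth" | "scale";
interface Subscription { tier: Tier }

function resolveTier(subscription?: Subscription): Tier {
  // If the parent component never passes `subscription`, every user
  // resolves to "free" -- indistinguishable from a real free account.
  return subscription?.tier ?? "free";
}
```

A stricter variant would distinguish "no data yet" from "free", for example by returning `undefined` and showing a loading state until the subscription actually arrives.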
Staging and Production Isolation
Running separate staging and production environments seems like standard practice, but for a scanning tool it's essential. A bug in the scoring logic can't be safely debugged in production, where it would feed paying users incorrect results. Proper environment isolation with a dedicated staging environment allowed us to catch scoring issues before they reached real users.
What Makes This Different
The AI visibility tool market has grown rapidly, and most tools in it focus on a single capability: generating llms.txt files. AImpactScanner takes a different approach by treating llms.txt as one component of a larger diagnostic workflow.
The core difference is that AImpactScanner starts with diagnosis, not generation. It evaluates your site against 27 factors first, identifies specific gaps, and then — for paid users — generates an llms.txt file informed by that analysis. The file reflects what the scanner found, not just what URLs exist.
This matters because a good llms.txt file can't compensate for a poorly structured site. If your content lacks authorship signals, your schema is incomplete, and your pages aren't structured for AI readability, an llms.txt file won't fix those problems. It'll just direct AI to pages that aren't ready to be cited.
The value of the full loop — diagnose, optimise, generate, track — is that each step informs the next. You know what to fix before you generate. You know what improved after you fix it.
What's Next
The roadmap includes JavaScript rendering for all tiers (currently Scale-only), expanded factor coverage as AI systems evolve their citation preferences, and integration with monitoring tools for ongoing citation tracking. The goal is a complete AI visibility platform — not just a scanner, but a continuous feedback loop between your site's AI readiness and its actual citation performance.
If you're interested in where your site stands right now, the free tier gives you 3 scans per month — enough to evaluate your most important pages and see exactly what AI systems see when they look at your content.
What does AI actually see on your site?
Find out in 60 seconds with a free AI visibility scan across 27 factors.