Website Audit Skill
Audit websites for SEO, technical, content, performance, and security issues using the squirrelscan CLI.
squirrelscan provides a CLI tool, squirrel, available for macOS, Windows, and Linux. It carries out extensive website auditing by emulating a browser and a search crawler, and by analyzing the website's structure and content against more than 230 rules.
It provides a list of issues as well as suggestions on how to fix them.
macOS/Linux Installation
The installer will:
Install to ~/.local/share/squirrel/releases/{version}/
Create a symlink at ~/.local/bin/squirrel
Initialize settings at ~/.squirrel/settings.json
If ~/.local/bin is not in your PATH, add it to your shell configuration:
export PATH="$HOME/.local/bin:$PATH"
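To check whether ~/.local/bin is already on your PATH before editing shell config, a small POSIX-sh helper like the following works (the helper name is ours, not part of squirrel):

```shell
#!/bin/sh
# path_contains DIR PATHVALUE - succeeds if DIR is an exact component of PATHVALUE
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Print a hint only when the squirrel bin dir is missing from PATH
if path_contains "$HOME/.local/bin" "$PATH"; then
  echo "~/.local/bin already on PATH"
else
  echo 'add to your shell config: export PATH="$HOME/.local/bin:$PATH"'
fi
```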
Windows Installation
Install using PowerShell:
irm https://squirrelscan.com/install.ps1 | iex
This will:
Download the latest release binary
Install to %LOCALAPPDATA%\squirrel\
Add squirrel to your PATH
If using Command Prompt, you may need to restart your terminal for PATH changes to take effect.
Verify Installation
Check that squirrel is installed and accessible:
squirrel --version
Setup
Running squirrel init will set up a squirrel.toml configuration file in the current directory.
Each project has a squirrel project name for the database. By default this is the name of the website you audit, but you can set it yourself so that all audits for a project live in one database.
You do this either on init with:
squirrel init --project-name my-project
# or with aliases
squirrel init -n my-project
# overwrite existing config
squirrel init -n my-project --force
or via config:
squirrel config set project.name my-project
If there is no squirrel.toml in the directory you're running from, CREATE ONE with squirrel init and specify the '-n' parameter for a project name (infer this).
The project name identifies the project and is used to generate the database name.
Project data is stored in ~/.squirrel/projects/
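For orientation, a minimal squirrel.toml might look like the sketch below. Only project.name is confirmed by the commands above; treat the file layout as an assumption and prefer squirrel init / squirrel config set over hand-editing:

```toml
# Sketch only - generated by `squirrel init`; project.name is the confirmed key
[project]
name = "my-project"
```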
Usage
Intro
There are three processes you can run; their results are all cached in the local project database:
crawl - run a new crawl, or refresh/continue an existing one
analyze - analyze the crawl results
report - generate a report in the desired format (llm, text, console, html, etc.)
The 'audit' command is a wrapper around these three processes and runs them sequentially:
squirrel audit https://example.com --format llm
YOU SHOULD always prefer format option llm - it was made for you and provides an exhaustive and compact output format.
FIRST SCAN should be a surface scan: a quick, shallow scan that gathers basic information about the website, such as its structure, content, and technology stack. It runs quickly and without impacting the website's performance.
SECOND SCAN should be a deep scan: a thorough, detailed scan that gathers more information, such as security, performance, and accessibility findings. It takes longer and may impact the website's performance.
If the user doesn't provide a website to audit, infer the possibilities from the local directory and from environment variables (e.g. linked Vercel projects, references in memory or in the code).
If the directory you're running from provides a way to run or restart a local dev server, run the audit against that.
If you have more than one option on a website to audit that you discover - prompt the user to choose which one to audit.
If there is no website - either local, or on the web to discover to audit, then ask the user which URL they would like to audit.
You should PREFER to audit live websites - only there do we get a TRUE representation of the website and its performance or rendering issues.
If you have both local and live websites to audit, prompt the user to choose which one to audit and SUGGEST they choose live.
You can apply fixes from an audit on the live site against the local code.
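The discovery heuristics above can be sketched in shell. The locations checked here (the homepage field of package.json, bare URLs in a .env file) are illustrative assumptions, not an exhaustive list:

```shell
#!/bin/sh
# find_candidates DIR - print candidate audit URLs discovered in a project directory
find_candidates() {
  dir="$1"
  # homepage field in package.json, if present
  if [ -f "$dir/package.json" ]; then
    grep -o '"homepage": *"[^"]*"' "$dir/package.json" | sed 's/.*"\(http[^"]*\)"/\1/'
  fi
  # bare URLs in a .env file
  if [ -f "$dir/.env" ]; then
    grep -oE 'https?://[^" ]+' "$dir/.env"
  fi
  true  # a directory with no candidates is not an error
}

# usage: find_candidates .
```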
When planning, scope tasks so they can run concurrently as subagents to speed up fixes.
When implementing fixes, take advantage of subagents to speed up implementation.
When you finish, run typechecking and formatting against generated code if available in the environment (ruff for Python; biome and tsc for TypeScript; etc.).
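Tool availability varies by environment, so guard each checker behind a lookup. A sketch (the helper name is ours; the commented-out invocations are typical flags, verify them for your project):

```shell
#!/bin/sh
# run_if_present CMD [ARGS...] - run CMD only if it exists on PATH, else skip quietly
run_if_present() {
  command -v "$1" >/dev/null 2>&1 || { echo "skip: $1 not installed"; return 0; }
  "$@"
}

# Typical post-fix checks (uncomment for real use):
#   run_if_present ruff check .
#   run_if_present tsc --noEmit

# Demonstration: 'sh' exists everywhere; a made-up tool is skipped
run_if_present sh -c 'echo sh is available'
run_if_present made-up-linter-demo --version
```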
Basic Workflow
The audit process is two steps:
Run the audit (saves to database, shows console output)
Export report in desired format
# Step 1: Run audit (default: console output)
squirrel audit https://example.com
# Step 2: Export as LLM format
squirrel report <audit-id> --format llm
Regression Diffs
When you need to detect regressions between audits, use diff mode:
# Compare current report against a baseline audit ID
squirrel report --diff <audit-id> --format llm
# Compare latest domain report against a baseline domain
squirrel report --regression-since example.com --format llm
Diff mode supports console, text, json, llm, and markdown. html and xml are not supported.
Running Audits
When running an audit:
Fix ALL issues - critical, high, medium, and low priority
Don't stop early - continue until score target is reached (see Score Targets below)
Parallelize fixes - use subagents for bulk content edits (alt text, headings, descriptions)
Only pause for human judgment - broken links may need manual review; everything else should be fixed automatically
Show before/after - present score comparison only AFTER all fixes are complete
IMPORTANT: Fix ALL issues, don't stop early.
Iteration Loop: After fixing a batch of issues, re-audit and continue fixing until:
Score reaches target (typically 85+), OR
Only issues requiring human judgment remain (e.g., "should this link be removed?")
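The iteration loop can be sketched as shell pseudologic. get_score and apply_fixes are stand-ins: in practice get_score would run squirrel audit with --format llm and parse the overall score (the parsing step is an assumption), and apply_fixes is where a batch of edits happens:

```shell
#!/bin/sh
# audit_loop TARGET - keep fixing and re-auditing until the score reaches TARGET
audit_loop() {
  target="$1"
  rounds=0
  while [ "$(get_score)" -lt "$target" ]; do
    apply_fixes                # fix the current batch of issues, then re-audit
    rounds=$((rounds + 1))
  done
  echo "target $target reached after $rounds fix rounds"
}
```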
Treat all fixes equally: Code changes (*.tsx, *.ts) and content changes (*.md, *.mdx, *.html) are equally important. Don't stop after code fixes.
Parallelize content fixes: For issues affecting multiple files:
Spawn subagents to fix in parallel
Example: 7 files need alt text → spawn 1-2 agents to fix all
Example: 30 files have heading issues → spawn agents to batch edit
Don't ask, act: Don't pause to ask "should I continue?" - proceed autonomously until complete.
Completion criteria:
✅ All errors fixed
✅ All warnings fixed (or documented as requiring human review)
✅ Re-audit confirms improvements
✅ Before/after comparison shown to user
✅ Site is complete and fixed (scores above 95 with full coverage)
Run multiple audits to ensure completeness and fix quality. Prompt the user to deploy fixes if auditing a live production, preview, staging or test environment.
Score Targets
| Starting Score | Target Score | Expected Work |
|----------------|--------------|----------------|
| < 50 (Grade F) | 75+ (Grade C) | Major fixes |
| 50-70 (Grade D) | 85+ (Grade B) | Moderate fixes |
| 70-85 (Grade C) | 90+ (Grade A) | Polish |
| > 85 (Grade B+) | 95+ | Fine-tuning |
A site is only considered COMPLETE and FIXED when scores are above 95 (Grade A) with coverage set to FULL (--coverage full).
Don't stop until target is reached.
Issue Categories
| Category | Fix Approach | Parallelizable |
|----------|--------------|----------------|
| Meta tags/titles | Edit page components or metadata.ts | No |
| Structured data | Add JSON-LD to page templates | No |
| Missing H1/headings | Edit page components + content files | Yes (content) |
| Image alt text | Edit content files | Yes |
| Heading hierarchy | Edit content files | Yes |
| Short descriptions | Edit content frontmatter | Yes |
| HTTP→HTTPS links | Bulk sed/replace in content | Yes |
| Broken links | Manual review (flag for user) | No |
For parallelizable fixes: Spawn subagents with specific file assignments.
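To hand out specific file assignments, a file list can be split round-robin into N batches, one per subagent. A sketch (the batch helper is ours):

```shell
#!/bin/sh
# batch N - read file paths on stdin and prefix each with a batch number 1..N,
# assigning files round-robin so every subagent gets a similar share
batch() {
  awk -v n="$1" '{ print ((NR - 1) % n) + 1 ": " $0 }'
}

# usage: printf '%s\n' content/blog/*.md | batch 3
```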
Content File Fixes
Many issues require editing content files (*.md, *.mdx). These are just as important as code fixes:
Image alt text: Edit markdown image tags to add descriptions
Heading hierarchy: Change ### to ## where H2 is skipped
Meta descriptions: Extend excerpt in frontmatter to 120+ chars
HTTP links: Replace http:// with https:// in all links
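The HTTP→HTTPS pass lends itself to a bulk edit. A sketch using GNU sed (the content directory is an assumption; review the diff afterwards, since a few hosts are legitimately http-only):

```shell
#!/bin/sh
# upgrade_links DIR - rewrite http:// to https:// in all markdown files under DIR
upgrade_links() {
  find "$1" \( -name '*.md' -o -name '*.mdx' \) -exec sed -i 's|http://|https://|g' {} +
}

# usage: upgrade_links content
```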
For 5+ files needing the same fix type, spawn a subagent:
Task: Fix missing alt text in 6 posts
Files: [list of files]
Pattern: Find images without alt text and add descriptive alt text
Only parallelize when:
Fixes have no dependencies on each other
Files are independent (not importing from each other)
Subagent prompt structure:
Fix [issue type] in the following files:
- path/to/file1.md
- path/to/file2.md
- path/to/file3.md
Pattern: [what to find]
Fix: [what to change]
Do not ask for confirmation. Make all changes and report what was fixed.
Example - parallel alt text fixes:
When audit shows 12 files missing alt text, spawn 2-3 subagents in a SINGLE message:
[Task tool call 1]
subagent_type: "general-purpose"
prompt: |
Fix missing image alt text in these files:
- content/blog/post-1.md
- content/blog/post-2.md
- content/blog/post-3.md
- content/blog/post-4.md
Find images without alt text (markdown images with empty alt, i.e. ![](...), or <img> tags without alt=).
Add descriptive alt text based on image filename and context.
Do not ask for confirmation.
[Task tool call 2]
subagent_type: "general-purpose"
prompt: |
Fix missing image alt text in these files:
- content/blog/post-5.md
- content/blog/post-6.md
- content/blog/post-7.md
- content/blog/post-8.md
[same instructions...]
[Task tool call 3]
subagent_type: "general-purpose"
prompt: |
Fix missing image alt text in these files:
- content/blog/post-9.md
- content/blog/post-10.md
- content/blog/post-11.md
- content/blog/post-12.md
[same instructions...]
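Before spawning the subagents, the affected files can be enumerated mechanically. This sketch matches only the empty-alt markdown form ![](...); <img> tags without alt need a separate check:

```shell
#!/bin/sh
# find_missing_alt DIR - list markdown files containing images with empty alt text
find_missing_alt() {
  grep -rlE '!\[\]\(' --include='*.md' --include='*.mdx' "$1" || true
}

# usage: find_missing_alt content/blog
```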
Example - parallel heading fixes:
[Task tool call 1]
Fix H1/H2 heading hierarchy in: docs/guide-1.md, docs/guide-2.md, docs/guide-3.md
Change ### to ## where H2 is skipped. Ensure single H1 per page.
[Task tool call 2]
Fix H1/H2 heading hierarchy in: docs/guide-4.md, docs/guide-5.md, docs/guide-6.md
[same instructions...]
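A quick way to flag candidates for these heading fixes: print a file's name when a ### appears before any ## (i.e. the H2 level was skipped). A sketch using awk:

```shell
#!/bin/sh
# skipped_h2 FILE - print FILE when an H3 heading appears before any H2
skipped_h2() {
  awk '/^## /  { h2 = 1 }
       /^### / { if (!h2) { print FILENAME; exit } }' "$1"
}

# usage: for f in docs/*.md; do skipped_h2 "$f"; done
```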
Surface mode is smart - it detects URL patterns like /blog/{slug} or /products/{id} and only crawls one sample per pattern. This makes it efficient for sites with many similar pages (blogs, e-commerce).
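The pattern-sampling idea can be illustrated in miniature: collapse URL paths that differ only in their final segment and keep the first example of each group (the real crawler's heuristics are certainly richer than this sketch):

```shell
#!/bin/sh
# sample_patterns - read URL paths on stdin; print one sample per /prefix/{slug} pattern
sample_patterns() {
  awk '{ key = $0; sub(/\/[^\/]*$/, "/*", key); if (!seen[key]++) print }'
}
```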
# Quick health check (25 pages, no link discovery)
squirrel audit https://example.com -C quick --format llm
# Default surface audit (100 pages, pattern sampling)
squirrel audit https://example.com --format llm
# Full comprehensive audit (500 pages)
squirrel audit https://example.com -C full --format llm
# Override page limit for any mode
squirrel audit https://example.com -C surface -m 200 --format llm
When to use each mode:
quick: CI pipelines, daily health checks, monitoring
surface: Most audits - covers unique templates efficiently
full: Before launches, comprehensive analysis, deep dives
Example 1: Standard Audit
# User asks: "Check squirrelscan.com for SEO issues"
squirrel audit https://squirrelscan.com --format llm
Example 2: Deep Audit for Large Site
# User asks: "Do a thorough audit of my blog with up to 500 pages"
squirrel audit https://myblog.com --max-pages 500 --format llm
Example 3: Fresh Audit After Changes
# User asks: "Re-audit the site and ignore cached results"
squirrel audit https://example.com --refresh --format llm
Example 4: Two-Step Workflow (Reuse Previous Audit)
# First run an audit
squirrel audit https://example.com
# Note the audit ID from output (e.g., "a1b2c3d4")
# Later, export in different format
squirrel report a1b2c3d4 --format llm
Output
On completion, give the user a summary of all the changes you made.
Troubleshooting
squirrel command not found
If you see this error, squirrel is not installed or not in your PATH.
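A portable way to check, sketched as a helper (the function name is ours):

```shell
#!/bin/sh
# diagnose_missing CMD - report where CMD lives, or hint at a PATH problem
diagnose_missing() {
  if command -v "$1" >/dev/null 2>&1; then
    printf '%s found at %s\n' "$1" "$(command -v "$1")"
  else
    printf '%s not found - check that its install dir is on PATH (see the install locations above)\n' "$1"
  fi
}

diagnose_missing squirrel
```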