Is your site agent-ready?

Scan any URL against the discovery and access protocols AI agents actually look for in 2026 — robots.txt, Content Signals, Link headers, API catalog, MCP server card, OAuth, and more. See what passes, what fails, and exactly what to add next.

Free · no signup · 17+ checks · ~5 sec per scan
Why it matters

Get found by agents

AI agents (ChatGPT plugins, Claude tool use, Cursor MCP) probe well-known endpoints before crawling pages. If yours are missing, they move on to a competitor that has them.

Pass the level ladder

Bot-Aware → Agent-Readable → Agent-Friendly → Agent-Native. Each step unlocks more agent-driven discovery and integration. The scan tells you exactly which check is blocking your next level.

FAQ

Questions about agent-readiness.

What do we check?

The scan runs 17+ checks across five categories:

- Discoverability: robots.txt, sitemap, Link response headers
- Content Accessibility: Markdown content negotiation
- Bot Access Control: AI bot rules in robots.txt, Content Signals, Web Bot Auth
- Agent Discovery: MCP Server Card, Agent Skills, WebMCP, API Catalog, OAuth metadata
- Agent Commerce: x402, MPP, UCP, ACP

What's the easiest way to improve my score?

Start with the easy wins: a valid robots.txt with explicit AI bot rules and a Sitemap directive, plus a homepage Link header that points to /.well-known/api-catalog and your docs. Two files and one header line are usually enough to clear Level 2.
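A minimal sketch of those two pieces (domain and bot list illustrative; adapt to your site):

```txt
# robots.txt: explicit AI bot rules plus a Sitemap directive
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

And the one header line on your homepage response:

```http
Link: </.well-known/api-catalog>; rel="api-catalog", </docs>; rel="service-doc"
```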

Do I need an MCP Server Card if I have no MCP server?

No. The MCP Server Card check only matters if you actually expose tools to AI agents (Claude, Cursor, ChatGPT plugins). If you don't, leave it failing — it's not penalizing you, it's just signalling "no MCP surface here".
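If you do run one, the card is a small JSON file at a well-known path. A hedged sketch (field names and values illustrative; check the current MCP spec for the canonical schema):

```json
{
  "name": "example-tools",
  "description": "MCP tools exposed by example.com",
  "endpoint": "https://example.com/mcp",
  "transport": "streamable-http"
}
```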

Should I block GPTBot, ClaudeBot, PerplexityBot in robots.txt?

For most marketing sites: no. These crawlers feed the models that answer questions about your category — block them and you become invisible in AI search. Add Content-Signal rules instead (`ai-train=yes, ai-input=yes, search=yes`) to declare consent without losing visibility.
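A sketch of that pattern in robots.txt (Content-Signal syntax per Cloudflare's Content Signals Policy; the values are your call):

```txt
# Keep AI crawlers in, but state consent explicitly
User-agent: *
Content-Signal: ai-train=yes, ai-input=yes, search=yes
Allow: /
```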

Does my site need an OpenAPI spec?

Only if you publish a public API and want agents to auto-generate clients. The API Catalog check passes with just `service-doc` (a link to human-readable docs) and `status` (a health endpoint) — no OpenAPI required. Add `service-desc` later when there's a real consumer.
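A passing catalog can be tiny. A sketch of an RFC 9727-style linkset served at /.well-known/api-catalog (URLs illustrative):

```json
{
  "linkset": [
    {
      "anchor": "https://example.com/api/",
      "service-doc": [
        { "href": "https://example.com/docs", "type": "text/html" }
      ],
      "status": [
        { "href": "https://example.com/status" }
      ]
    }
  ]
}
```

Serve it as `application/linkset+json`, and add a `service-desc` entry pointing at an OpenAPI file once there's a real consumer.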

Will the scan affect my SEO or rankings?

No. We GET a handful of public files (/robots.txt, /sitemap.xml, /.well-known/*, /api-docs) with a neutral User-Agent. We don't crawl the rest of the site, store cookies, or send anything to Google.
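You can reproduce the same requests yourself in seconds (domain illustrative):

```sh
# The same public files the scan fetches
curl -s https://example.com/robots.txt
curl -s https://example.com/sitemap.xml
curl -s https://example.com/.well-known/api-catalog
# Check the homepage Link header
curl -sI https://example.com/ | grep -i '^link:'
```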

How often should I re-scan?

After every meaningful change: a robots.txt edit, a new .well-known file, a header config update. The scan is fresh each run — no caching on our side — so you can verify a fix in seconds.

Where can I learn more?

Cloudflare publishes a comprehensive guide on building agent-ready sites: docs.cloudflare.com/fundamentals/reference/markdown-for-agents/. Each failed check in the scan output also links the relevant RFC or spec.