
Is Your Site Agent-Ready?

Isitagentready.com scans your site to assess AI-agent readiness—discoverability, content accessibility, bot access control, API/auth discovery, and commerce standards.

What is Is Your Site Agent-Ready?

Isitagentready.com is a website scan tool that checks how ready a site is for AI agents. The scan evaluates emerging standards and signals across multiple categories—such as discoverability, content accessibility, bot access control, API/auth discovery, and agentic commerce—so you can identify what to adjust to improve an agent’s ability to browse and interact with your site.

The core purpose is to provide a structured “agent readiness” score plus a concrete checklist of passed and failed checks, each tied to common web standards and agent-facing interfaces such as robots directives, sitemap/link metadata, and the protocol-style components referenced in the scan.

Key Features

  • Configurable website scanning: Enter a website URL and customize which checks to run, rather than running a fixed full test every time.
  • Discoverability checks: Evaluates signals intended to help agents find your site content, including robots.txt, sitemap directives, and link response headers.
  • Content accessibility checks: Tests how easily agents can interpret your content using Markdown negotiation and bot access/content signals.
  • Bot access control validation: Checks for AI bot rules in robots.txt and related signals that control whether automated agents can access site resources.
  • API / auth / agent protocol discovery: Looks for agent-oriented endpoints and discovery artifacts referenced by the scan, including an MCP server card, Agent Skills, WebMCP, API catalog signals, OAuth discovery, and OAuth protected-resource discovery.
  • Agentic commerce compatibility checks: Includes commerce-related checks such as x402, UCP, and ACP as part of its broader agent-readiness assessment.
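
As a reference point for the discoverability and bot access control checks above, a robots.txt likely to satisfy them might look like the following. The crawler names and sitemap URL are illustrative examples, not an exhaustive or authoritative list:

```
# Allow specific AI crawlers (names are examples, not a complete list)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Default rule for all other crawlers
User-agent: *
Allow: /

# Sitemap directive so agents can discover the site structure
Sitemap: https://example.com/sitemap.xml
```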

How to Use Is Your Site Agent-Ready?

  1. Open Isitagentready.com and provide the Website URL you want to evaluate.
  2. Optionally customize the scan by selecting which checks to run.
  3. Review the results grouped under the scan’s categories (discoverability, content accessibility, bot access control, API/auth/protocol discovery, and commerce checks).
  4. Apply the tool’s guidance for the next improvements, starting with items called out as “easy wins,” such as publishing a valid robots.txt with AI bot rules and sitemap directives.
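
The robots.txt portion of step 3 can be approximated offline before running a full scan. This is a minimal sketch, assuming the “AI bot rules” check amounts to looking for known AI crawler user-agents plus a Sitemap directive; the user-agent names are illustrative:

```shell
#!/bin/sh
# Sample robots.txt content to validate (illustrative values).
robots='User-agent: GPTBot
Allow: /

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml'

# Check for at least one rule naming a known AI crawler.
if printf '%s\n' "$robots" | grep -qiE '^User-agent: *(GPTBot|ClaudeBot|PerplexityBot)'; then
  echo "AI bot rule: found"
else
  echo "AI bot rule: missing"
fi

# Check for a Sitemap directive.
if printf '%s\n' "$robots" | grep -qi '^Sitemap:'; then
  echo "Sitemap directive: found"
else
  echo "Sitemap directive: missing"
fi
```

Point the same greps at your real robots.txt file to get a quick pass/fail before rescanning.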

Use Cases

  • Website operators preparing for agent-based browsing: Audit whether robots.txt, sitemaps, and discovery signals are set up so AI agents can locate and access relevant pages.
  • Teams standardizing agent access policies: Validate that bot access control rules exist (including AI bot rules referenced in robots.txt) and identify missing or unclear directives.
  • Developers exposing agent-facing interfaces: Check for discovery artifacts related to API/auth and agent protocols, such as MCP server card signals, OAuth discovery, and OAuth protected-resource information.
  • Product teams evaluating agentic commerce readiness: Run the scan to see whether commerce-related standards referenced by the tool (x402, UCP, ACP) are present and can be discovered by agents.
  • Iterative improvement workflow: Use the “improve the score” guidance to apply targeted changes, then scan again to confirm the site’s agent readiness improves.

FAQ

What does the scan measure?

It checks multiple categories of agent readiness—discoverability, content accessibility, bot access control, protocol/API/auth discovery, and commerce-related compatibility—using standards and signals listed in the tool’s checks.
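
The “Markdown negotiation” part of content accessibility presumably follows standard HTTP content negotiation: an agent asks for Markdown via the Accept header, and the server returns it when available. A toy sketch of the server-side decision (the Accept value and file names are placeholders):

```shell
#!/bin/sh
# Illustrative Accept header an AI agent might send.
accept='text/markdown, text/html;q=0.8'

# Naive negotiation: prefer Markdown when the client lists it.
case "$accept" in
  *text/markdown*) echo "serve: page.md" ;;
  *)               echo "serve: page.html" ;;
esac
```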

Do I have to run every check?

No. The scan interface supports customizing which checks to run for a given website.

What are the easiest improvements to make?

The tool highlights two starting points: publish a valid robots.txt with AI bot rules and sitemap directives, and ensure your homepage exposes useful discovery headers or metadata.
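
A homepage response carrying such discovery signals might look like the sketch below; the Link header targets and rel values are assumptions for illustration, not registered relations the tool is documented to require. You can list your own site’s headers with `curl -sI`:

```shell
#!/bin/sh
# Example response headers a homepage might expose (values are illustrative).
headers='HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Link: <https://example.com/sitemap.xml>; rel="sitemap"
Link: <https://example.com/llms.txt>; rel="llms-txt"'

# Pull out the discovery-relevant Link headers.
printf '%s\n' "$headers" | grep -i '^Link:'
```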

Does the tool focus only on browsing access, or also APIs and auth?

Both. The scan includes checks for AI bot rules and content signals, and it also covers API/auth/protocol discovery items such as MCP server card, Agent Skills, WebMCP, and OAuth discovery/protected-resource discovery.
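
The OAuth discovery items map to standardized well-known URIs (RFC 8414 for authorization server metadata, RFC 9728 for protected resource metadata). The MCP path below is a hypothetical example, since the exact URI the scan probes is not documented here:

```shell
#!/bin/sh
# Well-known discovery paths a scan like this might probe.
# The first two are standardized (RFC 8414, RFC 9728);
# the MCP path is an assumed example, and the base URL is a placeholder.
base='https://your-site.example'
for path in \
  /.well-known/oauth-authorization-server \
  /.well-known/oauth-protected-resource \
  /.well-known/mcp \
; do
  echo "probe: $base$path"
done
```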

Where can I learn more about building agents that interact with websites?

The page points to Cloudflare Agents documentation for learning how to build and deploy AI agents that can browse, interact, and transact on the web.

Alternatives

  • Manual standards audits (robots.txt, sitemap, headers): Instead of an automated readiness scan, teams can independently verify robots.txt, sitemap presence, and header/metadata patterns; this offers more control but less structured guidance.
  • Other agent-readiness or web compatibility checkers: Tools in the same category can focus on different subsets of agent compatibility (e.g., mostly content discovery and bot access, or mostly API/auth discovery), which may be simpler if you only need one dimension.
  • Developer-led protocol integration verification: If your priority is protocol support (e.g., OAuth discovery/protected-resource patterns or MCP-style interfaces), you can validate those artifacts directly in your application stack rather than using an end-to-end site scanner.