
Introducing AgentReady: the first open standard for agent readiness

ora research·Mar 20, 2026·4 min read

Agents are a new class of user, and the web has a dozen protocols inviting them in. There is no shared way to name what a product should implement to be usable by agents. Today we’re fixing that.

MCP, A2A, ACP, UCP, MPP, x402, Web Bot Auth, llms.txt, agent-skills, NLWeb, OpenAPI, OAuth metadata, agent-card. In the last year the agentic web has accumulated the raw material of a platform and none of the connective tissue. Every vendor names a different slice, and buyers get a different answer from each one.

We’re introducing AgentReady, the first open standard for agent readiness. It lives at agentready.org. It is vendor-neutral, MIT-licensed, versioned, and open for contribution.

What a standard is for, and what it isn’t

The interesting question is not whether a site serves the right files. It is whether agents can do the work. That answer lives across the whole journey: whether an agent can discover the product, parse what it sells, call its tools, authenticate on behalf of a user, and finish a real task.

But scoring that journey is a separate problem from naming what to implement, and the two have been getting tangled. Cloudflare’s isitagentready.com checks a set of well-known endpoints. Salesforce grades MCP tooling inside Agentforce. Shopify ships a catalog-readiness badge. Each is useful; none of them agree, partly because they each measure differently and partly because they each define the surface differently.

A standard names what to implement. An implementation decides how to measure it.

AgentReady takes only the first half of that problem. It is a specification, not a benchmark. It says which protocols and conventions a product should implement and how strongly each one applies (MUST, SHOULD, MAY); it does not prescribe weights, thresholds, or scoring. That deliberate split is what lets multiple implementations co-exist without any one of them owning the definition.

What AgentReady defines

The v1 spec organizes the agentic surface into five sections. Each requirement carries a stable identifier (AR-DISC-01, AR-CAPA-01, and so on), a normative strength from BCP 14, and an applicability scope that says whether it applies to every product (baseline) or only to products that expose a particular surface (conditional).

  • Discoverability. Help agents find the product and the files they need to read. robots.txt with an AI policy, sitemap, llms.txt, llms-full.txt, HTTP Link headers.
  • Content for agents. Expose content in a form models can parse without running JavaScript. JSON-LD structured data, markdown content negotiation, /index.md, speakable markup.
  • Capabilities. Declare what the product can do for agents and how to call it. MCP over Streamable HTTP, MCP server-card, MCP Apps, A2A agent-card, OpenAPI, WebMCP, NLWeb, agent-skills.
  • Identity & Access. Prove who the agent is and give it scoped, revocable access. OAuth 2.0, OAuth Authorization Server Metadata, OAuth Protected Resource, PKCE S256, Web Bot Auth, API Catalog.
  • Commerce. Let agents pay, initiate checkouts, and complete purchases on behalf of their users. x402, ACP, ACP Delegate Payment, UCP, MPP. The entries in this section are alternative, interoperating approaches; a conformant commerce product implements at least one.

Every requirement references a public specification (an RFC where one exists, an open working document where it doesn’t), so a product can audit itself against the primary source rather than against a vendor’s interpretation.
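To make that structure concrete, here is a sketch of what a machine-readable requirement entry and a self-audit against it might look like. The field names, entry shape, and sample values are illustrative assumptions, not the actual spec.json schema:

```python
# Illustrative sketch only: these field names and values are assumptions
# about what a spec.json requirement entry might contain, not the real schema.
SAMPLE_REQUIREMENTS = [
    {
        "id": "AR-DISC-01",           # stable identifier (content hypothetical)
        "section": "Discoverability",
        "strength": "MUST",           # BCP 14 keyword
        "scope": "baseline",          # applies to every product
        "reference": "https://www.rfc-editor.org/rfc/rfc9309",  # robots.txt RFC
    },
    {
        "id": "AR-COMM-01",
        "section": "Commerce",
        "strength": "SHOULD",
        "scope": "conditional",       # only products that expose commerce
        "reference": "https://example.org/x402",  # placeholder URL
    },
]

def applicable(requirements, surfaces):
    """Return the requirements a product audits itself against, given the
    set of conditional surfaces it actually exposes."""
    return [
        r for r in requirements
        if r["scope"] == "baseline" or r["section"] in surfaces
    ]

# A product with no commerce surface audits only the baseline entries.
print([r["id"] for r in applicable(SAMPLE_REQUIREMENTS, surfaces=set())])
```

The baseline/conditional split is what keeps the spec honest for non-commerce products: a documentation site is never penalized for lacking a payment protocol it has no reason to implement.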


What it isn’t

AgentReady is not a scoring framework, a benchmark, or a badge. It does not say a product is “87 out of 100”, does not weight one section against another, and does not declare a passing threshold. The spec is explicit:

AgentReady names what to implement; it does not prescribe scoring, thresholds, or measurement methodology. Implementations decide their own scoring and weighting.

That separation is intentional. A standard that bundled in its own scoring would freeze one opinion about how to weight the sections, and would force any other implementer to either accept that opinion or fork the spec. Pulling scoring out lets multiple implementations rate the same surface differently while still agreeing on what the surface is.

Deep Scan is our reference implementation. It runs every AR-identifier through real agents, live protocol handshakes, and multi-turn task completions, and produces a 0-100 score with a letter grade. Other implementations that run the same requirements should converge on the same conformance answers; how they grade severity is up to them.
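That split can be shown with a toy example: two implementations reach the same conformance answers yet report different scores. The weights and arithmetic below are invented for illustration and are not Deep Scan's actual methodology:

```python
# Toy illustration of the spec/implementation split. The conformance
# answers are shared; the weighting schemes are invented here and are
# NOT anything a real implementation prescribes.
conformance = {              # the answers any conformant scanner should reach
    "AR-DISC-01": True,
    "AR-CAPA-01": True,
    "AR-IDEN-05": False,
}

def impl_a_score(results):
    """Implementation A: every requirement weighs the same."""
    return round(100 * sum(results.values()) / len(results))

def impl_b_score(results):
    """Implementation B: identity failures are weighted heavily."""
    weights = {"AR-DISC-01": 1, "AR-CAPA-01": 1, "AR-IDEN-05": 3}
    earned = sum(w for rid, w in weights.items() if results[rid])
    return round(100 * earned / sum(weights.values()))

print(impl_a_score(conformance))  # 67
print(impl_b_score(conformance))  # 40
```

Both scanners agree the product fails AR-IDEN-05; they disagree only on how much that failure should cost, which is exactly the opinion the standard declines to hold.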

Versioning and governance

The spec is published as a single page at agentready.org and as a machine-readable artifact at agentready.org/spec.json. Versioning is semantic and applied to normative commitments rather than editorial text. A MUST being raised, a requirement being removed, or a stable identifier being reassigned bumps the major version. A new requirement, a lowered strength, or a refreshed reference bumps the minor version. Stable identifiers (AR-DISC-01, AR-CAPA-01, AR-IDEN-05) are preserved across versions and are the canonical way to cite a requirement.
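The bump rules above can be sketched as a small decision function. The change-kind labels are our own paraphrase of the post's rules, and the assumption that editorial-only changes bump the patch component is ours, not stated by the spec:

```python
# Sketch of the semver rules described above. The change-kind names are
# our own labels, and patch bumps for editorial-only changes are an
# assumption, not something the spec states.
MAJOR = {"strength_raised", "requirement_removed", "identifier_reassigned"}
MINOR = {"requirement_added", "strength_lowered", "reference_refreshed"}

def bump(version, changes):
    """Return the next (major, minor, patch) given a set of change kinds."""
    major, minor, patch = version
    if MAJOR & changes:
        return (major + 1, 0, 0)
    if MINOR & changes:
        return (major, minor + 1, 0)
    return (major, minor, patch + 1)   # assumed: editorial-only change

print(bump((1, 0, 0), {"requirement_added"}))  # (1, 1, 0)
print(bump((1, 1, 0), {"strength_raised"}))    # (2, 0, 0)
```

The asymmetry mirrors semver's contract logic: loosening a requirement never breaks an existing conformant product, so it is minor; tightening one can, so it is major.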

The spec lives on GitHub under MIT. If you maintain a protocol, ship agent infrastructure, or build products that need to prove they work with agents, your comments and pull requests are wanted.

What’s next

The work ahead is twofold. The spec itself will keep absorbing the protocols that are landing now (the “emerging” tag in v1 marks the requirements most likely to firm up first), and the implementer ecosystem around it has room for many more scanners, badges, and procurement integrations than ours alone. If you’re building any of those, treat AgentReady as the contract and Deep Scan as one possible reading of it.

Discussion happens at agentready.org. The Deep Scan implementation is documented at /methodology.

Updated 2026-04-24: AgentReady v1.0.0 published. The spec artifact is now stable at agentready.org/spec.json; future changes follow the semver rules above.
