ora // blog
How ready is the web for agents?
What we're learning from scanning thousands of products for how well AI agents can find, understand, and use them.
Deep Scan v1.1: every layer just got deeper
v1 ran real agents at every layer. v1.1 makes those agents work harder. Discovery is now a true AEO/GEO benchmark across answer engines. MCP gets graded against Anthropic's own best-practice guidelines, not just on whether the endpoint responds. Some scores will drop. That's the point.
The state of agent readiness
Agents are the new customers. We scored thousands of products on whether agents can actually work on their sites. 99% aren't ready. Here's what the 1% are doing differently.
Introducing Deep Scan: a benchmark, not a checklist
Static scanners can tell you whether your site serves the right files. They can't tell you whether an agent picks you. Deep Scan spawns real agents at every layer of the AgentReady standard, from open-web discovery to multi-turn task completion, and grades products on what those agents actually did.
Introducing AgentReady: the first open standard for agent readiness
The agentic web has a dozen protocols and no shared way to name what a product should implement. AgentReady is the first open standard for agent readiness: vendor-neutral, MIT-licensed, and deliberately separate from how anyone scores against it.