AI Readiness Reports

An AI Readiness report assesses how well a website’s content is structured for AI consumption — including search engines, chatbots, and large language models. As AI increasingly mediates how people find and interact with content, these reports help you understand where a site stands and what needs to improve.

The Five Dimensions

The AI Readiness report evaluates each page across five dimensions. Together, they paint a picture of how “machine-readable” and “AI-friendly” the site’s content is.

JSON-LD / Schema.org

Structured data markup tells search engines and AI systems what a page is about in a machine-readable format. The report checks whether pages include JSON-LD markup using Schema.org vocabulary and evaluates the quality and completeness of that markup.

Pages with good structured data are more likely to appear in rich search results and to be accurately understood by AI systems.
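The kind of detection described above can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not Content Chimera's actual implementation — a real crawler would use an HTML parser rather than a regular expression:

```python
import json
import re

def extract_json_ld(html: str) -> list[dict]:
    """Find and parse JSON-LD <script> blocks in a page (simplified sketch)."""
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    blocks = []
    for match in re.findall(pattern, html, re.DOTALL | re.IGNORECASE):
        try:
            blocks.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # malformed markup would count against the page
    return blocks

page = '''
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Article", "headline": "AI Readiness"}
</script>
</head></html>
'''
data = extract_json_ld(page)
print(data[0]["@type"])  # Article
```

A completeness check would then inspect the parsed objects — for example, whether an Article declares an author and a date — which is where markup quality scoring comes in.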

Semantic HTML

Semantic HTML refers to the proper use of landmark elements like <main>, <article>, <nav>, <header>, and <footer>. These elements give AI systems clear signals about which parts of a page contain the primary content versus navigation, sidebars, or boilerplate.

Sites that rely heavily on generic <div> elements make it harder for AI systems to distinguish the meaningful content from everything around it.
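One rough way to quantify this signal is to compare landmark elements against generic <div>s. A minimal sketch using Python's standard-library HTML parser (the real report's heuristics may differ):

```python
from html.parser import HTMLParser

SEMANTIC = {"main", "article", "nav", "header", "footer", "aside", "section"}

class LandmarkCounter(HTMLParser):
    """Count semantic landmark tags versus generic <div>s as a rough signal."""
    def __init__(self):
        super().__init__()
        self.semantic = 0
        self.divs = 0

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.semantic += 1
        elif tag == "div":
            self.divs += 1

counter = LandmarkCounter()
counter.feed("<body><header></header><main><article><div></div></article></main></body>")
print(counter.semantic, counter.divs)  # 3 1
```

A page dominated by <div>s (a high second number relative to the first) gives AI systems few structural cues to work with.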

Heading Structure

A valid heading hierarchy — H1 followed by H2, H3, and so on without skipping levels — helps both humans and machines understand how content is organized. The report checks whether pages follow a logical heading structure and flags common issues like missing H1 tags or headings that jump from H1 directly to H4.
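The hierarchy rule described above can be expressed as a small validator. A sketch of the idea, assuming the page's heading levels have already been extracted in document order:

```python
def heading_issues(levels: list[int]) -> list[str]:
    """Flag common heading-structure problems.

    `levels` is the sequence of heading levels as they appear on the page,
    e.g. [1, 2, 3, 2] for H1 -> H2 -> H3 -> H2.
    """
    issues = []
    if 1 not in levels:
        issues.append("missing H1")
    prev = 0
    for level in levels:
        if level > prev + 1:
            issues.append(f"skips from H{prev} to H{level}")
        prev = level
    return issues

print(heading_issues([1, 2, 3, 2]))  # [] -- valid hierarchy
print(heading_issues([1, 4]))        # ['skips from H1 to H4']
```

Note that moving back up the hierarchy (H3 to H2) is fine; only downward jumps that skip a level are flagged.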

FAQ Detection

FAQ sections are a specific content pattern that AI systems look for, especially for generating direct answers. The report detects whether pages contain FAQ sections and counts the number of questions found. Pages with well-structured FAQs are more likely to surface in AI-generated answers.
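A simple version of question counting might look like the following. This is a sketch of the pattern, not the report's actual detector, which may also rely on FAQPage structured data:

```python
import re

def count_faq_questions(texts: list[str]) -> int:
    """Rough FAQ signal: count headings or items phrased as questions."""
    question_words = re.compile(
        r"^(how|what|why|when|where|who|can|does|do|is|are)\b", re.IGNORECASE
    )
    return sum(
        1 for t in texts
        if t.strip().endswith("?") or question_words.match(t.strip())
    )

headings = ["What is AI readiness?", "Pricing", "How do I get started"]
print(count_faq_questions(headings))  # 2
```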

E-E-A-T Scores

E-E-A-T stands for Experience, Expertise, Authoritativeness, and Trustworthiness, each scored on a 1-to-5 scale by AI analysis. These scores evaluate the content quality signals on each page:

  • Experience — Does the content show first-hand experience with the topic?

  • Expertise — Does the author demonstrate deep knowledge?

  • Authoritativeness — Is the author or site recognized as authoritative on this topic?

  • Trustworthiness — Is the content accurate, transparent, and honest?

E-E-A-T scores are produced by running the content through an AI model with carefully designed prompts. The resulting scores are deterministic outputs stored in the database — they are not re-generated or fabricated when the report is assembled.
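Conceptually, each stored result is a per-page record with four bounded integer scores. The shape below is a hypothetical illustration — the field names and storage format are assumptions, not the actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EEATScore:
    """Hypothetical shape of a stored per-page E-E-A-T result (1-5 each)."""
    url: str
    experience: int
    expertise: int
    authoritativeness: int
    trustworthiness: int

    def __post_init__(self):
        for name in ("experience", "expertise",
                     "authoritativeness", "trustworthiness"):
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be 1-5, got {value}")

score = EEATScore("https://example.com/guide", 4, 5, 3, 4)
```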

Note

For more detail on how E-E-A-T scoring works and how to customize it, see the E-E-A-T Scoring section in AI Analysis with LLM Fields.

How to Generate an AI Readiness Report

AI Readiness reports are currently available through the MCP interface. The pipeline handles everything — crawling, extraction, AI scoring, and report generation — in a single run.

MCP

Ask your AI assistant to run an AI Readiness assessment:

“Run an AI Readiness report for https://example.com”

If the site has already been crawled and you want to regenerate the report without re-crawling:

“Re-run the AI Readiness report for this extent, skip the crawl”

Your AI assistant will ask for confirmation before proceeding, because the pipeline includes LLM analysis steps with associated cost.

Tool reference

Tool: run-long-pipeline with pipeline: "ai-readiness"
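Called programmatically over MCP rather than through an assistant, the request might be shaped as follows. Only the tool name and the pipeline argument come from this guide; the url argument is an assumed example:

```python
import json

# Hypothetical MCP tools/call request for the AI Readiness pipeline.
# Only "run-long-pipeline" and pipeline="ai-readiness" are documented;
# the "url" argument name is an assumption for illustration.
request = {
    "method": "tools/call",
    "params": {
        "name": "run-long-pipeline",
        "arguments": {
            "pipeline": "ai-readiness",
            "url": "https://example.com",
        },
    },
}
print(json.dumps(request, indent=2))
```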

Important

The AI Readiness pipeline always requires confirmation. The confirmation prompt tells you how many pages will be analyzed so you can assess the cost before proceeding.

The Automated Report

When the pipeline completes, it produces a multi-section report that includes:

  • Overall AI Readiness summary — A high-level assessment of the site’s readiness.

  • Dimension-by-dimension breakdown — Charts and analysis for each of the five dimensions, showing how the site performs and where gaps exist.

  • E-E-A-T score distributions — Charts showing how pages score across Experience, Expertise, Authoritativeness, and Trustworthiness, with breakdowns by folder or content area.

  • Actionable recommendations — Specific suggestions for improving AI readiness, prioritized by impact.

The E-E-A-T scores in the report are pulled directly from the stored analysis results. They are the same scores you would see if you queried the flattened table — the report simply presents them in context with supporting narrative.

Like all Content Chimera reports, the generated report is fully editable. You can add your own analysis, remove sections, or incorporate additional charts before sharing with a client.

Tracking Progress

MCP

Ask your AI assistant to check on progress:

“What’s the status of my AI Readiness report?”

Tool reference

Tool: job-status with the job_chain_id returned when the pipeline started.
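A status poll follows the same shape; the job_chain_id value below is a placeholder for the identifier returned when the pipeline started:

```python
# Hypothetical MCP tools/call request polling a running pipeline.
status_request = {
    "method": "tools/call",
    "params": {
        "name": "job-status",
        "arguments": {"job_chain_id": "<id returned at pipeline start>"},
    },
}
```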

When complete, the report appears in the Reports section of your extent.