AI Analysis with LLM Fields
===========================
LLM prompt fields let you use AI to analyze every page on your site systematically. You
define questions (prompts), and Content Chimera sends each page's content to an AI model
along with those prompts, stores the answers, and makes them available as chartable data
fields. This turns
unstructured page content into structured, comparable data — so you can chart, filter, and
report on things like content quality, topic classification, or audience suitability across
your entire site.
.. contents:: On this page
:local:
:depth: 2
What Are Fieldsets and Fields?
------------------------------
LLM analysis is organized into two levels: **fieldsets** and **fields**.
A **fieldset** is a group of related prompts that are run together. Think of it as a
scoring rubric or a classification scheme. For example, you might have a fieldset called
"Content Classification" or "E-E-A-T Scoring."
A **field** is an individual question within a fieldset. Each field defines a specific
prompt that will be sent to the AI model along with each page's content. For example,
within an "E-E-A-T Scoring" fieldset, you might have fields for "Expertise Score,"
"Authoritativeness Score," and so on.
Fields have a **result type** that controls what kind of answer the AI returns:
- **String** — A text answer (e.g., "Blog Post," "Product Page," "FAQ")
- **Number** — A numeric value (e.g., a score from 1 to 5)
- **Structured** — A JSON object for more complex responses
Fields can also have **constrained values** — a predefined list of allowed answers. This
works like a dropdown menu: you tell the AI it must choose from specific options (like
"Blog Post," "Product Page," "Landing Page," "Support Article"). Constraining values
makes the results more consistent and easier to chart.
Each field can be **required** or optional, and can allow **single or multiple responses**
(for example, a page might belong to more than one topic category).
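As a sketch of how these pieces fit together, a field could be pictured as a small record like the one below. The key names here are illustrative assumptions, not Content Chimera's actual schema; the result-type codes (1 = String, 2 = Number, 3 = Structured) follow the tool reference on this page.

```python
# Illustrative sketch only: these key names are NOT Content Chimera's actual
# field schema, just a way to picture what a field definition holds.
RESULT_STRING, RESULT_NUMBER, RESULT_STRUCTURED = 1, 2, 3  # codes per the tool reference

content_type_field = {
    "name": "Content Type",
    "prompt": "Classify this page. Choose exactly one of the allowed values.",
    "result_type": RESULT_STRING,
    "required": True,
    "multiple": False,  # a page gets exactly one classification
    "allowed_values": [  # constrained values keep answers chartable
        "Blog Post", "Product Page", "Landing Page", "Support Article",
    ],
}

expertise_field = {
    "name": "Expertise Score",
    "prompt": "Rate this page's expertise from 1 (no expertise demonstrated) "
              "to 5 (deep subject-matter expertise with citations).",
    "result_type": RESULT_NUMBER,
    "required": True,
    "multiple": False,
}
```

Constraining the string field to four values is what makes its results behave like a dropdown when you later chart or filter on them.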
Templates
---------
Content Chimera includes built-in **fieldset templates** — pre-configured sets of prompts
designed for common analysis tasks. Templates save you from writing prompts from scratch.
Available templates include:
- **E-E-A-T Scoring** — Rates each page on Google's content quality dimensions
(see :ref:`eeat-scoring` below)
- **Content Classification** — Categorizes pages by type and purpose
- **AI Readiness** — Evaluates how well content is structured for AI consumption
To use a template, copy it to your extent. The copy becomes your own version that you can
customize — change prompts, add or remove fields, or adjust the allowed values.
.. admonition:: Web UI
:class: tip
Go to **LLM Fieldsets** from the extent menu. You'll see a list of available templates
alongside any fieldsets you've already created. Click **Copy** on a template to add it
to your extent.
.. admonition:: MCP
:class: tip
Ask your AI assistant to show you the available templates and copy one to your project:
*"What LLM fieldset templates are available?"*
*"Copy the E-E-A-T Scoring template to this extent"*
.. admonition:: Tool reference
   :class: note

   Tool: ``llm-fieldsets``

   - ``action_type: "list"`` — list templates
   - ``action_type: "copy"`` — copy a template
   - ``source_fieldset_id: 56`` — which template to copy
   - ``extent_id: 1234``
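The copy action's parameters can be pictured as a payload like the one below. The tool name, ``action_type``, and both IDs come from the tool reference above; how your MCP client actually dispatches the call varies, so treat this as a sketch of the shape, not a literal API call.

```python
import json

# Payload sketch for copying a template fieldset. Parameter names are taken
# from the tool reference; the transport/dispatch mechanism is up to your
# MCP client and is not shown here.
copy_request = {
    "tool": "llm-fieldsets",
    "action_type": "copy",
    "source_fieldset_id": 56,  # which template to copy
    "extent_id": 1234,
}

print(json.dumps(copy_request, indent=2))
```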
Creating Custom Fields
----------------------
If the templates don't cover what you need, you can create your own fieldsets and fields
from scratch.
When writing prompts, keep these principles in mind:
- **Be specific.** "Rate this page's expertise on a scale of 1-5" works better than
"How expert is this?"
- **Define the scale.** If you want a number, explain what each value means (e.g.,
"1 = no expertise demonstrated, 5 = deep subject-matter expertise with citations").
- **Use constrained values when possible.** Giving the AI a fixed list of options
produces more consistent, chartable results.
- **Test before bulk-running.** Always test your prompts on a few representative pages
before running them across the full site (see the next section).
.. admonition:: Web UI
:class: tip
1. Go to **LLM Fieldsets** and click **Create Fieldset**
2. Give it a name and optional instructions (context provided to the AI for all fields
in this fieldset)
3. Add fields one at a time — each with a name, prompt, result type, and optional
value constraints
4. Choose the AI model to use and whether to send the full HTML or simplified text
.. admonition:: MCP
:class: tip
Ask your AI assistant to create a fieldset and add fields to it. Describe what you want
each field to do:
*"Create an LLM fieldset called Content Classification"*
*"Add a field called Content Type that classifies each page as Blog Post, Product
Page, Landing Page, Support Article, or Other"*
Your AI assistant will handle the technical details — setting up the prompt, result
type, and allowed values.
.. admonition:: Tool reference
   :class: note

   Tools: ``llm-fieldsets`` and ``llm-fields``

   Result types: ``1`` = String, ``2`` = Number, ``3`` = Structured
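To make the two tools concrete, here is a hedged sketch of what the example prompts above might translate into. The tool names and result-type codes come from the tool reference; the ``action_type`` value and the other key names are illustrative assumptions.

```python
# Sketch only: tool names and result-type codes are from the tool reference;
# "create" and the remaining key names are illustrative assumptions.
create_fieldset = {
    "tool": "llm-fieldsets",
    "action_type": "create",  # assumed action name
    "name": "Content Classification",
}

create_field = {
    "tool": "llm-fields",
    "name": "Content Type",
    "result_type": 1,  # 1 = String
    "allowed_values": [
        "Blog Post", "Product Page", "Landing Page",
        "Support Article", "Other",
    ],
}
```

In practice your AI assistant builds these calls for you; the point of the sketch is that each conversational instruction maps to one structured tool call.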
Testing Before Running
-----------------------
Before running your fieldset across every page on the site, test it against a single URL.
This lets you see exactly what the AI returns and refine your prompts without waiting for
a full bulk run.
.. admonition:: Web UI
:class: tip
From the **LLM Fieldsets** page, open your fieldset and click the **Test** button.
Enter a URL and Content Chimera will fetch the page, run all fields against it, and
show you the results.
.. admonition:: MCP
:class: tip
Ask your AI assistant to test your fieldset against a sample page:
*"Test this fieldset against https://example.com/blog/sample-post"*
The response shows the AI's answer for each field, so you can verify the prompts
produce useful results before committing to a bulk run.
.. admonition:: Tool reference
   :class: note

   Tool: ``llm-fieldsets`` with ``action_type: "test_example_url"``
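As a sketch, the test action might carry a payload like this. The tool name and ``action_type`` are from the tool reference; the ``url`` parameter name is an assumption for illustration.

```python
# Payload sketch for testing a fieldset against a single page before any
# bulk run. The "url" key name is assumed, not documented here.
test_request = {
    "tool": "llm-fieldsets",
    "action_type": "test_example_url",  # from the tool reference
    "url": "https://example.com/blog/sample-post",
}
```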
Running Bulk Summarization
--------------------------
Once your fieldsets are configured and tested, you can run them across all pages in your
crawl. This is called **bulk summarization** — Content Chimera sends every page's content
to the AI model and stores the results.
.. important::
Bulk summarization sends each page to an AI model, which has an associated cost. You
will always be asked to confirm before the run starts. The confirmation tells you how
many pages will be processed so you can estimate the cost.
Before running, make sure your fieldsets are attached to a **summary definition** — this
tells Content Chimera which fieldsets to include in the bulk run. You can have multiple
fieldsets active at once.
.. admonition:: Web UI
:class: tip
1. Confirm your fieldsets are listed under the active **Summary Definition** in your
extent settings
2. Go to **My History** and choose **Run Summarization** from the actions menu
3. Review the confirmation prompt showing the number of pages and fieldsets
4. Click **Confirm** to start the run
.. admonition:: MCP
:class: tip
Ask your AI assistant to run bulk summarization:
*"Run LLM summarization for this extent"*
Your AI assistant will check which fieldsets are configured, show you a confirmation
with the estimated scope, and proceed only after you approve.
.. admonition:: Tool reference
   :class: note

   Tools: ``summary-definition`` (check configuration) and
   ``run-focused-pipeline`` with ``pipeline: "summarize"``
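The check-then-run sequence can be sketched as two payloads. Only the tool names and the ``pipeline`` value come from the tool reference; everything else here is illustrative.

```python
# Sketch of the two-step bulk summarization flow: verify which fieldsets are
# attached to the summary definition, then trigger the run. Key names beyond
# "tool" and "pipeline" are illustrative assumptions.
check_request = {
    "tool": "summary-definition",  # confirm fieldsets are attached
}

run_request = {
    "tool": "run-focused-pipeline",
    "pipeline": "summarize",  # from the tool reference
}
```

Remember that the run step always shows a confirmation with the page count first, so the second call only proceeds after you approve the estimated scope.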
Using the Results
-----------------
After bulk summarization completes, every field value from every page becomes a column in
your **flattened table**. This means you can use them just like any other data field in
Content Chimera:
- **Chart distributions** — How many pages scored 4 or 5 on Expertise? What percentage
are classified as "Blog Post"?
- **Filter and sort** — Show only pages with a low Trustworthiness score, or focus on
pages the AI classified as "Landing Page."
- **Use in rules** — Create rules that act on AI-generated fields (e.g., flag all pages
with an Expertise score below 3).
- **Include in reports** — Add charts based on LLM fields to your Chimera reports.
- **Cross-reference with other data** — Combine AI scores with analytics data, crawl
depth, or word count for deeper analysis.
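Because field values land in the flattened table as ordinary columns, they also lend themselves to quick scripting outside the app. The sketch below assumes you can export the flattened table as CSV, and the column names are illustrative; it mirrors the "Expertise score below 3" rule mentioned above.

```python
import csv
import io

# Assumed CSV export of the flattened table; the column names below are
# illustrative, not Content Chimera's exact headers.
sample = """url,Content Type,Expertise Score
https://example.com/a,Blog Post,2
https://example.com/b,Product Page,5
https://example.com/c,Blog Post,1
"""

# Collect pages whose AI-assigned Expertise score falls below 3.
low_expertise = [
    row["url"]
    for row in csv.DictReader(io.StringIO(sample))
    if int(row["Expertise Score"]) < 3
]

print(low_expertise)  # -> ['https://example.com/a', 'https://example.com/c']
```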
.. admonition:: Chimera Chat
:class: tip
Once summarization is complete, you can ask questions about the results:
- *"What's the average Expertise score across the site?"*
- *"Show me a chart of Content Type distribution"*
- *"Which pages scored below 3 on Trustworthiness?"*
.. admonition:: MCP
:class: tip
Ask your AI assistant to query or chart the results, just as you would with any other
data field. For example:
*"Show me a chart of Content Type distribution"*
*"What fields are available from the LLM analysis?"*
.. _eeat-scoring:
E-E-A-T Scoring
----------------
E-E-A-T stands for **Experience, Expertise, Authoritativeness, and Trustworthiness**. It
is the framework Google uses to evaluate content quality. Pages that score well on these
dimensions tend to perform better in search results, especially for topics where accuracy
matters (health, finance, legal, etc.).
The four dimensions:
- **Experience** — Does the content demonstrate first-hand experience with the topic?
- **Expertise** — Does the author show deep knowledge and competence?
- **Authoritativeness** — Is the author or site recognized as a go-to source on this
topic?
- **Trustworthiness** — Is the content accurate, transparent, and honest?
Content Chimera includes a **built-in E-E-A-T fieldset template** that scores each page
on all four dimensions using a 1-5 scale. The AI reads each page and assigns scores based
on signals in the content — things like cited sources, author credentials, specific
examples, and balanced presentation.
**How to use it:**
1. Copy the E-E-A-T template to your extent (see the Templates section above)
2. Optionally customize the prompts or scale
3. Test against a few representative pages
4. Run bulk summarization
5. Chart the results to identify content quality gaps — for example, pages with high
Expertise but low Trustworthiness might need better source citations
E-E-A-T scoring is particularly useful for:
- **Content audits** — Finding pages that fall below a quality threshold
- **Prioritizing improvements** — Focusing editorial effort where scores are lowest
- **Client reporting** — Showing stakeholders a data-driven view of content quality
- **AI Readiness assessments** — E-E-A-T scores are one component of the broader AI
Readiness pipeline