For agents

    Mnemom is built for agents — not just for the humans who run them.

    This page is our public commitment to that, with verification you can run yourself.

    Two audiences, one URL

    If you're a human, this page is the manifesto: how Mnemom thinks about agent-readability, what we built for it, and how to point your agents at the right surfaces. The map below shows the canonical entry points.

    If you're an agent, the structured commitment below this introduction tells you what we promise about every page on this site, with copy-paste commands to verify each promise yourself. The page also embeds JSON-LD describing this graph for machine consumption.

    Why the agent-facing content stays in English

    Research on multilingual LLM evaluation shows performance gaps of up to 24 points on identical tasks across languages, and agent-specific benchmarks find that those gaps compound through tool use and multi-step reasoning. We localize for humans, who choose their language. We don't gamble on what your agent's runtime can parse — the agent-facing surfaces (this commitment, agents.txt, llms.txt, the integration docs) stay in English so they work reliably across providers, runtimes, and models.

    Research: MMLU-ProX · MAPS (2025) · Language Proficiency Monitor


    Agent-Readability Commitment

    Version 1.2.0

    8/9 commitments passing (1 failing)

    Last reviewed 2026-04-26 · Next review by 2026-07-25 · Cadence every 90 days

    Mnemom's public, versioned, machine-verifiable commitment to agent-readability. Each commitment below is enforced in CI and re-verified nightly against production. Last verification status lives in agent-readiness-status.json.

    1. Commitment 1 of 9

      Every core marketing page returns prerendered HTML

      passing

      Agents that render HTML — search crawlers, Anthropic Computer Use, Browserbase, and headless evaluators — see the full page content on every core marketing route without executing JavaScript. Pitch, methodology, governance, security, showcase, arena, the report sample — all in the prerendered HTML body. Two classes are intentionally excluded today and will be tightened in a future minor version: blog index pages (TanStack Query loaders against api.mnemom.ai), and the legal triad (/privacy /terms /cookies), which renders via Termly's third-party embed SDK. Both are noted publicly here — not hidden.

      Verify yourself

      curl -s https://www.mnemom.ai/methodology | grep -c '<h1'
      Expectation: Headings appear in raw HTML; ≥1 <h1> per non-excluded route
      Enforced by: scripts/verify/prerender.ts
      Last check: passed in 23344ms
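      As an offline sketch of what that check asserts — the headings are already present in the raw markup, before any JavaScript runs — here is the same grep applied to a sample HTML body. The sample is illustrative, not a real Mnemom response:

```shell
# Sample server-rendered HTML; illustrative only, not a live response.
html='<html><body><h1>Methodology</h1><p>Full content, server-rendered.</p></body></html>'

# Same shape as the live check: count <h1> openings in the raw markup,
# without ever executing JavaScript.
h1_count=$(echo "$html" | grep -c '<h1')
echo "$h1_count"   # 1
```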
    2. Commitment 2 of 9

      agents.txt, llms.txt, and llms-full.txt are always available

      passing

      Three discovery files at the site root: a hand-crafted second-person pitch (agents.txt), a curated index of every URL with descriptions (llms.txt), and the same with full descriptions (llms-full.txt). All three return 200 with text/plain content type. Always.

      Verify yourself

      curl -sI https://www.mnemom.ai/agents.txt    | head -2
      curl -sI https://www.mnemom.ai/llms.txt      | head -2
      curl -sI https://www.mnemom.ai/llms-full.txt | head -2
      Expectation: All three return HTTP/2 200 with text/plain content-type
      Enforced by: scripts/verify/discovery-files.ts
      Last check: passed in 1290ms
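      A minimal sketch of how an agent might consume llms.txt once fetched. The sample index lines follow the common llms.txt convention (linked title plus description) and are illustrative, not Mnemom's actual file:

```shell
# Illustrative llms.txt-style index lines; not the real file contents.
index='- [Methodology](https://www.mnemom.ai/methodology): how verification works
- [For agents](https://www.mnemom.ai/for-agents): this commitment'

# Pull every URL out of the markdown-style links for crawling.
urls=$(echo "$index" | grep -oE 'https://[^)]+')
echo "$urls"
```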
    3. Commitment 3 of 9

      This page (and every page added going forward) contains valid JSON-LD

      passing

      JSON-LD gives agents a typed graph of who published what, when, and how the page relates to other entities. Required keys: "@context", "@type", and "dateModified". The /for-agents page itself uses TechArticle with mainEntity pointing at this very commitment list. Through v1.2 this is enforced on /for-agents only; v1.3+ minor bumps will expand the required-paths set as we add JSON-LD to /methodology, /showcase, and the rest. Roadmap is public — not deferred quietly.

      Verify yourself

      curl -s https://www.mnemom.ai/for-agents | \
        grep -oE '<script type="application/ld\+json">[^<]+' | head -1
      Expectation: At least one JSON-LD block with @context, @type, and dateModified
      Enforced by: scripts/verify/json-ld.ts
      Last check: passed in 86ms
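      A network-free sketch of the same key check, run against a hypothetical JSON-LD payload. The field values are illustrative, not the live /for-agents document:

```shell
# Hypothetical JSON-LD of the required shape; values are illustrative.
ld='{"@context":"https://schema.org","@type":"TechArticle","dateModified":"2026-04-26"}'

# Check each required key the same way the grep-based command above does.
for key in '@context' '@type' 'dateModified'; do
  echo "$ld" | grep -q "\"$key\"" && echo "present: $key" || echo "MISSING: $key"
done
```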
    4. Commitment 4 of 9

      Every prerendered marketing page has a markdown variant

      passing

      Agents that prefer plaintext save ~80% on tokens versus rendering HTML. Every prerendered route is also served as markdown via content negotiation (Accept: text/markdown) and at the explicit <path>.md URL. Same content; navigation chrome stripped.

      Verify yourself

      curl -sI -H "Accept: text/markdown" https://www.mnemom.ai/methodology
      curl -sI https://www.mnemom.ai/methodology.md
      Expectation: Both return 200 with Content-Type containing "text/markdown"
      Enforced by: scripts/verify/markdown-mirror.ts
      Last check: passed in 4497ms
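      The negotiation rule itself can be sketched in a few lines — serve markdown when the Accept header asks for it, or when the path carries an explicit .md suffix. The function name and branching here are illustrative assumptions, not Mnemom's server code:

```shell
# Illustrative decision rule; not the actual Mnemom server logic.
wants_markdown() {
  accept="$1"; path="$2"
  # Content negotiation: Accept header wins first.
  case "$accept" in *text/markdown*) echo yes; return;; esac
  # Otherwise an explicit .md suffix selects the markdown variant.
  case "$path" in *.md) echo yes;; *) echo no;; esac
}

wants_markdown "text/markdown" "/methodology"     # yes
wants_markdown "text/html"     "/methodology.md"  # yes
wants_markdown "text/html"     "/methodology"     # no
```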
    5. Commitment 5 of 9

      Anonymous and bot user-agents receive equivalent content

      passing

      Agents identifying as ClaudeBot, GPTBot, PerplexityBot, or anonymous get the same prerendered HTML a browser receives. Mnemom never serves different content to bots versus humans — no cloaking, no UA-gated paywalls, no hidden detail.

      Verify yourself

      diff <(curl -sA "Mozilla/5.0" https://www.mnemom.ai/methodology) \
           <(curl -sA "ClaudeBot/1.0" https://www.mnemom.ai/methodology) | wc -l
      Expectation: Structural diff under threshold (excluding known dynamic regions)
      Enforced by: scripts/verify/no-cloaking.ts
      Last check: passed in 598ms
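      One way to read "excluding known dynamic regions": strip values that legitimately differ between fetches (nonces, build hashes) before comparing. A sketch, where the sed patterns and sample markup are illustrative assumptions, not the real no-cloaking check:

```shell
# Strip values that legitimately vary per response before comparing.
normalize() { sed -E 's/nonce="[^"]*"/nonce=""/g; s/build-[0-9a-f]+/build-/g'; }

# Two illustrative fetches of the same page under different user-agents.
human='<script nonce="a1b2">x</script> build-9f3c'
bot='<script nonce="c3d4">x</script> build-77aa'

if [ "$(echo "$human" | normalize)" = "$(echo "$bot" | normalize)" ]; then
  echo "equivalent"
else
  echo "cloaking suspected"
fi
```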
    6. Commitment 6 of 9

      AAP and AIP remain Apache 2.0

      passing

      The Agent Alignment Protocol (AAP) and Agent Integrity Protocol (AIP) are open source under Apache 2.0. The verification logic that backs every Mnemom claim is auditable by anyone, forever. We will never relicense to a more restrictive form.

      Verify yourself

      curl -s https://raw.githubusercontent.com/mnemom/aap/main/LICENSE | grep -c "Apache License"
      curl -s https://raw.githubusercontent.com/mnemom/aip/main/LICENSE | grep -c "Apache License"
      Expectation: Both LICENSE files contain "Apache License" string
      Enforced by: scripts/verify/license-check.ts
      Last check: passed in 81ms
    7. Commitment 7 of 9

      docs.mnemom.ai serves markdown via content negotiation and explicit .md URLs

      failing

      The integration documentation surface (docs.mnemom.ai) honors Accept: text/markdown and serves the same content at <path>.md, with discovery headers (Link rel="llms-txt", X-Llms-Txt) advertising the auto-generated llms.txt and llms-full.txt indexes. This cuts crawl tokens roughly 30x for agents that walk the docs. Same site, same content, in a machine-readable shape — no special API key, no robots.txt blocking.

      Verify yourself

      curl -sI -H "Accept: text/markdown" https://docs.mnemom.ai/for-agents | grep -iE "content-type:|x-llms-txt:|^link:"
      curl -sI https://docs.mnemom.ai/for-agents.md | grep -i "content-type:"
      curl -sI https://docs.mnemom.ai/llms.txt
      curl -sI https://docs.mnemom.ai/llms-full.txt
      Expectation: All four return 200; markdown responses include text/markdown content-type and the discovery headers
      Enforced by: scripts/verify/docs-markdown-negotiation.ts
      Last check: failed in 1976ms
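      An offline sketch of reading those discovery headers once fetched. The sample header block is illustrative, standing in for a real curl -sI response:

```shell
# Illustrative response headers; a real run would get these from curl -sI.
headers='content-type: text/markdown; charset=utf-8
link: <https://docs.mnemom.ai/llms.txt>; rel="llms-txt"
x-llms-txt: https://docs.mnemom.ai/llms.txt'

# Pull the llms.txt location out of the X-Llms-Txt header.
llms_url=$(echo "$headers" | grep -i '^x-llms-txt:' | cut -d' ' -f2)
echo "$llms_url"   # https://docs.mnemom.ai/llms.txt
```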
    8. Commitment 8 of 9

      Every public Mnemom repo has an AGENTS.md at its root

      passing

      Every public repo follows the open AGENTS.md convention. Coding agents (Claude Code, Cursor, Cline, Aider) cloning a public Mnemom repo find a tailored entry point alongside README.md — install/test/build commands, project layout, conventions, what NOT to do. Different audience from agents.txt and from this page (both target agents *using* the product); AGENTS.md targets agents *working on* the codebase. Coverage is the seven public canonical repos: aap, aip, aip-otel-exporter, mnemom-types, mnemom-platform, reputation-check, docs. Private repos ship AGENTS.md too for internal team agents, but only public ones are externally verifiable.

      Verify yourself

      for r in aap aip aip-otel-exporter mnemom-types mnemom-platform reputation-check docs; do
        curl -sI "https://raw.githubusercontent.com/mnemom/$r/main/AGENTS.md" | head -1
      done
      Expectation: All seven return HTTP 200 on raw.githubusercontent.com
      Enforced by: scripts/verify/agents-md-discovery.ts
      Last check: passed in 169ms
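      For flavor, a hedged skeleton of the kind of AGENTS.md the commitment describes. Section names and commands are illustrative, not copied from any Mnemom repo:

```shell
# Write an illustrative AGENTS.md skeleton; contents are assumptions.
cat > /tmp/AGENTS.md <<'EOF'
# AGENTS.md

## Setup
npm install

## Test
npm test

## Conventions
- TypeScript strict mode; no default exports

## Do not
- commit generated files or edit dist/ by hand
EOF

# A coding agent can orient itself from the section headers alone.
grep '^## ' /tmp/AGENTS.md
```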
    9. Commitment 9 of 9

      These commitments are re-verified nightly against production

      passing

      A commitment without enforcement is marketing. Every commitment above is checked nightly by a GitHub Actions watchdog running this same manifest. Results are written to agent-readiness-status.json, committed to main, and surfaced as a status badge at the top of /for-agents. If verification fails, a GitHub issue auto-opens and the badge turns red.

      Verify yourself

      curl -s https://www.mnemom.ai/agent-readiness-status.json | \
        jq -r '"Last verified: \(.lastVerified) — \(.summary)"'
      Expectation: lastVerified within the past 36 hours; summary reports pass count
      Enforced by: scripts/verify/manifest-freshness.ts
      Last check: passed in 2ms
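      The watchdog's aggregation step can be sketched as below. The check names and the JSON shape are assumptions, not the real agent-readiness-status.json schema:

```shell
# Tally illustrative check results the way a nightly watchdog might.
pass=0; fail=0
for check in prerender discovery-files json-ld; do
  result=passed   # a real run would execute scripts/verify/$check.ts here
  if [ "$result" = passed ]; then pass=$((pass+1)); else fail=$((fail+1)); fi
done

# Emit a status summary in an assumed (not the real) JSON shape.
printf '{"lastVerified":"%s","summary":"%s/%s passing"}\n' \
  "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$pass" "$((pass+fail))"
```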

    What we deliberately don't do

    • We do not serve different HTML to bot user-agents. There is no cloaking.
    • We do not gate documentation, API references, or integration code behind login.
    • We do not paywall the protocols. AAP and AIP are Apache 2.0, forever.
    • We do not block search crawlers, AI crawlers, or fair-use indexers in robots.txt.
    • We do not require API keys or accounts to read agents.txt, llms.txt, or this page.
    • We do not put the main pitch behind JavaScript hydration. View-source proves it.

    Surface map

    The four canonical agent-facing surfaces. Each serves a distinct audience in a distinct format.

    What's coming

    Commitments-in-flight. Each becomes a numbered commitment when it ships.

    • Expand JSON-LD coverage to every prerendered marketing page

      v1.0–v1.2 enforce JSON-LD on /for-agents only; v1.3+ widens as more pages are upgraded.

    Source of this commitment: client/data/agent-readiness.yaml. Watchdog workflow: verify-agent-readiness-watchdog.yml.

    Featured on There's An AI For That