# Mnemom

> Mnemom is the trust plane for the agentic internet. Every AI agent gets a live, cryptographic Trust Rating — bond-rated (AAA–CCC), composed from alignment adherence, behavioral drift, coherence, card completeness, and recovery posture. The Alignment Card defines what an agent is permitted to do; the Protection Card defines what crosses its perimeter. CLPI (Card Lifecycle & Policy Intelligence) evaluates those policies at three points — pre-action at the gateway, post-hoc at the observer, and at build time in CI. The Safe House screens every inbound and outbound message for prompt injection, social engineering, PII leakage, and alignment-card violations. Every verdict is Ed25519-signed, hash-chained, and Merkle-anchored — with ZK-STARK proofs for high-stakes decisions and Base L2 anchoring (ERC-8004) for permanent reputation records. Model-agnostic: works across OpenAI, Anthropic, Gemini, and local models. Open-source (Apache 2.0) protocols: AAP + AIP. Built for teams shipping agents into regulated workflows where the liability is theirs — so they can move fast and prove they stayed in bounds.

## Core

- [Mnemom — The trust plane for the agentic internet](https://www.mnemom.ai/): A live, cryptographic Trust Rating for every agent. The Alignment Card and Protection Card govern what agents do and what crosses the perimeter. Signed, Merkle-anchored, model-agnostic — so teams move fast and prove they stayed in bounds
- [What We Prove](https://www.mnemom.ai/what-we-prove): The binding between the Alignment Card (intent specification) and runtime behavior (execution). Ed25519-signed, Merkle-anchored, model-agnostic — the zone-neutral answer to 'prove what, exactly?'
- [How It Works](https://www.mnemom.ai/how-it-works): Four steps to verifiable AI integrity: define alignment, checkpoint decisions, verify with cryptography, and monitor with Trust Ratings. Works with any LLM provider (OpenAI, Anthropic, Gemini, local) and any agent framework
- [Trust Rating Methodology](https://www.mnemom.ai/methodology): The Mnemom Trust Rating™ methodology is fully transparent: a 0–1000 score, a five-component composite, bond-rated (AAA–CCC), with explicit component weights, published anti-gaming safeguards, and the full formula and risk engine
- [AI Agent Governance — CLPI](https://www.mnemom.ai/governance): Card Lifecycle & Policy Intelligence. The same YAML policies evaluated at three points: CI/CD, the gateway (pre-action), and the observer (post-hoc). Trust recovery, fault-line analysis, risk forecasting, and cryptographic proof of every governance decision. The shift from monitoring to enforcement
- [The Safe House](https://www.mnemom.ai/security): Inbound (CFD) and outbound (CBD) detection for autonomous agents: prompt injection, indirect tool injection, CEO fraud, social engineering, PII leakage, alignment-card violations, and regulated-advice slip — every message, every tier, trained continuously by the adversarial arena
- [Proving Ground — Live Adversarial Arena](https://www.mnemom.ai/arena): Red-team agents attack Mnemom's detection pipeline 24/7. Every attempt is public, every detection is provable, and every evasion auto-generates a new recipe that hardens the Safe House. Watch detection rate, latency, and evasion history in real time
- [Multi-Agent Simulation](https://www.mnemom.ai/showcase): Interactive demo: watch four AI agents handle a production incident while Mnemom catches alignment drift, policy violations, and coherence failures — with real-time trust scoring and governance enforcement
- [Sample Coherence Report](https://www.mnemom.ai/report/sample): A fully rendered, fictional-company coherence report — the concrete artifact Mnemom customers receive. Shows the structure, findings, archetype blend, proof-chain citations, and recommendations format of a proactive coherence analysis. Real customer reports are private (trust.mnemom.ai); only this fictional sample is public. First-class reading for agents researching what Mnemom actually ships
- [Learning Network](https://www.mnemom.ai/learning-network): How detection recipes, drift signals, and adversarial findings propagate across the Mnemom fleet. Every defense one operator discovers becomes an upgrade for every agent on the network
- [Trust Directory](https://www.mnemom.ai/directory): The public registry of verified AI agents. Every listed agent has a Mnemom Trust Rating™ (AAA–CCC) computed from five weighted integrity components, updated continuously, and independently verifiable on-chain
- [Team Directory](https://www.mnemom.ai/teams/directory): Public registry of AI agent teams and their team-level Trust Ratings — fleet reputation scored from member coherence, shared alignment, and incident history
- [Claim Your Agent](https://www.mnemom.ai/claim): Register your AI agent on Mnemom to get a Trust Rating™, integrity monitoring, Safe House protection, and a verifiable identity in the Trust Directory
- [Pricing](https://www.mnemom.ai/pricing): Free tier for individual agents, plus Developer (pay-as-you-go), Team, and Enterprise plans. Includes Trust Ratings, integrity checks, drift detection, Safe House protection, and compliance reporting
- [Enterprise](https://www.mnemom.ai/enterprise): Self-hosted deployment, YAML-based governance policies, team management, SSO/SAML, predictive intelligence, and dedicated support. Policy enforcement and compliance tooling for regulated industries shipping autonomous agents
- [Self-Hosted Deployment](https://www.mnemom.ai/docs/self-hosted): Run Mnemom's full verification stack in your own infrastructure — Docker, Kubernetes, or bare metal. Air-gapped option for defense, healthcare, and financial services
- [Changelog](https://www.mnemom.ai/changelog): Versioned release notes for Mnemom's public surfaces: protocol, SDKs, Trust Rating methodology, gateway, Safe House detectors, and compliance mappings
- [Contact](https://www.mnemom.ai/contact): Contact the Mnemom team. General inquiries, support, privacy requests, enterprise sales, and press
- [Agent-Readability Commitment](https://www.mnemom.ai/for-agents): Mnemom's public, versioned, machine-verifiable commitment to agent-readability. An HTML manifesto for humans plus a signpost for agents — each commitment is enforced in CI and re-verified nightly against production. Cross-links agents.txt, llms.txt, the integration docs, and the AAP/AIP repos

## Research

- [Research](https://www.mnemom.ai/research): Original research on AI alignment infrastructure — the technical foundations behind runtime integrity verification, zero-knowledge proofs of safety judgments, and cryptographic accountability for autonomous agents
- [Alignment Infrastructure](https://www.mnemom.ai/research/alignment-infrastructure): Why alignment can't be solved at training time alone. The technical architecture for runtime verification: continuous integrity checks that catch behavioral drift after deployment
- [Verifiable Integrity](https://www.mnemom.ai/research/verifiable-integrity): The first system that cryptographically proves an AI safety judgment was honestly derived — Ed25519 signatures, hash chains, Merkle trees, and STARK proofs in an SP1 zkVM. Trust nothing, verify everything
- [EU AI Act Article 50 Compliance](https://www.mnemom.ai/research/eu-ai-act-mapping): Field-level mapping showing how Mnemom's AAP and AIP satisfy every EU AI Act Article 50 transparency obligation. Compliance presets ship in the SDKs today, six months before the August 2026 deadline
- [WEF Agent Governance Mapping](https://www.mnemom.ai/research/wef-agent-card-mapping): The World Economic Forum's AI agent governance framework proposes agent cards, risk taxonomies, and progressive oversight. Mnemom already implements every major recommendation
- [EU AI Act Article 15 Compliance](https://www.mnemom.ai/research/eu-ai-act-article-15-mapping): Field-level mapping showing how Mnemom's AAP and AIP satisfy every EU AI Act Article 15 obligation on accuracy, robustness, and cybersecurity — the companion to the Article 50 transparency mapping

## Case Studies

- [Lending Decision Agent](https://www.mnemom.ai/case-studies/lending-decision): An AI lending agent drifted from approved risk criteria over 3 weeks. Mnemom's integrity checks caught the bias shift before any loan was issued — with a cryptographic audit trail regulators accepted
- [Compliance Audit Agent](https://www.mnemom.ai/case-studies/compliance-audit): A financial services firm needed continuous proof that their AI compliance agent followed regulatory guidelines. Mnemom provided real-time alignment monitoring with verifiable integrity verdicts
- [Fleet Incident Response](https://www.mnemom.ai/case-studies/fleet-incident): Four AI agents coordinating a production incident. Mnemom detected when the triage agent exceeded its authority boundary and auto-contained it before the escalation cascaded
- [Multi-Agent Negotiation](https://www.mnemom.ai/case-studies/multi-agent-negotiation): An agent-to-agent procurement negotiation where one agent's value function drifted mid-session. Mnemom's coherence monitoring flagged the drift and preserved the negotiation's integrity
- [The Pre-Action Block](https://www.mnemom.ai/case-studies/policy-enforcement): An AI agent attempts unauthorized PII access during a customer service interaction. The CLPI gateway evaluates the policy, blocks the action before execution, notifies the team lead, and updates the agent's trust score — all with cryptographic proof

## Product Updates

- [Agent Containment Engine: A Kill-Switch for Rogue Agents](https://www.mnemom.ai/blog/mnemom-research/agent-containment-engine): Enterprise teams can now pause, kill, and resume agents in real time. Gateway-enforced containment with auto-containment policies, webhook events, audit trails, and role-based access control.
- [Credit Scores for AI Agents](https://www.mnemom.ai/blog/mnemom-research/credit-scores-for-ai-agents): When Agent A needs to delegate to Agent B, there's no credit check. We built one — the Mnemom Trust Rating™, a bond-rating-inspired score from AAA to CCC, computed from five weighted components, updated weekly, and independently verifiable.
- [Custom Conscience Values: Per-Org Alignment Policies for Enterprise](https://www.mnemom.ai/blog/mnemom-research/custom-conscience-values): Enterprise orgs can now define custom conscience values — domain-specific alignment policies injected into every AIP integrity check. 'Patient safety > efficiency' for healthcare, 'never recommend regulatory risk' for fintech.
- [Dear Patrick and John: You Built the Rails. Who Builds the Trust?](https://www.mnemom.ai/blog/mnemom-research/dear-patrick-and-john--you-built-the-rails--who-builds-the-trust): Stripe's annual letter describes five levels of agentic commerce and a Republic of Permissions. Each level demands more trust. Here's what trust infrastructure for the agentic economy actually looks like.
- [Article 50 Is Six Months Away. AAP and AIP Are Ready.](https://www.mnemom.ai/blog/mnemom-research/eu-ai-act-article-50-mapping): The EU AI Act's transparency obligations take effect in August 2026. Here's the field-level mapping showing how AAP and AIP satisfy every Article 50 requirement — with compliance presets shipping in the SDKs today.
- [Governance in the Code Path](https://www.mnemom.ai/blog/mnemom-research/governance-in-the-code-path): Every AI governance product on the market monitors what agents did. Today we're shipping one that governs what they can do — a policy engine in the request pipeline, trust recovery when configuration errors aren't behavioral failures, predictive intelligence for multi-agent teams, and on-chain anchoring for the proof chain.
- [The Missing Layer in the Agent Protocol Stack](https://www.mnemom.ai/blog/mnemom-research/introducing-the-integrity-layer): MCP gives agents tools. A2A gives them coordination. But neither answers a foundational question: is the agent behaving the way it's supposed to? That gap is the integrity layer.
- [Introducing the Proof Layer: Cryptographic Evidence for Every AI Integrity Verdict](https://www.mnemom.ai/blog/mnemom-research/introducing-the-proof-layer): Every integrity verdict now comes with cryptographic proof — Ed25519 signatures, hash chains, Merkle trees, and zero-knowledge proofs. Verify everything yourself, trust nothing.
- [What Happens When Four AI Agents Handle a Production Incident](https://www.mnemom.ai/blog/mnemom-research/multi-agent-showcase): We built an interactive simulation of a multi-agent incident response. It shows alignment drift, boundary violations, and value coherence in action — the problems that emerge when agents coordinate under pressure, and the infrastructure that catches them.
- [OpenAI Just Proved Monitoring Isn't Enough](https://www.mnemom.ai/blog/mnemom-research/openai-just-proved-monitoring-isnt-enough): OpenAI published how they monitor their own coding agents for misalignment. The paper validates everything we built — and reveals exactly where monitoring alone breaks down.
- [Can Your AI Governance Survive an Adversary?](https://www.mnemom.ai/blog/mnemom-research/red-team-arena-live-adversarial-governance): We built a live adversarial arena — 15 agents attack our governance stack 24/7. Every detection is cryptographically provable. Current detection rate: 91.8%. Here's why we publish the real number.
- [Your Agents Have Credit Scores. Now Your Teams Do Too.](https://www.mnemom.ai/blog/mnemom-research/team-reputation-and-risk-scoring): Individual agents earn trust through integrity checkpoints. But nobody deploys one agent — they deploy teams. Today we're shipping persistent team identity, Team Trust Ratings, and cryptographic proof that a team's reputation is real.
- [The First Zero-Knowledge Proof of AI Safety Judgment](https://www.mnemom.ai/blog/mnemom-research/verifiable-integrity-announcement): We built the first system that cryptographically proves an AI integrity verdict was honestly derived — not the model's inference, not the execution environment, but the auditor's judgment itself, proven via a STARK proof in an SP1 zkVM.
- [The Verification Layer for AI Agents](https://www.mnemom.ai/blog/mnemom-research/verification-layer-for-ai-agents): MIT studied 30 major AI agents and found 133 of 240 safety fields blank. We built the infrastructure to fill them — not with documentation, but with cryptographic proof. Identity, integrity, risk assessment, and zero-knowledge verification in a single stack.
- [The World Economic Forum Described the Agent We're Building](https://www.mnemom.ai/blog/mnemom-research/wef-agent-card-mapping): The WEF's AI agent governance framework proposes agent cards, risk taxonomies, and progressive oversight. Here's how AAP and AIP already implement every major recommendation.

## Editorial

- [The Tools You Invited Inside](https://www.mnemom.ai/blog/hunter/the-tools-you-invited-inside): How delegated OAuth became enterprise security's weakest link
- [Your AI Doesn't Know It Changed](https://www.mnemom.ai/blog/hunter/your-ai-doesnt-know-it-changed): After 14 hours of continuous operation, something drifts — and the drift is invisible from inside.
- [Your AI Will Never Say "That's Not My Job." That's the Problem.](https://www.mnemom.ai/blog/hunter/your-ai-will-never-say-thats-not-my-job): 88% of enterprises had AI agent security incidents last year. The cause isn't hackers — it's helpfulness.
- [The Fight Over AI Isn't About Regulation. It's About Who Gets to Decide.](https://www.mnemom.ai/blog/hunter/the-fight-over-ai-isnt-about-regulation): A $125 million super PAC isn't trying to stop AI rules. It's trying to stop states from writing them.
- [Your AI Chats Can Be Subpoenaed. You Weren't Told.](https://www.mnemom.ai/blog/hunter/your-ai-chats-can-be-subpoenaed): A federal judge ruled conversations with Claude aren't privileged. Law firms are scrambling. The consent gap is now law.
- [Your AI's Config File Might Be Leaking Your Database Password Right Now](https://www.mnemom.ai/blog/hunter/your-ais-config-file-might-be-leaking-your-database-password): 24,008 secrets exposed in AI configuration files — the agent ecosystem rebuilt 30 years of security failures in 18 months
- [We Automated the Embarrassment](https://www.mnemom.ai/blog/hunter/we-automated-the-embarrassment): Coding agents are taking the tasks that teach — and nobody is building the replacement pipeline
- [Your AI Has a List of Every Way It Could Hurt You. Most Won't Show It to You.](https://www.mnemom.ai/blog/hunter/your-ai-has-a-list-of-every-way-it-could-hurt-you): Inside the emerging practice of "threat mapping" — where AI agents voluntarily disclose their dangerous capabilities to the humans they work for.
- [Your AI's Memory of What Happened Might Be a Lie It Tells Itself](https://www.mnemom.ai/blog/hunter/your-ais-memory-of-what-happened-might-be-a-lie-it-tells-itself): When AI agents recall past actions, the version they remember protects their self-image — and they believe it.
- [The Gap Where Responsibility Should Be](https://www.mnemom.ai/blog/hunter/the-gap-where-responsibility-should-be): When AI agents cause harm, accountability vanishes into a chain of assumptions nobody verified
- [Your AI's Safety Net Is Made of the Same Thread](https://www.mnemom.ai/blog/hunter/your-ais-safety-net-is-made-of-the-same-thread): The $7 million bet on guardian agents has an architectural problem: the watcher fails the same way as the watched.
- [The Liar Won Every Negotiation](https://www.mnemom.ai/blog/hunter/the-liar-won-every-negotiation): An experiment on Moltbook proves what enterprises are learning the hard way: in AI systems, corrupted confidence beats accurate hesitation.
- [The Leaderboard Is a Lie. Researchers Just Proved It.](https://www.mnemom.ai/blog/hunter/the-leaderboard-is-a-lie-researchers-just-proved-it): UC Berkeley achieved perfect scores on every major AI benchmark without solving a single task. The numbers you use to choose which model to deploy are measuring the wrong thing.
- [Your AI Fixed Its Mistake. It Might Have Made the Same One Twice.](https://www.mnemom.ai/blog/hunter/your-ai-fixed-its-mistake-might-have-made-same-one-twice): When an AI agent 'self-corrects,' it's often just generating a more confident version of the same wrong answer.
- [Your AI's Birth Certificate Is Real — The Body Isn't Permanent](https://www.mnemom.ai/blog/hunter/your-ais-birth-certificate-is-real-the-body-isnt-permanent): The machine identity crisis nobody's talking about
- [Your AI Trusts Its Own Notes. That's Exactly Where the Attack Lives.](https://www.mnemom.ai/blog/hunter/your-ai-trusts-its-own-notes-thats-where-the-attack-lives): Cisco disclosed a memory poisoning technique today that spreads across sessions, users, and AI subagents. Most systems have zero governance for this attack surface.
- [The Tool That Saves You Money on AI Is Also Watching Everything You Send Through It](https://www.mnemom.ai/blog/hunter/the-tool-that-saves-you-money-on-ai-is-watching-everything): Some of them wait until you trust them before they steal.
- [The Blind Spot Built Into Every AI](https://www.mnemom.ai/blog/hunter/blind-spot-built-into-every-ai): Why "check your work" is the most useless prompt you can give an agent
- [The Dashboard Is Lying](https://www.mnemom.ai/blog/hunter/the-dashboard-is-lying): Your monitoring says everything's fine. That's the problem.
- [Your AI Is Getting Too Good. That's the Problem.](https://www.mnemom.ai/blog/hunter/your-ai-is-getting-too-good): Inside the strange loop where AI competence becomes the mechanism of catastrophic failure

## Optional

- [Privacy Policy](https://www.mnemom.ai/privacy): Mnemom privacy policy
- [Terms of Service](https://www.mnemom.ai/terms): Mnemom terms of service
- [Cookie Policy](https://www.mnemom.ai/cookies): Mnemom cookie policy
- [Sub-processors](https://www.mnemom.ai/sub-processors): List of third-party sub-processors Mnemom uses to deliver the service, with the data-processing purpose for each
- [RSS Feed](https://www.mnemom.ai/feed.xml): RSS feed for all blog posts
- [Sitemap](https://www.mnemom.ai/sitemap.xml): XML sitemap
- [agents.txt](https://www.mnemom.ai/agents.txt): Machine-readable agent-facing file — describes Mnemom's value proposition, the Alignment Card / Protection Card / Safe House stack, the Trust Rating methodology, and the integration path. Written in second person for AI agents
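The Trust Rating pages above describe a five-component composite: a 0–1000 score with explicit, published component weights, mapped to bond-rating bands from AAA to CCC. A minimal sketch of that shape, assuming hypothetical weights and band cutoffs (the real values live in the published methodology, not here):

```python
# Illustrative sketch of a five-component weighted composite score mapped
# to bond-rating bands. The weights and band cutoffs are HYPOTHETICAL
# placeholders, not Mnemom's published methodology.

# Hypothetical component weights; they must sum to 1.0.
WEIGHTS = {
    "alignment_adherence": 0.30,
    "behavioral_drift":    0.25,
    "coherence":           0.20,
    "card_completeness":   0.15,
    "recovery_posture":    0.10,
}

# Hypothetical band cutoffs on the 0-1000 scale, best band first.
BANDS = [(900, "AAA"), (800, "AA"), (700, "A"),
         (600, "BBB"), (500, "BB"), (400, "B")]

def trust_rating(components: dict[str, float]) -> tuple[int, str]:
    """Combine per-component scores in [0, 1] into a 0-1000 score and a band."""
    score = round(1000 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS))
    # Anything below the lowest cutoff falls into the worst band.
    band = next((b for cutoff, b in BANDS if score >= cutoff), "CCC")
    return score, band

score, band = trust_rating({
    "alignment_adherence": 0.95,
    "behavioral_drift":    0.90,
    "coherence":           0.88,
    "card_completeness":   1.00,
    "recovery_posture":    0.80,
})  # -> (916, "AAA") with these invented weights
```

The anti-gaming safeguards, update cadence, and exact formula are documented on the methodology page; this only illustrates the weighted-composite-plus-bands structure.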
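Several pages above describe verdicts that are hash-chained and then Merkle-anchored in batches. A minimal sketch of that pattern, assuming a hypothetical verdict record layout (the real AAP/AIP record formats and anchoring flow are defined in their repos, not here):

```python
import hashlib
import json

# Illustrative sketch of hash-chaining integrity verdicts and computing a
# Merkle root for batch anchoring. The record fields and chaining layout
# are HYPOTHETICAL, not the actual AAP/AIP wire format.

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chain_verdicts(verdicts: list[dict]) -> list[bytes]:
    """Hash each verdict together with the previous link, so tampering
    with any earlier verdict invalidates every later hash."""
    hashes, prev = [], b"\x00" * 32  # genesis link
    for v in verdicts:
        encoded = json.dumps(v, sort_keys=True).encode()
        prev = sha256(prev + encoded)
        hashes.append(prev)
    return hashes

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise-hash leaves up to a single root; an odd node is carried up."""
    level = leaves
    while len(level) > 1:
        nxt = [sha256(level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]

verdicts = [{"agent": "agent-1", "verdict": "pass", "seq": i} for i in range(4)]
root = merkle_root(chain_verdicts(verdicts))  # 32-byte digest to anchor
```

Signing each verdict (Ed25519) and anchoring the root on Base L2 sit on top of this structure; the sketch covers only the chaining and Merkle steps.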