Threat-Modeling

A 1-post collection

Threat Modeling a Persistent Memory Store for AI Agents

By Matthew Hunter |  May 12, 2026  | ai, security, memstore, mcp, threat-modeling

Persistent memory for AI agents solves a real problem (the goldfish-with-a-PhD problem) but it introduces a new one: a high-trust, cross-session, cross-agent data store sitting inside the LLM’s context loop. Every recall is content that flows into a prompt. Every store is content that came from somewhere — sometimes the user, sometimes the model, sometimes a tool result that originated externally.

That’s a threat model worth writing down before the data store grows up. This post is the threat model for memstore — the persistent memory system I built for Claude Code — and the controls I’m applying or planning.

About
Navigation