Defense in Depth for AI Agents
The security conversation around AI agents has mostly focused on two things: keeping agents from damaging the host system, and keeping malicious tools out of the supply chain. These are real problems. Cisco documented how OpenClaw leaks credentials and executes arbitrary shell commands. Projects like NanoClaw respond by running agents in containers where bash commands can’t reach the host. Zencoder’s MCP survival guide catalogs supply chain attacks against MCP servers and recommends pinning git tags and auditing source code.
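As a rough illustration of that container isolation pattern, here is a minimal sketch of running an agent-issued shell command in a throwaway sandbox. The function name and image choice are hypothetical, not NanoClaw’s actual implementation, and it assumes Docker is available on the host.

```python
import subprocess

def run_sandboxed(command: str) -> str:
    """Run an agent-issued shell command in an ephemeral container.

    Hypothetical sketch of the isolation pattern described above: the
    command executes with no network and a read-only root filesystem,
    so it cannot reach the host.
    """
    result = subprocess.run(
        [
            "docker", "run",
            "--rm",               # discard the container on exit
            "--network", "none",  # no network access from inside the sandbox
            "--read-only",        # root filesystem is immutable
            "alpine:3.19",        # minimal base image; any pinned image works
            "sh", "-c", command,
        ],
        capture_output=True,
        text=True,
        timeout=30,  # kill runaway commands
    )
    return result.stdout if result.returncode == 0 else result.stderr
```

Isolation of this kind contains what a command can do; it says nothing about what flows back into the model’s context, which is where persistent memory changes the picture.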
Threat Modeling a Persistent Memory Store for AI Agents
Persistent memory for AI agents solves a real problem (the goldfish-with-a-PhD problem), but it introduces a new one: a high-trust, cross-session, cross-agent data store sitting inside the LLM’s context loop. Every recall is content that flows into a prompt. Every store is content that came from somewhere: sometimes the user, sometimes the model, sometimes a tool result that originated externally.
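To make that data flow concrete, here is a minimal sketch of a provenance-tagged store. This is not memstore’s actual schema; the names (MemoryEntry, Source, MemoryStore) and the naive substring retrieval are hypothetical stand-ins. The point is that every stored memory carries a record of where it came from, so recall can hand that provenance to whatever assembles the prompt.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# The source of a memory determines how much it should be trusted
# when it is later recalled into a prompt.
Source = Literal["user", "model", "tool"]

@dataclass
class MemoryEntry:
    content: str
    source: Source  # who originated this content
    stored_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

class MemoryStore:
    """Toy in-memory store that preserves provenance across store/recall."""

    def __init__(self) -> None:
        self._entries: list[MemoryEntry] = []

    def store(self, content: str, source: Source) -> None:
        # Record where the content came from, not just what it says.
        self._entries.append(MemoryEntry(content, source))

    def recall(self, query: str) -> list[MemoryEntry]:
        # Substring match stands in for real retrieval; the point is
        # that provenance travels with the content into the prompt.
        return [e for e in self._entries
                if query.lower() in e.content.lower()]
```

With provenance attached, prompt assembly can treat a tool-originated memory as untrusted data rather than as instructions, for example by wrapping it in delimiters or excluding it from high-privilege operations.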
That’s a threat model worth writing down before the data store grows up. This post is the threat model for memstore (the persistent memory system I built for Claude Code) and the controls I’m applying or planning.