HUMOR: 35%
In Interstellar, TARS has a configurable humor setting. Cooper runs it at 75%.
It’s played for laughs, but it’s also one of the more honest things in the film:
personality as a dial, mood as a parameter.
I gave Claude Code the same feature.
My AI coding assistant runs with a UserPromptSubmit hook — a small script that fires
on every prompt and injects context into the conversation. It was already injecting the
current date and time. I decided to also inject a humor percentage, dynamically computed
from the hour of day:
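A minimal sketch of what such a hook could look like. The time-of-day schedule and thresholds below are illustrative assumptions, not the real curve; the hook simply prints lines to stdout, which Claude Code injects as context:

```python
#!/usr/bin/env python3
# UserPromptSubmit hook sketch: emit the current date/time plus a
# humor percentage derived from the hour of day.
# The schedule below is a made-up example, not the author's actual mapping.
from datetime import datetime

def humor_for_hour(hour: int) -> int:
    """Map the hour of day to a humor percentage (hypothetical schedule)."""
    if 9 <= hour < 17:   # working hours: keep it mostly serious
        return 20
    if 17 <= hour < 23:  # evenings: loosen up
        return 50
    return 75            # late night: full Cooper

now = datetime.now()
print(f"Current time: {now:%Y-%m-%d %H:%M}")
print(f"HUMOR: {humor_for_hour(now.hour)}%")
```

Anything the hook writes to stdout lands in the conversation before the model sees the prompt, so the setting costs nothing to maintain: it just follows the clock.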
Operational Lessons from 1,500 Memstore Facts
Building a persistent memory system for AI agents is one problem. Operating it at scale
is a different one. After two months of daily use, memstore holds around 1,500 active facts
across a dozen projects — project architecture, coding conventions, design decisions,
cross-cutting invariants, and roughly a thousand symbol-level descriptions generated by
running an LLM over every Go function in every codebase I work on. The system works. The
recall pipeline surfaces relevant context on every prompt without manual search. But getting
from “works” to “works well” required a series of scoring adjustments, noise filters, and
feedback mechanisms that the original design didn’t anticipate.
Fact Supersession: Version Control for Knowledge
Most memory systems for AI agents treat knowledge as a key-value store. Write a fact, overwrite it later, old value is gone. That works for simple preferences — “use dark mode” doesn’t need a paper trail. But knowledge that evolves over time is a different problem. When a design decision turns out to be wrong, or a project’s architecture shifts, or a dependency gets replaced, you don’t just want the current answer. You want to know what you believed before, when it changed, and ideally why. Losing that history means losing the reasoning trail, and reasoning is the expensive part. Memstore’s supersession system brings version control semantics to AI memory: facts get replaced, not erased, and the full chain of revisions is preserved.
Proactive Context Injection with Claude Code Hooks
Claude Code sessions start with amnesia. Every conversation begins cold — no memory of what you worked on yesterday, what invariants matter in the file you’re about to edit, or what tasks are still pending from last week. CLAUDE.md helps by injecting static project context, and MCP tools let the model search for facts on demand. But both of those require something to go right first: either the static file happens to cover the relevant topic, or the model decides to search before acting. In practice, the model often doesn’t search. It plows ahead with what it has, and the most valuable context — the constraint you documented last Tuesday, the task you left half-finished — stays in the database unsurfaced. This post is about closing that gap with hooks: small scripts that fire at specific points in the Claude Code lifecycle and inject relevant context automatically, before the model even knows it needs it.
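As a sketch of the mechanism: a UserPromptSubmit hook reads the prompt from stdin as JSON, queries the memory system, and prints recalled facts to stdout, which Claude Code injects into the conversation. The `memstore recall` CLI invocation below is a hypothetical stand-in for however your memory system is queried:

```python
#!/usr/bin/env python3
# UserPromptSubmit hook sketch: surface memory relevant to the current prompt
# before the model acts. The `memstore recall` command is an assumed interface.
import json
import subprocess
import sys

def format_context(facts: list[str]) -> str:
    """Wrap recalled facts in a labelled block the model can't miss."""
    if not facts:
        return ""
    return "\n".join(["Relevant facts from memory:"] + [f"- {f}" for f in facts])

def main() -> None:
    try:
        event = json.load(sys.stdin)  # hook input arrives as JSON on stdin
    except json.JSONDecodeError:
        return  # no input (e.g. run outside the hook harness): do nothing
    prompt = event.get("prompt", "")
    try:
        # Hypothetical CLI; one fact per line of stdout is assumed.
        result = subprocess.run(
            ["memstore", "recall", prompt],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return  # memory system not installed: stay silent
    if result.returncode == 0:
        facts = [line for line in result.stdout.splitlines() if line.strip()]
        print(format_context(facts))

if __name__ == "__main__":
    main()
```

The point of the hook is the inversion of responsibility: instead of hoping the model decides to search, the recall happens unconditionally, on every prompt.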
Building Persistent Memory for AI Agents
AI coding assistants are goldfish with a PhD. They can solve complex problems within a single session — refactoring a module, debugging a race condition, designing an API — but the moment the conversation ends, everything they learned about your project evaporates. After months of building software with Claude Code, I found myself re-explaining the same project conventions, the same architectural decisions, the same mistakes we’d already caught and fixed, at the start of every session. CLAUDE.md files help, but they’re static and they don’t scale. You can’t stuff a dozen projects’ worth of design context into a single markdown file. So I built memstore: a persistent memory system that gives AI agents durable, searchable knowledge across sessions, projects, and machines.
Transcribing D&D Sessions with WhisperX and Speaker Diarization
I play in two weekly D&D groups and write session reports as narrative prose from the characters’ perspectives. The reports expand on what happened at the table, adding dialog and internal monologue in each character’s voice. This workflow has evolved through several iterations, each one solving a problem the previous version left on the table.
How it started
The first version was simple: play the session, take notes, write the report from memory afterward. This worked when I had time, but a four-hour session generates a lot of material, and between work and life, writing sometimes slipped by a week. By then the details had faded. The bullet-point notes I’d scribbled during play were thin on dialog and light on the small moments that make session reports worth reading.
Hacker versus cracker
By Matthew Hunter
| Apr 2, 2023 | gcih

In the early days of the internet, and even before that, there was a distinct difference in the terminology used for the people who obtained unauthorized access to computer systems. The term hacker meant someone who created an interesting hack: usually something that used a system (not necessarily even a computer system) to do something outside its design intent. A Rube Goldberg machine is a good example of a hack. So is playing music with printers. Conversely, cracker was applied to people who broke into computer systems for nefarious purposes. There was often some overlap between the two, as people making interesting hacks often didn't have authorized access to the systems they were using.
GIAC Certified Incident Handler
By Matthew Hunter
| Mar 29, 2023 | gcih

Last weekend, I took the certification exam to become a GIAC Certified Incident Handler. Both the exam and the course material leading up to it were interesting enough to deserve a few comments.
One thing I was moderately surprised by in the SANS course was the initial focus on Linux shell tools and Windows PowerShell. I've been using Linux for a long time, so there weren't any surprises there. The PowerShell material was new to me.