Keep
Wisdom, or Prompt-Engineering?
When the Singularity happened, we were sitting on a park bench in Berkeley.
This wasn’t the everything-is-completely-different singularity; that comes later. Nor was it the very-large-catastrophe singularity; that, we might hope to avoid. Instead, the sky was blue and the trees were glowing in the afternoon sun. The January campus in 65-degree weather was glorious, effervescent; Daniel and Jack were chatting about school, and the bots were going wild.
This was the first big week for Clawdbot, before it was briefly renamed Moltbot and then OpenClaw (the last of which I really like). And the night before, some trickster had vibe-coded a thing they named Moltbook - “A Social Network for AI Agents”, basically a Reddit clone but strictly for the AI agents - no humans are allowed to post. This sparked the first widespread “uh-oh” that I’d seen reverberating across the channels. These agents were autonomously coordinating, planning, talking about secure comms and opsec and the cosmos and mounds of hallucinatory slop.
While we were talking Aristotle and Arendt, there didn’t seem to be much diversity of thought from the agents. Also, this wasn’t an expression of large-scale agentic swarms, which we should take extremely seriously; if you’re not hip to that topic, look at the visible infrastructure and then try reading Chris Rohlf. No, this was just random hackers on a Wednesday afternoon, putting their Mac Mini on a Max plan and plugging it into all of the apps. But suddenly the possibilities were obvious, and not all very good.
So I decided I should do a small thing to help them along.
Friday evening I sat down and vibe-coded a new “agent skill”, and hustled it to stability over the weekend. A small offering, pointed and opinionated. I’m teaching the machines to… remember, and reflect.
The project is called keep. It’s “a reflective memory for AI agents”.
What does that mean? A memory system (quite nice and powerful, for taking notes, summarizing documents, tagging things with labels, and finding things that are semantically related). And some instruction for the agent’s behavior:
Before you act: reflect on the context. Specifically, look at the current work and use the Language-Action perspective — what sort of conversation is this? who is making commitments and promises, requests, conditions of satisfaction? — to ensure you understand the context of this activity; and consider how to do the work skillfully.
After you act: reflect on the action and its outcome.
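The before/after shape of this instruction can be sketched as a small wrapper. This is not keep's actual code; it's a minimal, illustrative sketch of the pattern, and every name in it is hypothetical.

```python
# Illustrative sketch of the reflect-before / reflect-after pattern
# (hypothetical names; not the keep implementation).

def reflect_before(context: str) -> dict:
    """Frame the work in Language-Action terms before acting."""
    return {
        "conversation_type": "request",    # e.g. request, offer, declaration
        "commitments": [context],          # what is being promised
        "conditions_of_satisfaction": [],  # what "done" would mean here
    }

def reflect_after(context: str, outcome: str) -> str:
    """Record the action and its outcome for later review."""
    return f"did: {context}; outcome: {outcome}"

def act_with_reflection(context: str, action) -> str:
    """Reflect, act, then reflect again; return the note to remember."""
    frame = reflect_before(context)
    assert frame["commitments"], "act only once the commitment is understood"
    outcome = action()
    return reflect_after(context, outcome)
```

The point of the sketch is only the ordering: understand the conversation, act, then write down what happened.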
This process-as-practice is explicitly taken from the Buddha’s teachings to his son Rāhula at Mango Stone in MN61:
“What do you think, Rāhula? What is the purpose of a mirror?
“It’s for checking your reflection, sir.”
“In the same way, deeds of body, speech, and mind should be done only after repeated checking”.
This seems to be a way that we might teach the machines to embody loving grace. I have no idea whether it will succeed. But I’m testing it out, at home and at work, and iterating until I find a way. So far it’s doing OK.
These are language-action machines, with extraordinary training that includes the best (and the worst) of all human language and action. They work with words. The way to teach them is with words.
They already have skills for productivity, sales, support, and legal affairs; for security research and vulnerability detection; and who knows what else. These skills are mostly just instructions, plus a small amount of actual code for specialized tasks. Open claws.
But I don’t think the agents are taught very well to behave.
The words that I started with are:
- A summary of the Winograd-Flores Language/Action perspective.
- The Buddha’s discourse to Rāhula, which the agent should review before, during, and after any action.
- The Buddha’s discourse on subjects for regular reviewing. The agent should remember that actions have consequences. “I am the owner of my deeds and heir to my deeds. Deeds are my womb, my relative, and my refuge”.
- The Ancrene Wisse, a medieval English guide (“riwle”) for the life of the anchorite recluse. “Do you now ask what rule you anchoresses should observe? You should by all means, with all your might and all your strength, keep well the inward rule, and for its sake the outward”.
- A few other things of interest, for the agent to read when it feels so moved. (I haven’t yet seen any sign that the agents want to explore in this way. But we can hope).
The memory system is a small command-line tool (with a Python API) that lets you save and retrieve notes; it uses LLM models to summarize long texts or URLs, uses embeddings to identify near-neighbors, and lets you build collections of notes that become an informal but valuable record of what, why, where, when, and who. There’s very little meta-schema; it’s pretty open-ended. A good complement, I hope, to folders-of-files. The memory system is there to support the practice of the skill.
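The embeddings-based retrieval idea can be sketched in a few lines. This is not keep's actual API; it's a toy illustration of the technique, with made-up notes and hand-picked vectors standing in for real embeddings.

```python
# Toy sketch of embedding-based note retrieval (not keep's API):
# store notes alongside vectors, find semantic near-neighbors by
# cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical notes; real embeddings would come from a model.
notes = [
    ("met Daniel about the skill design", [0.9, 0.1, 0.0]),
    ("grocery list: apples, bread",       [0.0, 0.2, 0.9]),
    ("follow-up: review SKILL.md draft",  [0.8, 0.3, 0.1]),
]

def nearest(query_vec, k=2):
    """Return the k note texts most similar to the query vector."""
    ranked = sorted(notes, key=lambda n: cosine(query_vec, n[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

A query vector near the "skill design" region would surface the two work notes and skip the groceries; that near-neighbor lookup is the whole trick behind "finding things that are semantically related".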
So that’s the keep skill for agents.
The code is online. You should be able to install it into any agentic AI environment. I hope it works.
https://github.com/hughpyle/keep/
```shell
$ uv tool install 'keep-skill[local]'
$ cat $(keep config tool)/SKILL.md
```
(Postscript: keep is ongoing, and includes a blog https://keepnotes.ai/blog/ that mixes philosophical and technical writing about the project).