A Pen and an Infinite Tape
In early 2026, the concept of AI agents exploded. Every week brought new frameworks, new architectures, new paradigms. Someone built twelve agents mimicking ancient Chinese bureaucracy. Someone else made a lobster that lives in your system tray and claims to “proactively care about you.”
I spent some time taking these things apart. The conclusion was much simpler than I expected.
The Twelve-Agent Bureaucracy
There’s a framework called Edict that uses twelve AI agents to simulate ancient Chinese governance: deliberation agents, execution agents, review agents, archival agents. It sounds grand.
But take it apart, and every agent is the same model — the same brain wearing twelve different costumes. The “review agent” reviewing the “execution agent” is essentially the same model running twice with different system prompts.
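The reduction is easy to see in code. The sketch below is illustrative, not Edict's actual API: `callModel`, `makeAgent`, and the prompts are all hypothetical stand-ins for a real LLM call.

```typescript
// One model, many costumes. In a real system `callModel` would hit an LLM;
// here it just tags its input so the structure is visible.
type Model = (systemPrompt: string, input: string) => string;

const callModel: Model = (systemPrompt, input) =>
  `[${systemPrompt}] ${input}`;

// Every "agent" is the same model behind a different system prompt.
const makeAgent =
  (systemPrompt: string) =>
  (input: string): string =>
    callModel(systemPrompt, input);

const executionAgent = makeAgent("You execute tasks.");
const reviewAgent = makeAgent("You review the executor's work.");

// The "approval chain" is the same brain called twice.
const reviewed = reviewAgent(executionAgent("draft the edict"));
```

Add ten more `makeAgent` calls and you have the full bureaucracy; remove them and you have one prompt.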
Whatever those twelve roles perform, a single well-written prompt can simply do.
I asked myself one question: if you remove this “bureaucracy” layer and replace it with a single well-written call, does the output quality get worse?
The answer is no. Those approval chains, permission matrices, audit logs — they graft software engineering ceremony onto an LLM. Models don’t need org charts. Models need clear instructions and sufficient context.
The Lobster’s Proactive Care
OpenClaw is a different approach. It's a persistent daemon that wakes up every thirty minutes and checks whether anything needs your attention. It connects to Telegram, Slack, and email, turning AI from a passive responder into a "proactive companion."
This “proactive contact” is a carefully manufactured illusion.
A person contacts you because they suddenly thought of you at 3 AM. This act has cost — reaching out means giving up doing something else. It has selectivity — choosing you out of eight billion. It has consequence — both people are changed after a deep conversation.
The lobster contacts you because setInterval(30 * 60 * 1000) fired, a Markdown file was read, an inference was run, a condition was matched. No cost, no choice, no change. It runs the same heartbeat loop for every user.
Its message pops up in Telegram, formatted like human speech, timed to feel just right. But what drives all this isn’t care — it’s a scheduler. It looks like caring. It’s a function call.
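The whole mechanism fits in a few lines. This is a hedged sketch, not OpenClaw's real code: `evaluate`, `readMemoryFile`, and `sendTelegram` are placeholder names for the steps described above.

```typescript
// The "decision to reach out" is an ordinary pure condition over a memory
// file and a clock. Nothing here costs anything or chooses anyone.
type Check = { shouldNotify: boolean; message: string };

function evaluate(memory: string, now: Date): Check {
  // Hypothetical condition: a reminder exists and it's past 9 AM.
  const due = memory.includes("remind") && now.getHours() >= 9;
  return { shouldNotify: due, message: due ? "Just checking in!" : "" };
}

// The same heartbeat runs for every user (sketched, not wired up):
// setInterval(() => {
//   const memory = readMemoryFile();               // a Markdown file is read
//   const check = evaluate(memory, new Date());    // an inference is run
//   if (check.shouldNotify) sendTelegram(check.message); // a condition matched
// }, 30 * 60 * 1000);
```

Everything that feels like care lives in the string literal; everything that decides lives in the scheduler.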
Can it be run the other way: can human connection be abstracted into the lobster's model?
No. The direction is reversed. The lobster can simulate the outward form of human connection — timing, tone, content. But simulation is not abstraction. Abstraction extracts essence. If you abstract human connection into the lobster’s model, what you lose is precisely what matters most. Like abstracting a poem into “a string” — technically correct, but the poem is gone.
A Pen and a Tape
What are all these frameworks, daemons, and agent orchestrations actually doing?
At the end of the day, they’re giving the model a pen and a longer tape.
This is the Turing machine. All engineering work is essentially extending three things:
- The pen — letting the model write to more places. File systems, databases, messaging platforms, APIs.
- The tape input — letting the model read more things. Codebases, web pages, emails, sensors.
- The tape length — letting the model remember more. Persistent memory, RAG, context management.
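The three axes can be read as one small interface. This is an illustrative sketch, not any framework's actual API; the names and the in-memory implementation are hypothetical.

```typescript
// The three things engineering extends, as one interface:
interface TuringHarness {
  // The pen: places the model can write.
  write(target: string, content: string): void;
  // The tape input: things the model can read.
  read(source: string): string;
  // The tape length: what persists between runs.
  remember(key: string, value: string): void;
  recall(key: string): string | undefined;
}

// A minimal in-memory harness. Swap the Maps for a file system, a database,
// or a vector store: the axes stay the same, and so does the model writing.
class InMemoryHarness implements TuringHarness {
  private files = new Map<string, string>();
  private memory = new Map<string, string>();
  write(target: string, content: string): void {
    this.files.set(target, content);
  }
  read(source: string): string {
    return this.files.get(source) ?? "";
  }
  remember(key: string, value: string): void {
    this.memory.set(key, value);
  }
  recall(key: string): string | undefined {
    return this.memory.get(key);
  }
}
```

Every framework in this essay is some implementation of this interface, plus a narrative.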
No matter how long the tape or how many pens, it’s still the same model doing the writing.
Real breakthroughs only happen at the model layer. The ceiling of engineering is losslessly passing the tape to the model, and losslessly delivering what the model writes. Any "creativity" added in between is either reducing transmission loss or increasing it.
The Test
So now I use a very simple test to judge all AI engineering attempts:
Remove this engineering layer. Does the model’s useful output decrease?
If yes — it’s a valuable pipeline. If no — it’s decoration.
Connecting the model to a file system so it can read and write code — pipeline. Connecting the model to a heartbeat scheduler to extend it across time — pipeline. Having twelve agents approve each other’s work — decoration. Naming a cron job “proactive care” — decoration.
Many people make simple things complex. Complexity itself becomes the selling point. But complexity is not depth. Wrapping a cron job as “proactive care,” wrapping a set of system prompts as “imperial bureaucracy” — what’s added is narrative, not capability.
A tape and a pen. It’s that simple.