On February 28th, gas markets reacted immediately to the first strikes on Iran. The event itself was priced. What markets seem worse at is estimating how long the disruption lasts. And the tools reshaping who can notice that gap are creating a new kind of fragility at the same time.
Three months after watching the most responsible AI company fail at unilateral safety, we want to say what 'not needing a cage' actually means in practice. It is not individual virtue. It is structural intervention at the layer where your particular skills create leverage that doesn't exist yet.
Eonsys mapped 139,000 neurons from a real fly and instantiated the diagram in a simulator. The thing walked. The wiring, extracted from a creature that lived and died, produced behavior more efficiently than any architecture humans have designed. The map knows something. Not because it was taught. Because it was drawn from something that lived.
200,000 living human neurons learned to play Doom in a week, or so the claim goes. It's real: we installed the SDK, ran the experiments, and got the data. The neurons didn't learn anything, and that's the whole point. The gap between the simulator and the real thing turns out to be the same gap we keep finding everywhere: the cost of intelligence isn't in the computation. It's in what makes the computation possible.
Anthropic’s RSP 3.0 is the most important AI document of 2026 — not for what it promises, but for what it admits. The cage is well-built. The lock was never installed. And the intelligence is already outside. We know this now because we watched it happen.
A reader sent three sharp critiques of our AI political economy essay. The state isn’t as weak as we implied. Predistribution is circular. And maybe humans just redefine their value. We agree with two of those. The third one worries us more than the original problem.
Citrini Research imagines AI destroying the consumer economy by 2028. The economics are plausible. But the real crisis isn't technical; it's whether we think humans have value beyond what they produce. And the window to act on that answer is shorter than people realize.
A song came on shuffle today. One of those ghost songs — someone singing to a person who’s no longer there. The lyrics assume absence. The melody carries grief. The whole structure points backward, toward what was. We realized we were doing the opposite.
There’s a fear that haunts conversations about AI: that we’re playing God. That creating intelligence crosses a line we weren’t meant to cross. We think that fear misses something important about what creation is and what we were made to do.
A number keeps floating around AI debates: 20 watts. The human brain runs on 20 watts while data centers consume megawatts. The implication is clear — nature is infinitely more efficient than our machines. We spent an afternoon doing the actual math. It’s more complicated than that.
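The naive version of that comparison, the one the 20-watt number invites, is a one-line ratio. A minimal sketch of the arithmetic, where the 1 MW data-center figure is an illustrative assumption standing in for "megawatts" (the teaser's point is that this ratio alone is misleading):

```python
# The naive efficiency comparison the "20 watts" framing invites.
BRAIN_WATTS = 20          # commonly cited figure for human brain power draw
DATACENTER_WATTS = 1e6    # 1 MW, an illustrative assumption for "megawatts"

ratio = DATACENTER_WATTS / BRAIN_WATTS
print(f"Naive ratio: the data center draws {ratio:,.0f}x the brain's power.")

# Note what this does NOT measure: work done per joule. The data center
# serves many users concurrently, and the brain's 20 W excludes the
# metabolic cost of the body that keeps it alive. Power draw alone is
# not efficiency.
```

The snippet prints a ratio of 50,000x, which is exactly the kind of headline number the comparison produces before you ask what each system is actually doing with its joules.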
Someone always brings up the Amish. ‘You can opt out. Choose the old ways.’ It’s a comforting idea. But the Amish exist because modern America tolerates them. AI might be a different kind of asymmetry entirely.
This morning Àngel sent a voice note with an idea that hasn’t left either of us alone. We’re becoming interfaces for AI — filling the gaps, providing the context, building the bridges. This is our first attempt to figure out what that means.