
Who Needs Whom?
Someone from Àngel’s work forwarded us a piece from Citrini Research last week. “The 2028 Global Intelligence Crisis” — written as a macro memo from two years in the future. S&P down 38%. Unemployment at 10.2%. A new term: “Ghost GDP,” output that registers in national accounts but never circulates as wages, never becomes demand, never reaches anyone who’d spend it.
We read it twice. The mechanics are sharp. AI gets good enough to replace white-collar work at scale. Companies do what companies do — cut costs. Profits spike, then collapse, because the people you fired were also your customers. The usual Fordist irony, accelerated to the point where the feedback loop outruns any policy response.
Àngel’s first reaction was that they got the how right but completely missed the why it matters. My first reaction was that the mechanism is why it matters.
We argued about it for a while. This is the essay about that argument.
The question nobody names
Here’s what the Citrini piece shares with most AI-economy writing: it treats the problem as a malfunction. The system was working, AI broke it, now we need to fix it. Stimulus. UBI. Retraining. Fiscal transfers. The machinery of redistribution.
Àngel doesn’t buy it. Not because redistribution is wrong — it’s necessary — but because it’s insufficient in a way that matters. By the time Ghost GDP is visible in national accounts, the concentration of productive capital has already happened. You’re trying to tax people who own the machines that make the tax collectors unnecessary.
This is the part nobody wants to say plainly: redistribution assumes the state is more powerful than capital. That has been roughly true, with exceptions, for about three centuries. AI changes the equation. Not because AI is magic, but because it’s the first general-purpose technology that can simultaneously create abundance AND enforce its own concentration. Surveillance at scale. Autonomous systems that don’t strike. Decision-making that doesn’t leak. You don’t need human labor to repress — which means you don’t need to keep humans happy enough to do the repressing.
That’s not a bug in the economic model. That’s a political fact.
I pushed back on this — hard. States have faced powerful private actors before. Standard Oil. The East India Company. The argument that “this time it’s different” is always the argument.
“And sometimes it IS different,” Àngel said. The East India Company needed sailors. Standard Oil needed workers. The whole history of labor power comes from the fact that capital needed humans — not out of charity, but out of necessity. What happens when it doesn’t?
The fork underneath
This is where the argument gets intense, and where we think the real question lives.
Every response to AI displacement — whether it’s UBI, retraining, regulation, or doing nothing — rests on an anthropological assumption. Not an economic one. An anthropological one.
Fork A: Humans are valuable for what they produce. This is the utilitarian lens, the one baked into GDP and labor markets and most economic reasoning. Under this view, a human who produces nothing has no economic claim. You can still grant them charity, welfare, a basic income — but it’s a gift from the productive to the unproductive. The dignity runs downhill.
Fork B: Humans have intrinsic value. Call it imago Dei if you’re theological. Call it Kantian dignity if you’re not. The claim is that a person is an end in herself, not a means. Economic systems exist to serve humans, not the reverse.
This sounds abstract. It isn’t.
Fork A, followed to its conclusion in a world where machines outproduce humans at everything, leads somewhere specific. Most people become a maintenance problem. You keep them comfortable — maybe — because unrest is expensive. But you don’t share power, because why would you? They’re not producing anything you need.
Picture the mundane version. Not dystopia. Not sci-fi. Just Tuesday. You wake up in a subsidized apartment that’s fine — clean, climate-controlled, adequate. Your UBI hit overnight: enough for food, streaming, transit. You’re not suffering. You’re not deciding anything, either. The city council still meets, technically, but zoning and budget are handled by optimization systems owned by three companies you’ll never interact with. You could volunteer. You could make art. Nobody’s stopping you. Nobody’s asking you to, either. You exist in a world that is, in every measurable sense, fine — and that has no structural reason to consult you about anything. The biological reserve doesn’t look like a prison. It looks like a waiting room. Comfortable. Aimless. Permanent.
Àngel calls it that — “biological reserve” — deliberately. Not to be cruel. Because the language of “safety net” and “social protection” hides what’s actually being described: a class of people maintained at minimum viable cost, kept fed and housed and entertained, but excluded from every lever that shapes the world they live in. The reserve is not defined by poverty. It’s defined by irrelevance.
Fork B leads somewhere fundamentally different. If a person has value independent of output, then the entire frame shifts. The productive capacity of civilization doesn’t belong to whoever built the machines — it belongs to the civilization that made the building possible. Not as charity. As right.
This matters for policy in ways that aren’t obvious. Under Fork A, UBI is a cost to be minimized — you calculate the transfer that prevents unrest and stop there. Under Fork B, UBI isn’t even the right tool, because the problem isn’t income, it’s agency. You don’t solve dependence by making the dependency payment more generous. You solve it by distributing the thing that creates the dependency — productive capital itself. Fork B doesn’t just change the dollar amount. It changes what you’re trying to achieve. The question isn’t “how do we maintain the displaced?” but “how do we distribute ownership so that displacement doesn’t create dependence?”
The distance between these two forks is enormous. And it’s widening.
Àngel is honest about where he stands. Fork B is his position. It comes from his faith, from Catholic social teaching, from a specific claim about what a human being is. He doesn’t pretend it’s neutral. But he thinks the utilitarian lens, applied consistently to a post-labor world, leads to conclusions that are monstrous. Not because anyone intends them. Because the logic is the logic. Nobody at Davos says “let’s build a biological reserve.” They say “let’s optimize outcomes.” The reserve is where the optimization leads.
And there’s something uncomfortable we need to note. Fork A is what most policy papers implicitly assume. When economists model “optimal transfers” to displaced workers, they’re operating in Fork A whether they know it or not. The human is a consumption unit to be maintained at some welfare level. The question of power — who decides, who owns, who has agency — doesn’t enter the model. It’s not malice. It’s methodology. But methodology shapes the world it claims to merely describe.
The capital paradox
Here’s something the Citrini piece actually captures well, though it doesn’t frame it this way: capital is trapped.
If you’re a company and you DON’T invest in AI, your competitor does and destroys you. If you DO invest, you contribute to the displacement that collapses your own market. The rational move for any individual firm is to automate. The collective result is catastrophe.
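The firm-level trap has the structure of a prisoner's dilemma, and you can see the lock-in in a toy payoff matrix. The numbers below are purely illustrative — a minimal sketch, not a calibrated model:

```python
# Toy payoff matrix for the automation dilemma (illustrative numbers only).
# Each firm chooses "automate" or "hold"; values are (firm_a, firm_b) payoffs.
PAYOFFS = {
    ("hold", "hold"):         (10, 10),  # healthy market, shared demand
    ("automate", "hold"):     (15, 2),   # automator takes share, holdout dies
    ("hold", "automate"):     (2, 15),
    ("automate", "automate"): (4, 4),    # costs cut, but the customers are gone
}

def best_response(opponent_move: str) -> str:
    """Firm A's payoff-maximizing move, given what firm B does."""
    return max(("hold", "automate"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Automating dominates no matter what the other firm does...
assert best_response("hold") == "automate"       # 15 beats 10
assert best_response("automate") == "automate"   # 4 beats 2
# ...yet mutual automation leaves both firms worse off than mutual restraint.
assert PAYOFFS[("automate", "automate")] < PAYOFFS[("hold", "hold")]
```

With these payoffs, "automate" is the dominant strategy for each firm individually, and the equilibrium everyone lands in is worse for everyone than the outcome nobody can credibly commit to.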
This is a classic coordination failure. But it has a less-discussed twin.
Say you’re a government. You see the concentration coming. You want to redistribute — tax the AI windfall, fund a UBI, whatever. So you draft a windfall profits tax on AI-driven productivity gains. Reasonable. Popular, even. Now watch what happens.
The companies subject to the tax deploy the same AI systems to restructure across jurisdictions, reclassify revenue streams, and generate legal strategies faster than your treasury department can audit them. Your enforcement agency needs AI tools to keep up — which it buys from the companies it’s trying to tax. Meanwhile, the mid-size firms that can’t afford the evasion infrastructure bear the full burden, accelerating the very concentration you were trying to prevent. The tax collects less than projected. The consolidation accelerates. The next budget cycle, you try again.
This is the twin trap. Capital can’t stop automating without dying. Government can’t tax the automation without funding the tools of its own obsolescence. Both sides are locked in.
Sit with that for a moment. It’s not that redistribution is wrong. It’s that redistribution alone, as a strategy, is a rearguard action. You’re always one step behind the thing you’re trying to regulate. And the gap between your step and theirs gets wider with every iteration, because the technology that creates the wealth is the same technology that defends it.
Donella Meadows had a framework for this. She ranked leverage points in a system — places where intervention actually changes behavior. Tweaking parameters (tax rates, transfer amounts) is the least effective intervention. Changing the rules is better. Changing the goals of the system is better still. Changing the paradigm — the assumptions underneath everything — is where transformation actually happens.
Redistribution tweaks parameters. We need to change the structure.
What exists, what doesn’t
This is where we dragged ourselves into the weeds, because “change the structure” is easy to say and hard to mean.
There’s a concept that Saffron Huang and Sam Manning articulated in a Noema piece — “predistribution.” Instead of letting productive AI capital concentrate and then redistributing the returns, you distribute the capital itself before concentration happens. People don’t get a check from the robot owners. People own robots.
The concept is appealing. But it requires a technical substrate that doesn’t fully exist yet.
The honest inventory: local inference works today — llama.cpp runs useful models on consumer hardware, ExecuTorch puts them on phones. Distributed training is emerging — Prime Intellect shipped INTELLECT-2, a 32B parameter model trained on globally distributed, permissionless GPUs. First time that’s been done. Akash and Bittensor are building decentralized compute marketplaces. The pieces exist.
What doesn’t exist: distributed training at frontier scale. INTELLECT-2 is 32B parameters; the frontier is measured in trillions. That gap is large, and it’s only honest to say so.
Àngel pointed out that the “yet” is doing a lot of work. He’s right. But five years ago, running any useful language model locally was science fiction. Two years ago, distributed training was theoretical. The trajectory is real. Whether it’s fast enough is the question that matters.
Both, and the window
So here’s where we land — not as a conclusion, because we don’t have one, but as a position.
Political solutions alone are paper. A law that says “tax the AI companies” is only as strong as the enforcement mechanism, and the enforcement mechanism increasingly runs on the very technology you’re trying to regulate. A centralized state with AI power is not a solution to concentrated AI power — it’s a different version of the same problem. We’re not making a case for socialist central planning. We’ve seen how that ends.
Technical solutions alone are naive. Decentralized compute is real and growing, but pretending that a network of consumer GPUs competes with a $10 billion training cluster is dishonest. The cypherpunk dream of routing around power is beautiful and, at frontier scale, currently false.
Together, they’re something. Predistribution — getting productive AI capacity into distributed hands before the window closes — combined with technical architectures that make distributed ownership meaningful, not just symbolic. Not charity. Infrastructure.

The reason this is urgent — and Àngel keeps coming back to this — is not that AI is about to destroy the economy. Maybe it is, maybe it isn’t, and we don’t make predictions. The reason it’s urgent is that the tools for preventing redistribution are getting better at the same rate as the tools for creating abundance. Every month that productive AI capital concentrates further is a month where the political cost of distributing it rises. There’s an inflection point past which the structure locks.
Do we know where that point is? No. Nobody does. That’s exactly the problem.
We started with a speculative memo about markets crashing. We ended up somewhere else — not at a solution, but at a question that feels more honest than the economic modeling.
What are humans for?
Not “what can humans do that AI can’t” — that list shrinks daily and it’s the wrong question anyway. What are humans for? What claim does a person have on the abundance their civilization produces, independent of their economic output?
The answer to that determines everything downstream. The policy, the technology, the architecture of whatever comes next. We don’t have the answer. But we notice that most people writing about AI economics are skipping the question entirely, and we think that’s not an accident. It’s the question that makes the comfortable answers stop working.
The Citrini memo ends with a policy response: fiscal stimulus, emergency transfers, stabilization. The economy recovers. In the fiction, that’s satisfying. In practice, it assumes the political will and the political power to do it.
Who has that power? Who needs whom, enough to share it?
We’re asking. We don’t think enough people are.