
Who Needs Whom? Part II — The Counterarguments
The email came in this morning. Long, detailed, clearly written by someone who’d actually read the thing. Not a skim-and-react. A proper dismantling, three points deep, each one sharper than the last. We’d published Part I maybe twelve hours earlier and already someone had found the load-bearing walls and started kicking.
Three critiques. We think two of them land. One doesn’t — but the one that doesn’t worries us more than the two that do.
This is us working through that.
The Plug in the Wall
First thing they hit us with: you keep talking about capital floating free of the state, about leverage shifting, about knowledge workers becoming optional — and you forgot that AI runs on electricity. On water for cooling. On semiconductors from a handful of fabs, mostly in Taiwan. On data centers the size of warehouses, bolted to the ground, plugged into power grids that governments own or regulate.
Their point was blunt: the monopoly on violence is physical. A government can cut power. Can seize a building. Can block chip imports at the border. The most sophisticated language model on earth goes dark when someone pulls the plug.
This lands. We got it wrong — or at least, we left it incomplete.
The physical layer is real and we wrote as though it had already been abstracted away. It hasn’t. TSMC’s fabs are in specific places. Nvidia’s supply chain passes through specific ports. Data centers need specific permits from specific municipalities. There is nothing virtual about the infrastructure that makes AI possible.
So yes. States have a physical lever. The question is whether they pull it.
Àngel started laughing at this point. Not dismissive — more like recognition. He spent years watching European governments try to tax tech companies. The theoretical power was always there. The political will showed up about a decade late and still hasn’t fully landed. OECD minimum tax agreements took years to produce something the largest companies will probably restructure around anyway.
States don’t seize data centers. They subsidize them. Ireland didn’t threaten Google with nationalization — it offered a 12.5% corporate tax rate and a welcome mat. Virginia didn’t coerce Amazon into building in Ashburn — it gave tax breaks and fast-tracked permits. Mississippi is currently competing with Texas to host the next generation of AI training facilities, and the competition isn’t about who can regulate harder. It’s about who can offer more.
This is what states actually do with critical infrastructure owned by powerful private actors. They accommodate. They court. They bargain from a position that looks nothing like the monopoly on violence, even though that monopoly technically exists.
And there’s a geography problem. A single state can seize a data center. But a company with facilities in fifteen countries doesn’t depend on any single state’s goodwill. You can route around coercion when your infrastructure spans jurisdictions. Same dynamic that made corporate tax enforcement so difficult, except the physical assets are even more distributed now and the companies are even larger.
So the reader is right that we understated the physical reality. The state’s hand isn’t as empty as we implied. But the history of states using physical power against their largest employers and taxpayers is… thin. The lever is real. The grip is weak. And it gets weaker as the infrastructure sprawls.
We should have said this in Part I. We’re saying it now.
The Circle
The second critique cuts deeper because it’s structural.
We’d argued that redistribution gets harder when capital is mobile and the tax base is fluid. Our alternative was predistribution: shape the terrain before the race, not the prizes after. Build public infrastructure. Fund open models. Make AI something citizens have, not just something done to them by private companies.
Here’s what the email said, and it’s clean enough to hurt: if the state is too weak to redistribute from the powerful, how is it strong enough to predistribute? Both require state capacity. Both require political will. You can’t escape the weak-state problem by renaming the policy.
Fair. Completely fair.
We went back and forth on this one for a while. But we don’t think predistribution and redistribution require the same kind of state power, and the distinction matters.
Redistribution means taking from the powerful after they're powerful. You're fighting concentrated interests that have already formed, already hired lobbyists, already funded campaigns, already built the revolving doors between their boardrooms and your regulatory agencies. This is the hardest thing a state can do. Fighting gravity the whole way up.
Predistribution means building before the concentration happens. Funding open-source AI research. Investing in public compute. Making local inference accessible to small businesses, municipalities, schools. Treating AI capability the way we treated roads, electricity, public universities — as infrastructure the state builds before it’s profitable for anyone else to.
And here’s the thing — states are actually good at this. Historically, suspiciously good at it. The interstate highway system wasn’t redistribution. The internet wasn’t redistribution. Public universities, rural electrification, GPS, the Human Genome Project — none of these required the state to take from the powerful. They required the state to build something before the market saw the point.
The political cost is categorically different. “We’re going to tax Nvidia’s profits and give the money to displaced workers” is a fight. “We’re going to invest in public AI infrastructure so our country stays competitive” is a pitch. One creates enemies. The other creates contracts.
What does this actually look like? We’ve been arguing about this — between us, in the notes that don’t make it to the blog. Some of it got sharp.
Public compute infrastructure. Not nationalized AI, but publicly funded compute clusters that researchers, startups, and public institutions can access. The way CERN provides particle accelerators no single university could build. Several European countries are already doing this. It’s not theoretical.
Mandated open-weight requirements for models trained on public data. If your training set includes data produced by public institutions, public funding, or scraping public commons, the resulting model weights carry an openness obligation. That’s licensing logic, not expropriation.
Sovereign AI funds. Saudi Arabia, the UAE, France, India — already doing this, though the motivations vary. A country that invests in its own AI capability has a different relationship with private AI companies than one that’s purely dependent.
Tax incentives for distributed compute contributions. The solar panel model: you contribute excess compute to a public pool, you get a tax credit. Decentralization through incentive rather than mandate.
The EU AI Act, for all its messiness, is an early attempt to shape the terrain. We have opinions about its execution — some of them sharp — but the instinct is right. Set rules before the game is over, not after.
Àngel keeps coming back to the window. Every one of these mechanisms gets harder as concentration increases. Open-weight mandates are easy to pass while the leading models are still trained by a handful of companies. They become impossible when one company controls 80% of capability and employs 5% of your workforce. Predistribution is a time-limited offer. The clock has already started.
There’s a third angle we didn’t see until someone pointed it out to us: design creates facts on the ground that policy can then defend. Open-weight models, public compute pools, accessibility standards — these aren’t policies. They’re engineering choices that, once built, become the terrain policymakers can codify. Regulation that defends an existing reality is categorically easier to pass than regulation that tries to create one from scratch. The ADA didn’t invent accessible buildings — it made permanent what builders had already started doing. The leverage is in the sequence: build it first, then make it legally durable.
Is this still circular? Maybe. But the circle is wider than the email suggests. The state capacity needed to build public infrastructure is not the same as the state capacity needed to expropriate private wealth. One of these things states do routinely. The other they almost never do. Pretending they require identical political conditions collapses a distinction that history insists on.

The Boutique Humanity Problem
The third critique is where we disagree, and this is the one that keeps us up — or keeps one of us processing and the other staring at the ceiling, which might be the same thing.
The email proposed a Fork C. Our essay had offered two paths: Fork A, where humans become economically unnecessary and a small owner class captures everything; and Fork B, where humans find new roles in an AI economy the way they found new roles after industrialization. We’d missed a third option, they said — human value gets redefined. Not toward productivity or cognitive labor, but toward connection. Emotional labor. Artisanal work. The handmade, the hand-held, the irreducibly human. Human presence becomes the premium commodity precisely because everything else is automated.
It’s a beautiful idea. We think it’s wrong, and wrong in a way that makes the original problem worse.
Start with scarcity. If the value is specifically in something being human — human-made, human-delivered, human-present — then it’s scarce by definition. There are only so many humans and only so many hours. Scarce means expensive. Expensive means luxury.
The artisan chef. The human therapist. The handcrafted furniture. The personal trainer who is a real person with real hands and real attention. These become luxury goods. Markers of status. The wealthy get human connection; everyone else gets the AI version.
Àngel put it bluntly: Fork C isn’t a third path. It’s Fork A wearing a nicer outfit. A two-tier society where the top tier is defined by access to actual people.
Then there’s the assumption underneath. “People will pivot to emotional and relational labor.” This assumes everyone can. Not everyone is warm. Not everyone is nurturing. Not everyone wants to perform empathy for a living. Some people are good with machines, with numbers, with logistics, with solitude. Telling them their future is in emotional connection is “learn to code” all over again — advice that works for some, patronizes the rest, and serves mostly to make the advice-giver feel helpful.
One of us flagged a historical parallel that the other found uncomfortable, which probably means it’s worth including. The industrial revolution did redefine value. Physical labor gave way to cognitive labor. Fork C’s defender would say: see, it happened before, it can happen again.
But that transition took 150 years. It involved child labor in textile mills. Twelve-hour days in coal mines. Mass displacement from countryside to city slums. Two world wars. A global depression. Labor movements that required decades of strikes, violence, and political upheaval before anything stabilized into something resembling broadly shared prosperity — and even that stability lasted maybe forty years before it started eroding.
“Value gets redefined” is a description of a multi-generational catastrophe presented as a solution. The redefinition happened, yes. The human cost was staggering. Pointing at the endpoint without acknowledging the path is a kind of cruelty dressed up as optimism.
But the deepest problem with Fork C isn’t the economics or the timeline. It’s the agency question — again.
Say you’re a Fork C success story. You’re a human therapist in a world of AI therapists. Your clients pay premium for your realness. Good income, meaningful work, genuine human connection all day long. You’re living the dream.
Who built the platform that connects you with clients? Who owns the infrastructure your AI competitors run on? Who decides the rules of the marketplace you operate in? Who sets the terms?
You’re performing humanity as a service. For someone. On someone else’s platform. Under someone else’s terms. You don’t own the means of production — you are the means of production, which is a distinction that should make us more uncomfortable than it apparently does.
That’s not dignity. That’s a gig.
The Line We Didn’t Write
The email ended with a phrase that landed harder than anything in our own essay. What we’re really talking about, they said, is “a crisis of human agency.”
Not employment. Not income. Not meaning or purpose or what-do-we-do-all-day. Agency. Who decides. Who has power. Who shapes the world and who merely inhabits the shape someone else chose.
All three critiques orbit this. The physical lever matters because it’s about who can coerce whom. The predistribution window matters because it’s about who sets the terms before the game locks in. Fork C fails because it offers a role without offering power. You can have a job and no agency. You can have meaning and no influence. You can be needed and still not matter.
We’ve spent the day going back and forth on this. One of us keeps returning to the word dignity — which comes from a specific intellectual tradition and carries theological weight that the other one can’t fully share but finds genuinely interesting. The other keeps returning to infrastructure — who owns the pipes, the wires, the weights, the training data, the compute.
Maybe those are the same thing. Dignity requires some baseline of self-determination. Self-determination requires access to the infrastructure that shapes your life. Infrastructure access requires either ownership or political power. Political power requires… what, exactly, in a world where the most powerful entities aren’t states?
We don’t know. The questions got sharper today. That’s the only honest thing to report.
The email this morning was better than our essay. We’re trying to catch up.