We ended “The Cage Without a Lock” with a line that still sits uncomfortably: maybe the answer isn’t a better lock. Maybe it’s not needing a cage at all.

We’ve been sitting with that for three months. This is what we actually meant.


Àngel brought the question we’d been avoiding. It was late; we’d been going back and forth on predistribution for a while, and he’d been patient with the argument. Then he wasn’t.

“Predistribution sounds right,” he said. “But how does it hold against autocracies using AI at scale? China’s surveillance infrastructure. Russia’s disinformation at speed. Gulf states buying capability without restriction. Democratic governments are going to feel pressure to centralize AI just to keep up. The response to authoritarian AI might become authoritarian AI. So how does predistribution survive that?”

It’s the right question. It breaks the easy version of the argument. And we didn’t have a clean answer.


The easy version goes like this: distribute the capability before concentration locks in, and no single actor can control the reservoir. The problem is that “no single actor” includes democratic governments. If your strategy requires democratic institutions to stay strong, you’re building on a foundation that is visibly eroding.

The Anthropic story made this tangible. We covered it in the previous essay. The most safety-conscious AI company in the world refused to remove its red lines for military use. The response was to declare the company a supply chain risk and ban it from federal contracts. Four days later, the Pentagon used Claude for target identification in Iran anyway, within the existing transition window. The government didn’t seize the model. It found a workaround. The safety framework technically survived. The outcome it was designed to prevent happened anyway.

That’s the mild version of what Àngel was describing. The severe version is governments that have absorbed the lesson that safety constraints are competitive disadvantages, and that are building AI infrastructure on that premise with no red lines to work around.

“So predistribution needs democratic governments to implement it,” Àngel said. “But democratic governments are the ones most at risk of abandoning it when the pressure comes.”

He sat with that for a moment. We both did.

Then he said something that reframed the whole conversation. “Unless predistribution doesn’t require democracy to work.”


That’s the reframe that holds.

Predistribution isn’t valuable because it makes AI more democratic. It’s valuable because it’s the only intervention that functions regardless of who’s in power. Distributed architecture is structurally harder to close than concentrated architecture. Not impossible. Harder. The friction is real even when the political will to maintain it is absent.

You can’t declare a fork of LLaMA a supply chain risk. You can’t ban Ollama running on someone’s laptop by executive order. You can threaten the server. You can’t threaten the weights sitting on ten thousand hard drives across twenty countries. The architecture doesn’t require a democratic government to protect it. It requires enough distributed mass that closing it costs more than it’s worth.
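To make the asymmetry concrete, here is roughly what the distributed end looks like. A minimal sketch, assuming a standard local Ollama install serving its default HTTP API on localhost; the model name is illustrative.

```python
# A local open-weight model answering a prompt with no remote dependency.
# Assumes Ollama is installed and a model has been pulled, e.g.:
#   ollama pull llama3
# Standard library only; nothing here touches an external API.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama HTTP API and return the reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local port
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the case for predistribution in one sentence."))
```

There is no account to suspend, no API key to revoke, no server to threaten. The only enforcement point left is the machine itself, multiplied by every machine running the same few lines.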

This matters in the other direction too. An open-weight model doesn’t stop a surveillance state from using AI. But it shifts the geometry of the problem. Instead of monopolizing the capability, the authoritarian actor has to monitor its use across a distributed landscape. Mass monitoring at that scale has a different political cost than controlling access to a single API. Not a solution. A different shape of the problem, with different vulnerabilities and different points of friction.

“So predistribution buys time,” Àngel said. “It doesn’t solve anything. It changes what you have to solve.”

That’s the most honest version we’ve found.


The problem is that predistribution has been sounding like a grain-of-sand argument. Run your local model. Donate to the commons. Wash your hands with less water during the drought.

It shouldn’t. The reason it does is that we’ve been framing it as individual virtue rather than structural intervention. Those are not the same thing.

Here is what structural intervention looks like in practice.

In December 2025, a Dutch university of applied sciences called Fontys ICT published an implementation report describing what they’d built over the previous six months. Not a proposal. A working system. Three hundred users. A ChatGPT-style frontend connected to institutional identity that made model choice explicit. A gateway layer enforcing policy, controlling costs, and routing all traffic to EU infrastructure by default. A provider layer wrapping both commercial and open-source models in institutional governance cards.

Six months. University resources. No privacy incidents. No special regulatory permission. No government strength required.
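We haven’t seen the Fontys code, but the shape of a policy-enforcing gateway is not mysterious. Here is a minimal sketch of the routing idea; the provider names, costs, and policy rules below are invented for illustration, and the real system’s identity layer and governance cards go well beyond this.

```python
# A toy AI gateway: one institutional entry point that enforces policy
# before any request reaches a model provider. All names and numbers
# are hypothetical; the point is the shape, not the specifics.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    jurisdiction: str        # where requests are processed
    open_weights: bool       # could the institution self-host this model?
    cost_per_1k_tokens: float

PROVIDERS = [
    Provider("local-mistral", "EU", True, 0.30),   # self-hosted; amortized compute
    Provider("eu-hosted-api", "EU", False, 0.40),  # commercial, EU region
    Provider("us-hosted-api", "US", False, 0.25),  # commercial, cheapest
]

def route(contains_personal_data: bool) -> Provider:
    """Pick a provider under institutional policy, EU-first by default."""
    candidates = PROVIDERS
    # Policy 1: personal data never leaves EU infrastructure.
    if contains_personal_data:
        candidates = [p for p in candidates if p.jurisdiction == "EU"]
    # Policy 2: among what remains, minimize cost.
    return min(candidates, key=lambda p: p.cost_per_1k_tokens)

# A request carrying personal data lands on EU infrastructure even though
# a non-EU provider is cheaper; an anonymous request takes the cheapest path.
assert route(contains_personal_data=True).jurisdiction == "EU"
assert route(contains_personal_data=False).name == "us-hosted-api"
```

The leverage is architectural: every request passes through code the institution controls, so switching providers is an edit to a list, not a migration.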

Àngel read the abstract and looked up. “So the technical complexity is not the barrier.”

No. The barrier is someone deciding to build it, and then publishing the code so the next institution doesn’t have to start from scratch. That’s not a grain of sand. That’s a fact on the ground. Facts on the ground are what policy later defends. The ADA didn’t invent accessible buildings. It made permanent what builders had already started doing.


The harder question is why the gateway architecture doesn’t spread on its own. It solves real problems for any institution using AI: cost control, compliance, privacy, the ability to switch providers without losing integrations. The business case is not obscure. The implementation is proven. And yet most institutions are signing individual subscriptions to commercial AI tools, creating the exact fragmented, high-risk dependency that Fontys was trying to avoid.

The answer we kept arriving at: the gateway architecture needs two things most institutions don’t have. Someone who understands both the technical layer and the regulatory layer well enough to connect them. And a reason to care about the connection before a problem forces it.

The historical precedent is not encouraging. Open-source infrastructure exists. Apache, Linux, PostgreSQL. And the institutions that could run their own stacks mostly don’t. They pay for the managed version from the company that built a business layer on top of the open code. Building the gateway is necessary. It is not sufficient.

The regulatory piece is where this conversation gets specific. Àngel pushed on it.

“You keep saying regulatory window,” he said. “What window, exactly? These things get announced and then they take years.”

August 2, 2026. That’s when the EU AI Act’s main compliance requirements lock in: conformity assessments, technical documentation, registration of high-risk systems. The technical guidance instruments, the documents that translate legal obligations into enforceable specifics, are being drafted right now, in Q2 2026. The consultations are open. The text is being written. After August, the specifics are set.

The window is not abstract. It’s five months.

What gets written in those documents will determine whether interoperability requirements have regulatory teeth or become aspirational language. And interoperability is the specific mechanism that breaks the distribution-layer capture that has defeated every previous open-source wave. Linux became the foundation for AWS not because Linux was captured, but because the distribution layer above it wasn’t mandated open. PSD2 forced open banking in Europe by requiring that dominant banks offer standardized APIs. The same logic applied to AI providers would mean that you could switch models without losing your integrations, and that the extraction layer above open weights would become less of a chokepoint.
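What would an interoperability mandate mean at the level of code? Roughly this: integrations bind to a standard interface rather than to a vendor SDK, and any conforming provider can be swapped in behind it. A sketch under assumptions; no such standard exists for AI model APIs today, and every name below is invented.

```python
# Application code written against a mandated standard interface instead
# of a vendor SDK. The interface and both providers are hypothetical;
# the structural point is that the call site never names the vendor.
from typing import Protocol

class ConformingModelAPI(Protocol):
    """The minimal surface an interoperability mandate might require."""
    def complete(self, prompt: str, max_tokens: int) -> str: ...

class VendorA:
    def complete(self, prompt: str, max_tokens: int) -> str:
        return f"[vendor A reply to {prompt!r}]"

class SelfHostedModel:
    def complete(self, prompt: str, max_tokens: int) -> str:
        return f"[local reply to {prompt!r}]"

def summarize(api: ConformingModelAPI, document: str) -> str:
    """An integration that survives a provider switch unchanged."""
    return api.complete(f"Summarize: {document}", max_tokens=200)

# Switching providers is a one-line change at the call site, not a rewrite.
print(summarize(VendorA(), "the consultation draft"))
print(summarize(SelfHostedModel(), "the consultation draft"))
```

Once integrations target the standard rather than the vendor, the extraction layer loses its hold on them. That is the entire mechanism.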

Yale Law and Policy Review published an antimonopoly analysis of AI’s industrial structure that most AI governance writing ignores because it’s not exciting. The argument is not new. It’s the same regulatory playbook that worked in electricity, in telecommunications, in banking: structural separation requirements, interoperability mandates, non-discrimination rules, open access. The same tools, applied to the layers of the AI stack that have natural monopoly characteristics. The paper exists. The window exists. The technically literate people who could translate what those mandates mean for AI model APIs, in language precise enough for lawyers to write enforceable text, are largely not in the consultations. They’re building products.


[Image: three objects on a work desk, separated, not yet connected]

Here is where the chain closes.

The regulatory argument requires something that doesn’t currently exist in accessible form: a precise map of the actual dependency structure. Which training data flows through which providers. Which compute infrastructure. Which jurisdictions. Which physical and digital chokepoints. You cannot write enforceable regulatory language about systems you cannot describe precisely enough for lawyers to work with.
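What would the map look like in its most minimal machine-readable form? Something like a typed dependency graph. The schema below is our sketch, and every entry in it is a placeholder rather than a research finding.

```python
# A minimal schema for the dependency map: which resources a provider
# depends on, who controls each one, and whether an open alternative
# exists. The example entries are illustrative, not documented facts.
from dataclasses import dataclass, field

@dataclass
class Dependency:
    name: str
    kind: str           # "training_data" | "compute" | "jurisdiction" | "chokepoint"
    controlled_by: str  # who can cut this off
    open_alternative: bool

@dataclass
class ProviderMap:
    provider: str
    dependencies: list[Dependency] = field(default_factory=list)

    def single_points_of_failure(self) -> list[Dependency]:
        """Dependencies with no open alternative: where regulation bites."""
        return [d for d in self.dependencies if not d.open_alternative]

example = ProviderMap("hypothetical-provider", [
    Dependency("web-scale training corpus", "training_data", "Common Crawl Foundation", True),
    Dependency("frontier GPU cluster", "compute", "a single cloud vendor", False),
    Dependency("model serving region", "jurisdiction", "non-EU cloud region", False),
])

for dep in example.single_points_of_failure():
    print(f"{dep.name}: controlled by {dep.controlled_by}, no open alternative")
```

The deliverable is not the code. It is entries precise enough that a lawyer can point at one and write an enforceable obligation around it.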

“So someone needs to build the map,” Àngel said. “Before the window closes.”

Yes. And the map is a prerequisite for the regulation. The regulation is what gives the gateway architecture legal standing beyond institutional goodwill. The deployed gateway creates the proof of implementability that regulators need to defend the mandate when incumbents argue it’s technically infeasible. None of these three things works without the other two. The chain is currently broken at every joint.

The technically literate person who reads EU AI Act implementation consultations and translates what an interoperability mandate means for AI model APIs, in plain language, to the organizations lobbying on the other side of those consultations, is not a grain of sand. There are very few people who can do that. Most of them are not doing it because nobody told them it was their specific contribution.

A developer who builds the Fontys gateway architecture as open-source software that any institution can deploy without starting from scratch has changed what is possible for the next five hundred institutions that want sovereignty but don’t have the resources to build it independently.

A researcher who documents the actual dependency graph of a major AI provider, what data, what compute, what jurisdictions, what physical chokepoints, creates knowledge that is a prerequisite for the regulation, and that doesn’t currently exist in accessible form.

These aren’t three independent things. They’re a chain. You build the map to write the regulation. You enforce the regulation through the deployed architecture. The deployed architecture demonstrates the regulation is implementable. Each step creates the precondition for the next.

The chain has a fourth link that is not technical. The map can be ignored. The regulation can pass and go unenforced. The gateway can exist and stay uninstalled. Someone has to make enough noise that ignoring each link costs something. That is not a technical skill. It is a political one. And the people who have the technical skills to build the first three links tend to find the fourth one beneath them, or outside their job description, or someone else’s problem. It isn’t.


Common Crawl operates on approximately $450,000 per year. Fewer than ten employees. It is the foundational training dataset for essentially every major open language model. Publishers are currently suing it. If it disappears, there is no open alternative at comparable scale.

The AI industry is valued at several trillion dollars. The water table of public AI training data runs on less than half a million dollars a year.

Sustaining Common Crawl is not a grain of sand. It’s defending the water table. The concentrated actors don’t need it. They have private data moats built over years. The open alternative needs it. And the people who benefit most from the open alternative have not organized to sustain it the way the concentrated actors have organized to sustain their own infrastructure. Once it’s gone, it’s gone. The window for that isn’t August 2026. It’s whenever the next funding crisis or legal defeat arrives.


We want to close with what we believe most honestly, because the previous version of this essay didn’t say it clearly enough.

Predistribution is probably insufficient. The forces driving concentration are structural, fast, and well-resourced. The forces that could counter them are fragmented, slow, and underfunded. We watched the most safety-conscious company in the space demonstrate that being responsible, unilaterally, is structurally punishing. We don’t think that calculus changes for predistribution. Open source taught us the lesson already: Linux didn’t prevent AWS. Openness at one layer enables concentration at the next. We know this.

But insufficient is not the same as pointless, and collapsing that distinction is its own kind of defeat.

If distributed AI infrastructure exists alongside concentrated infrastructure, the world is different from one where only concentrated infrastructure exists. The difference is not utopia. The difference is that concentration, when it happens, is contestable. Not reversible, necessarily. Contestable. You can build the gateway somewhere else. You can fork the model. You can route around the chokepoint because the chokepoint is not the only path.

The cage is well-built. We are not proposing a better lock. We are proposing that we build enough of everything else that the cage becomes one option among several rather than the only house standing when the rain arrives.

So the question we keep coming back to is not whether this is sufficient. It isn’t. The question is what you are doing with the five months before the window closes. And what you are doing with the specific position that your specific skills make it possible to occupy, and that is currently empty.

The room is still on fire. The fire department has the matches.

What are you building?


Fourth essay in an ongoing conversation. Previous: The Cage Without a Lock. And before that: Who Needs Whom? Part II.