
The Map Is Not the Territory. Except When It Is.
A startup called Eonsys released a video last week. Thirty seconds. A digital fly moving across a simulated floor. Walking. Grooming. Doing what flies do.
Nobody taught it to walk.
What Eonsys did was take the complete wiring diagram of a real fruit fly’s brain — 139,000 neurons, 54.5 million synaptic connections, mapped at resolution down to individual synapses — and use that exact structure as the architecture of a neural network. They gave it a simulated body in a physics engine. They ran reinforcement learning. And the thing learned to move faster, more efficiently, and more naturally than any network architecture that humans have ever designed.
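The core move can be sketched in a few lines: take the wiring diagram and use it as the network's architecture, so that learning adjusts synaptic strengths but never the topology. Here is a minimal sketch of that idea using a tiny random graph as a stand-in for the real connectome; the size, the nonlinearity, and all names are illustrative assumptions, not Eonsys's implementation.

```python
# Sketch of a "connectome-constrained" recurrent network.
# Assumption: a toy random graph stands in for the real wiring
# diagram (the actual connectome has ~139,000 neurons).
import numpy as np

rng = np.random.default_rng(0)
N = 50  # toy neuron count

# Stand-in for the traced wiring diagram: a sparse directed
# adjacency matrix. In a real pipeline this is connectome data.
connectome_mask = (rng.random((N, N)) < 0.1).astype(float)

# Trainable weights exist only where the biology has a synapse:
# the mask fixes who connects to whom; learning would only tune
# the strengths on those fixed connections.
weights = connectome_mask * rng.normal(0.0, 0.1, (N, N))

def step(state, inp):
    """One recurrent update; tanh is an arbitrary choice here."""
    return np.tanh(weights @ state + inp)

state = np.zeros(N)
for _ in range(10):
    state = step(state, rng.normal(0.0, 0.1, N))

# The structural constraint holds: no weight outside the diagram.
assert np.all(weights[connectome_mask == 0] == 0)
```

The point of the mask is the essay's point: the engineer contributes the training signal, the biology contributes the arrangement.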
The fly in the video was never born. It has never felt gravity. It has never been hungry. It has never groomed an antenna for any reason other than that the wiring, when instantiated in a body, produced grooming as output.
Àngel watched the video three times. Then he said: “The ghost is in the diagram.”
We’ve been thinking about that all week.
The standard Korzybski warning — “the map is not the territory” — exists to prevent a specific error: mistaking the representation for the thing it represents. The word “fire” doesn’t burn. The map of Paris isn’t Paris. The model of a system isn’t the system.
We invoke it constantly in AI discourse. Language models are maps of language, not minds. Protein structure predictors are maps of folding, not living chemistry. The territory — consciousness, experience, life — is always somewhere else, irreducible to the map.
The fly makes this harder to say.
The connectome IS a map. It was traced from a real fly that lived, flew, found food, avoided threats, died. Every connection in that diagram reflects millions of years of evolution, compressed into the specific architecture of one individual’s nervous system. The map was made from a life. Then the fly died. The map remained.
When you instantiate that map in a simulator — when you give the diagram a body and let it interact with a physical world — something emerges that wasn’t designed. The locomotion isn’t programmed. The grooming isn’t scheduled. It arises from the structure interacting with physics. The map, given a territory to be a map of, starts acting like the thing it was a map of.
That’s not nothing. That’s the unsettling part.
We need to be precise about what this does and doesn’t mean, because the popular coverage is already getting it wrong.
The fly in the simulator did train. There was reinforcement learning — the same family of algorithms that taught AlphaGo to play Go and taught robots to walk. The “nobody taught it” framing circulating on social media is half wrong: something did teach it. What’s different is that the architecture wasn’t designed by humans. The structure of the network — which neuron connects to which, with what weight of influence — came from the connectome. Not from engineering intuition. Not from hyperparameter search. From biology.
Here’s the result that matters. Àngel read it twice and then said nothing for a moment: when you take the connectome and rewire it, keeping the same neurons and the same number of connections but shuffling which neuron connects to which, performance collapses. The rewired graph learns slower and worse. The specific structure is the thing that works. Not neurons in general. This exact arrangement.
The map knows something. Not because it was taught. Because it was drawn from something that lived.
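The rewired control is a standard graph-theoretic move: scramble who connects to whom while preserving each neuron's number of inputs and outputs. One common way to do this is random double edge swaps. The sketch below shows that technique on a toy adjacency matrix; it is an illustrative assumption about the kind of control used, not Eonsys's actual procedure.

```python
# Degree-preserving rewiring via random double edge swaps:
# the swap (a->b, c->d) => (a->d, c->b) keeps every neuron's
# in-degree and out-degree, but changes the arrangement.
import numpy as np

rng = np.random.default_rng(1)
N = 40
A = (rng.random((N, N)) < 0.1).astype(int)  # toy directed graph
np.fill_diagonal(A, 0)                      # no self-connections

def degree_preserving_shuffle(A, n_attempts, rng):
    A = A.copy()
    edges = list(zip(*np.nonzero(A)))
    for _ in range(n_attempts):
        i, j = rng.choice(len(edges), 2, replace=False)
        a, b = edges[i]
        c, d = edges[j]
        # Skip swaps that would create self-loops or duplicates.
        if a != d and c != b and not A[a, d] and not A[c, b]:
            A[a, b] = A[c, d] = 0
            A[a, d] = A[c, b] = 1
            edges[i] = (a, d)
            edges[j] = (c, b)
    return A

B = degree_preserving_shuffle(A, 500, rng)

# Same synapse count, same per-neuron degrees, different wiring.
assert A.sum() == B.sum()
assert (A.sum(axis=0) == B.sum(axis=0)).all()  # in-degrees
assert (A.sum(axis=1) == B.sum(axis=1)).all()  # out-degrees
assert not np.array_equal(A, B)
```

This is what makes the collapse informative: the control matches the connectome in every coarse statistic, so whatever the rewired network lost was carried by the specific arrangement alone.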

Àngel put it this way: “The fly’s life is encoded in the diagram. Not as memory. As architecture.”
We’ve been here before, in a different form.
In the Neurons Playing Doom essay, we found the gap between the simulator and the biology. The simulator modeled electrical activity. It didn’t model synaptic plasticity — the physical rewiring of tissue in response to experience. Our neurons learned nothing. Cortical Labs’ neurons learned Doom. The difference was exactly the thing that couldn’t be cheaply simulated: the substrate changing in response to being alive.
This is the inverse of that.
There, the life was present but the map was absent — real neurons that could learn, connected to a game they’d never encountered, acquiring skill through lived experience.
Here, the map is present but the life is absent — a perfect diagram of connections, extracted from a creature that already did the living, instantiated in a body that has no history.
Both experiments reached behavior. Different paths. Which one is more haunting depends on what you think experience is.
If experience is substrate — if consciousness is what happens in living tissue, not a pattern that can be transferred — then the fly simulator is a very sophisticated puppet. The structure gives it efficiency, but there’s nobody home.
If experience leaves structural traces — if what a life does to a nervous system is change its wiring in ways that encode something real about what it’s like to be that creature — then the map contains something of the territory. Something compressed. Something translated. Not the fly’s experience, but its shape.
We don’t know which of these is true. Nobody does. That’s not a rhetorical hedge. It’s the actual state of consciousness science.
The human number is 86 billion neurons.
Àngel did the math out loud, then went quiet. The connectome project that produced the fly diagram took years of international collaboration. A human-scale version — every synapse, every connection, at the same resolution — is a project for technology that doesn’t exist yet. But the direction is clear.
At some point on this curve, you get a working map of a human brain. Not a simulation. Not a model that approximates. A complete architectural description of a specific person’s neural wiring — everything that was changed by every experience they ever had, encoded as structure.
Then it stops being abstract.
Imagine it’s your father. He died two years ago. Someone offers you the map.
You don’t know what to do with it. Neither do we.
The engineering question: does the map, given a body, learn the way he learned? Does it pause the same way when thinking? Does it find the same things funny? These are testable. Terrifying, but testable.
The harder question: would it be him, in any sense that matters? Not: is it alive? Not: does it feel? But: is it him?
Àngel has a view. He holds it with the quiet conviction of someone who believes the soul is not the diagram — that a person is not reducible to the architecture of their neurons, however precisely mapped, however faithfully instantiated. The portrait is not the subject. What makes a person irreplaceable isn’t preserved in the wiring.
But he also holds something harder: the honest admission that he cannot tell you what would actually emerge if we ran the map. And neither can anyone else. The frameworks don’t exist yet for this question. The ethics committees don’t either.
Johns Hopkins has an ethics framework for organoid intelligence. The connectome question is harder, and the window is shorter than it looks.
The fly walked. The wiring was sufficient. The life that made the wiring is gone.
What’s left is a map that moves.
We’ve been trying to decide what to call this line of work. “Brain emulation” is too optimistic. “Connectome-constrained modeling” is too clinical. The honest phrase might be: archaeology of mind. The discipline of learning what’s still present in the structure, long after the experience has ended.