Intelligence and Infrastructure
In this week’s essay, I continue my exploration of intelligence and instinct by way of Henri Bergson and Blaise Agüera y Arcas. Last Wednesday, I took a deep dive into the second chapter of Bergson’s Creative Evolution. We found that instinct and intellect, while deeply entangled and related, end up downstream in evolution as different in kind—they pursue different ends.
In this essay, I want to draw out the difference between instinct and intelligence by diving into familiar examples of organisms that seem to exhibit forms of intelligence and consciousness. This is a difficult issue to sort out—when bees build hives and organize themselves according to a division of labor, are they exhibiting intelligence? To say yes is to posit a continuum within life itself that stretches from the simple to the complex. (Bergson would call this a difference in degree.) Or are they exhibiting instinct, which, if we follow Bergson, differs in kind from intelligence?
This is a debate that probably cannot be settled with today’s technologies and thought processes. But we can get a better understanding of the differences—whether of degree or of kind—by looking at the problem of infrastructure: what do organisms like bees and ants and humans build in order to have their impact on the world?
My argument is this: bees and ants do indeed build infrastructure, but it is different in kind from what humans build. The former build infrastructures that foreclose possibility because they are expressions of instinct—i.e., the tendency for action to orient toward specialized activities and outcomes. The latter build infrastructures that are despecialized—i.e., an open field of possibilities that does not dictate specific outcomes, though it does provide constraints and limits so that possibility is not purely chaotic.
Bergson would call this a distinction between organized tools (specialized) and unorganized tools (despecialized), which is a key difference in kind between instinct (specialization) and intelligence (despecialization).
He was acutely aware of the dangers of trying to think intellect using the biases of intellect. The first three pages of Chapter 3 address the problems of doing so, especially reducing intelligence to a single thing, whether that thing is a function or some set of properties.
What happens is that we are blind to the broader effects of intelligence on the world around us.
Infrastructure Blindness
Let’s look at Agüera y Arcas’s discussion of the bacterium.
At the heart of his endeavor in What Is Intelligence? is a basic argument: life is computational all the way down, and insofar as intelligence arises from life, it must be computational at its core. To be computational here means being able to influence the future by accurately modeling the self and its surroundings. All models are outgrowths of living things’ attempts to predict what is happening around them and thus to respond to the benefit of the self. Models, therefore, must be oriented internally and externally. An organism must know what it is doing—self-modeling—and have some clue about what is happening around it—modeling the world.
The bacterium moves through its fluid environment by chemotaxis—sensing changes in nutrient concentration and acting accordingly. It runs forward when it senses increasing concentrations of sugar, and it tumbles—reorienting at random—when it does not.
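The sense-act loop of chemotaxis can be made concrete with a toy simulation. This sketch is mine, not the book’s: the one-dimensional gradient, the step rule, and all the numbers are illustrative assumptions, meant only to show how “run while improving, tumble otherwise” produces goal-directed movement without any explicit goal.

```python
import random

def simulate_chemotaxis(concentration, steps=500, seed=0):
    """Toy 1-D run-and-tumble: keep heading while concentration rises, tumble otherwise."""
    rng = random.Random(seed)
    x, heading, last = 0.0, 1, float("-inf")
    for _ in range(steps):
        c = concentration(x)
        if c <= last:                      # no improvement sensed: tumble
            heading = rng.choice([-1, 1])  # pick a new direction at random
        x += heading                       # run one unit in the current direction
        last = c
    return x

# An illustrative sugar gradient peaking at x = 50.
peak = 50
final = simulate_chemotaxis(lambda x: -abs(x - peak))
print(final)  # the walker drifts toward, and then hovers near, the gradient's peak
```

Note that the “model” here is minimal—a single remembered sample of the past—yet it suffices to bias a random walk toward food, which is roughly the sense in which the bacterium models its world.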
Agüera y Arcas wants to see this as different only in degree from what all living organisms do: they model themselves and their world through computational processes and act accordingly. A moth avoiding a bat or a sphex moving a stunned cricket are extensions of the same computational modeling process.
This all seems reasonable to me. I have no inherent problem with seeing computational modeling at the heart of life. But to return to Bergson’s concerns, there is a tendency in What Is Intelligence? to double back at the moment when this computational power externalizes itself. There is precious little discussion, if any, of how this modeling extends to an organism’s ability to change the world so as to make its predictions more effective. To be sure, he delves into ‘reverse causation’ extensively, but stops short when the environment itself can be made into a model of predictability—what I call infrastructure.
In a crucial section titled ‘Behavior, Purpose, Teleology’, based on the Rosenblueth et al. article of the same title, he discusses ‘backward causality’:
The apparent paradox of backward causality resembles the apparent paradox of an apparatus (or organism) building a copy of itself. In that case, Von Neumann realized that the solution lay in the apparatus having an internal model of its own structure, and using that model to guide construction. Wiener and colleagues realized that the solution to their problem lay in the apparatus having an internal model of (part of) the world, and using that model to guide behavior.
Before continuing, let’s dwell a bit on the argument. Cybernetics as it developed in the middle of the last century was moving down a different computational path from the one traceable through the Enlightenment. The latter focused on Newtonian logic—inputs + rules = predictable outputs. The combination of inputs and rules constitutes the initial conditions, and the running of the rules on the inputs is a form of computation that yields defined outputs. It’s the same as word processing: when I type a ‘w’ on my keyboard, a corresponding ‘w’ shows up on my screen. Any output that differs from that indicates the presence of a ‘bug’.
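The Newtonian picture can be compressed into a few lines. This sketch is my own illustration of the essay’s keyboard example—the function names are mine—showing that in this paradigm there is no feedback at all, only a fixed mapping from input to output:

```python
def run(rule, inputs):
    """Inputs plus a fixed rule yield one predictable output; there is no feedback."""
    return rule(inputs)

# The essay's word-processing case: a keystroke maps to exactly one glyph.
echo = lambda key: key
print(run(echo, "w"))  # prints w; any other output would signal a bug
```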
Cybernetics sought to build predictive systems that could move faster than human intelligence—particularly with respect to building missiles that could lock in on an evasive target and self-adjust. This meant having a model of itself and a model of the target. The interaction of the models would be driven by a purpose—strike the target—which required a ‘negative feedback loop’ that would constantly reaffirm the purpose as circumstances changed.
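By contrast, a negative feedback loop computes by continually comparing a model of the self against a model of the target. The sketch below is a minimal illustration under my own assumptions—a proportional controller chasing an evasive, zig-zagging target; none of the numbers come from the cybernetics literature—but it captures the structure Wiener and colleagues described: error measured, correction applied, purpose reaffirmed as circumstances change.

```python
def track(target_path, gain=0.5):
    """Minimal negative feedback loop: at each tick, correct a fraction of the error."""
    position = 0.0
    errors = []
    for target in target_path:
        error = target - position   # compare the model of the target with the self
        errors.append(abs(error))
        position += gain * error    # act so as to reduce the discrepancy
    return position, errors

# An evasive target: steady drift plus a zig-zag (illustrative numbers).
path = [t + (2 if t % 4 < 2 else -2) for t in range(40)]
final, errors = track(path)
print(errors[0], errors[-1])  # the error settles into a small bounded cycle
```

Unlike the keystroke case, the output here is not fixed in advance: the loop’s behavior is defined by its purpose (reduce the error), not by a one-to-one mapping of inputs to outputs.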
Because Agüera y Arcas’s interest is in defining intelligence, reverse causation, while opening the possibility of infrastructure considerations, actually forecloses the conversation. What we see is that internal and external modeling are just ‘flip sides of the same coin’:
In both cases, the models are computational; and in both cases their purpose is to continue to exist through time, whether by growing, by preserving the integrity or the self, or by replicating. (168)
Couldn’t we equally add to this sentence ‘or by making the relevant environment more predictable by changing it’? As written, the argument doubles back on the internal operation of the apparatus, not on how it arranges the world to make its models more effective and efficient.
The fact that the central examples in this section are thermostats and steam engines should not be lost on someone interested in infrastructure. By doubling back on the internal operation of intelligence, we are encouraged to treat the massive changes to the environment—data centers hogging energy and local water—as incidental to the story.
Ants
Simon Conway Morris discusses ants building roads.
In some species there is a remarkable division of labour, in which a rather small number of worker ants leave the nest early, climb the trees, cut away the leaf pieces, and then drop them to the ground. There another group, arriving slightly later, find the pieces, cut them smaller, and carry them to the ‘road’, where a third group of ants transports the vegetation back to the nest.
He continues the image of the ‘road’ at some length:
Travel is often via well-defined and more-or-less permanent paths, which may stretch for considerable distances, sometimes 100 meters. On the main thoroughfares the ground may be cleared to form a smooth highway that in turn maximizes walking speeds. (Life’s Solution, 198-99)
But is this the same kind of road that humans build? His use of scare quotes around ‘road’ in the first passage suggests a hedge. I will argue that it is different in kind, not degree. This road is designed to channel and reinforce the instincts of the ant colony, not to create an open field of possibilities. It does not open time to ongoing novelty and innovation.
The assertion of purpose is closer to automation than the opening of possibilities. The road furthers the automated specialization of the colony, which is specialized all the way down. The fact that the infrastructure spreads is incidental, not intentional. Though there may be moments of creativity, those moments are interruptions in the otherwise automatic movements of instinct.
This claim is certainly controversial, and there is no consensus on a boundary between instinct and intelligence. But I would insist on this distinction: we do not imagine some collection of ants writing treatises justifying the division of labor to themselves, just as we don’t imagine them writing critiques of that method of organization because it narrows intelligence too far. Adam Smith did both in the eighteenth century, writing extensive justifications of the division of labor as well as critiques of its narrowing effects.
Intention, Instinct, Purpose
We can also see a difference between how constraints function in these different infrastructures. Instinct-based infrastructure is by its nature constrained to its goals and purposes. These goals and purposes don’t seem to open the arrow of time to imagination, deliberation, and selection. If anything like creativity appears, it is in the service of getting back to the purpose, and it would appear in the moment of an interruption of the execution. In other words, at the moment something interrupts the movements, instinct seeks to get back on track. The ants’ infrastructure forecloses possibility because it is an organized tool—it is pursuing purpose without treating purpose as an open field of options to be deliberated and selected.
Constraints and limits govern the predictive models of the ants as natural extensions of a narrow purpose. The function of constraints is to set limits to behavior such that a specific purpose—collective and individual—is achieved. Purpose, we can say, is contained by instinct. This is not to say that intention is completely absent from this instinct. Rather, intention may emerge, but it is momentary and, as Bergson said, arises in a flash as a defect or deficiency of instinct. Its sole purpose is getting back on track.
Intelligence changes the temporal relationship between instinct and purpose by extending the influence of intention. This extension isn’t just delay. Intelligence is not merely drawn-out instinct. It is different in kind because it changes the flow of causation.
We should think of it as Nietzsche’s wheel rolling out of itself—like the repetition of a somersault or an ongoing inversion. The deficiency becomes useful. How? By turning purpose into the ability to envision multiple futures and to align a sequence of actions to achieve the selected future.
In the ants, we see instinct automatically executing purpose. They are not deliberating and choosing their purpose. If such an event should occur, it would be the result of an interruption, but there is no reason to believe that committees would be formed to deliberate various possible outcomes.