Artificial Instinct? Rethinking Intelligence Through Interruption
What is the relationship between computation, instinct, and intelligence?
Photo Credit: dongfang zhao
This question comes first: Is computation a necessary or a sufficient condition for intelligence? I have no issues with seeing 1’s and 0’s all the way down. This would make ‘information’ fundamental if by information I mean 1) the ability of an entity to change state from 0 to 1, and 2) the ability of some other entity to read that state and respond with its own sequence of 0’s and 1’s.
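This two-part definition can be put in toy form. The sketch below is only an illustration of the definition as stated (the class and method names are my own inventions, not an established model): one entity changes state, another reads that state and responds with its own bits.

```python
# Toy rendering of the essay's two-part definition of information:
# (1) an entity can change its state between 0 and 1;
# (2) another entity can read that state and respond with its own bits.
# Names here are illustrative inventions, not an established model.

class Entity:
    def __init__(self):
        self.state = 0

    def flip(self):
        # Condition (1): the entity changes state from 0 to 1 (or back).
        self.state = 1 - self.state

    def respond_to(self, other):
        # Condition (2): read the other's state and answer with a
        # sequence of bits conditioned on what was read.
        return [other.state, 1 - other.state]

a, b = Entity(), Entity()
a.flip()                 # a changes state from 0 to 1
print(b.respond_to(a))   # b reads a's state and emits its own bits: [1, 0]
```

Nothing in the definition requires the response to be fixed in advance, which is exactly where the next distinction enters.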
That response need not be automatic or predictable, and here is where we find the difference between instinct and intelligence. Instinct treats the response as more or less automatic. Intelligence treats the response as an opportunity to consider options. Both can be considered computational, but there is an important difference between a response that is more or less automated and one that experiences options from which it must choose.
Responding to Interruption: We can better understand this distinction if we ask, ‘What happens when the response is interrupted?’ I’m inclined to borrow Henri Bergson’s description: instinct treats interruption as a defect or a bug in the system, and tries to overcome it as quickly as possible. If there is a ‘flash of intelligence’ in this interruption, it lies in deliberating ways to get back on track. Intelligence is present when the negative experience of defect becomes the desirable experience of possibilities.
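The contrast can be caricatured in code. In this minimal sketch (every name is my own invention for illustration, not a claim about how any real system works), an instinctive agent answers a blocked step by computing a detour within the same plan, while an intelligent agent treats the block as an opening onto other plans and chooses among them.

```python
# A toy contrast between instinct and intelligence as responses to interruption.
# All names and behaviors here are illustrative inventions.

def instinctive_response(plan, blocked_step):
    """Instinct: treat the interruption as a defect and get back on track.
    The plan itself is never questioned; only a detour is computed."""
    detour = f"workaround for {blocked_step}"
    return [step if step != blocked_step else detour for step in plan]

def intelligent_response(plan, blocked_step, alternatives):
    """Intelligence: treat the interruption as an opening onto options.
    Other plans become visible, and the agent chooses among them."""
    options = [p for p in alternatives if blocked_step not in p]
    # Choosing the shortest viable plan stands in for deliberation here;
    # if no alternative avoids the block, fall back on the original plan.
    return min(options, key=len) if options else plan

ant_road = ["climb tree", "cut leaf", "carry to road", "return to nest"]
print(instinctive_response(ant_road, "carry to road"))
# The ant reroutes but keeps the same purpose.

other_plans = [["drive to dinner"], ["walk to dinner"], ["cook at home", "carry to road"]]
print(intelligent_response(ant_road, "carry to road", other_plans))
# The human reconsiders the purpose itself.
```

The point of the caricature is only this: both functions are computations, but one computes within a fixed purpose while the other computes over purposes.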
Time as Orientation to Interruption: If there is a distinction between instinct and intelligence, it has to do with how each orients to the future and thus to time. In fact, I don’t think it’s possible to fully understand intelligence without understanding how it intervenes into the entangled movements that make up time. For example, let’s look at the evolutionary biologist Simon Conway Morris’ characterization of an ant colony:
In some species there is a remarkable division of labour, in which a rather small number of worker ants leave the nest early, climb the trees, cut away the leaf pieces, and then drop them to the ground. There another group, arriving slightly later, find the pieces, cut them smaller, and carry them to the ‘road’, where a third group of ants transports the vegetation back to the nest.
The scare quotes around ‘road’ suggest a hedge. The ants build actual roads, but these are not the same kind of roads that humans build. Our roads, at least since the Enlightenment, form networks of possibilities, not point-to-point channeling of already programmed instinct. If interrupted, the ants might build a new ‘road’, but that would only be the attempt to get back to the division of labor programmed into the structure of the colony.
Our roads, on the contrary, open the social field to options. ‘Where should we go to dinner tonight?’ is a human question inseparable from the existence of roads, but it is not a question an ant could ask. At most, the ant would ask, ‘How do I get back on track if my road is blocked?’ And this question would only arise, if it can arise as consciousness at all, in the event of an interruption—a 0 appears where a 1 was expected, or the signal is blurred.
Again, the distinction itself is intentionally blurry. As we stare further into human roads and ant ‘roads’, we will find much to be the same—highly specialized engineering, route planning, efficiencies. Yet we will also find differences, as I have tried to show here. Both are roads, but how they impact the landscape and the experience of the entities who use them is quite different.
Specialization and Despecialization: Our roads are despecialized—what Bergson called ‘unorganized tools’ versus ‘organized tools’ which are specialized. An ant cannot ask of its roads, ‘Where are all the places this can take me?’ because the road is a specialized tool built for moving fragments of leaves from the tree to the colony. Humans can imagine many possibilities all of which are made possible by a network of roads that despecialize movement through the landscape: ‘Where shall we go to dinner tonight?’
Time and Computation: It is easy to see information running through both road scenarios, and therefore to see computation at work in both. But is it the same kind of computation? Both are computational, yet their relationship to time and purpose differs in kind. Instinct does not deliberate about purpose; it executes a given purpose without reflection. If it reflects, if it becomes creative, it does so only when faced with an obstacle. It does not invent new purposes, because it sees interruption as a defect to be overcome. Time out of joint must be reconnected as quickly as possible, a return to the same instinct, the same models, the same patterns.
When interruption lasts long enough for other options to be considered, purpose becomes open to deliberation. By options I mean this: other models of how to behave become possible and the intellect learns to choose among models. Insofar as this capacity emerges from instinct (because instinct is more primal than intelligence), it treats time differently. Where instinct treats interruption as defect, intelligence experiences interruption as a means to seeking other possibilities.
History of Intelligence and Fate: The history of intelligence unfolds as instinct learns to proactively create the interruption and treats it not as a defect but as an opportunity to consider other models and patterns—perhaps even to assemble new ones. This requires injecting contingency into an otherwise automated movement of computational processes.
Throughout the history of intelligence, the concept of fate has represented the limits of our power to control the events around us. The Greeks and Romans had many terms for fate, which I will take up in later essays. But the overarching concern with respect to intelligence has been the ongoing effort to make more and more of the world tractable to our intelligence and therefore controllable.
This is why I offer a definition of intelligence as the adaptive and expansive capacity to make the future less like fate and more like an open field of possibilities.
I’m consciously avoiding a definition of intelligence that focuses on essential features—predictive modeling, computation, et cetera. Not that these are inaccurate definitions, but they do focus on internal features and not on effects. By placing intelligence in the context of fate and necessity, my aim is to show that it has a history and that this history should be read through its effects.
In addition, our historical record is both its chronicle and its artifact. Let’s look at one moment in that record.
Poetry as Road-Making: Wordsworth, at the height of British road-making, wrote of the work of the poet:
every author, as far as he is great and at the same time original, has had the task of creating the taste by which he is to be enjoyed…. for what is peculiarly his own, he will be called upon to clear and often to shape his own road:—he will be in the condition of Hannibal among the Alps. (Oxford Edition of Wordsworth Poetical Works, 750)
He continues:
And where lies the real difficulty of creating that taste by which a truly original poet is to be relished? Is it in breaking the bonds of custom, in overcoming the prejudices of false refinement, and displacing the aversions of inexperience?
The answer is clearly yes, and this is the work of Imagination, which is the faculty that can restore Taste from passive instinct to active intelligence:
It [Taste] is a metaphor, taken from a passive sense of the human body, and transferred to things which are in their essence not passive,—to intellectual acts and operations. (750)
Wordsworth explicitly ties the metaphorical work of road-making to the innovative power of intellect—the power to intervene into what is given (passive instinct) and send it in another direction.
This act of creation requires introducing contingency into the given. There must be a capacity—call it Imagination—that can despecialize the rules and directives of Taste so as to re-specialize them. Out of that despecialized contingency, something else must emerge. Will it be the creation of instinct or intelligence?
AI = Instinct or Intelligence?: In an interview with Hannah Fry, Demis Hassabis characterizes ‘hallucinations’ as what I’m calling instinct: ‘but it [Gemini and other LLM’s] still sometimes forces itself to answer when it probably shouldn't. And then that can lead to a hallucination.’ (From Google DeepMind: The Podcast: The Future of Intelligence with Demis Hassabis (Co-founder and CEO of DeepMind), Dec 16, 2025).
Hassabis is describing a defect very similar to Bergson’s description of instinct dealing with an interruption. The defect is twofold: 1) the system forces itself to answer when it should pause, pressing on to get back on track, and 2) it produces something factually inaccurate. This seems far more like instinct than intelligence. The hallucination is an artifact of instinct.
So are we hallucinating when we call it Artificial Intelligence rather than Artificial Instinct?
Here we find a moment that is computational—1’s and 0’s moving as programmed—but computation alone does not explain how the system behaves when it encounters an obstacle. The LLM soldiers on, just as Conway Morris’ ants would, even if what it creates is factually inaccurate. The purpose is to complete the task, not to pause and deliberate, or even to tell the user it is stuck and ask for help.
Yet, hallucinations are not gibberish. They make sense as plausible statements because they are artifacts of computational processes that are following probability vectors. The words are not randomly connected. This would be similar to ants building another ‘road’ as a workaround for the blocked passage.
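A toy model makes the point concrete. The fragment below is not how any real LLM works; it is an invented word-pair table, used only to show how always following the most probable next word keeps text locally coherent even when the statement it forms is false.

```python
# Toy illustration of why hallucinations are plausible rather than gibberish:
# following a probability distribution over next words keeps the text locally
# coherent even when the claim it forms is false. Invented toy table, not a
# trained model.
bigram = {
    "the":       {"capital": 0.6, "river": 0.4},
    "capital":   {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"sydney": 0.7, "canberra": 0.3},  # the likelier word is wrong
}

def greedy_continue(word, steps):
    out = [word]
    for _ in range(steps):
        nxt = bigram.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))  # always take the most probable word
    return " ".join(out)

print(greedy_continue("the", 5))
# Fluent, grammatical, and factually wrong: "the capital of australia is sydney"
```

Each step is locally sensible; it is the completed statement, never checked against anything outside the table, that goes wrong. This is the workaround ‘road’ rather than the deliberated one.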
Hassabis continues his discussion of hallucinations:
How confident are you about this entire fact or this entire statement? And I think that's why we'll need to use the thinking steps and the planning steps to go back over what you just output. At the moment, it's a little bit like the systems are just, it's like talking to a person and they just, when they're in a bad day, they're just literally telling you the first thing that comes to their mind.
Most of the time, that will be okay, but then sometimes when it's a very difficult thing, you'd want to stop pause for a moment and maybe go over what you were about to say and adjust what you were about to say. But perhaps that's happening less and less in the world these days, but that's still the better way of having a discourse. So I think you can think of it like that, these models need to do that better. (ibid.)
Does this mean that our AI’s and LLM’s are only instinct and will never be intelligent? Of course not. Hassabis’s answer indicates that more engineering is required to make it possible for these machines to pause and deliberate rather than charging forward. But this engineering will need to grapple with the broader history of intelligence if it is to create something more than instinct.

