A New Kind of Science - Stephen Wolfram

Computation in Wolfram’s Framework

Stephen Wolfram defines computation not as symbolic manipulation on a blackboard or in a von Neumann machine, but as the fundamental mechanism by which the universe unfolds. Any process—whether it’s a weather system, a cell dividing, or a neural network firing—advances by applying discrete rules that carry one state to the next. These rules may be simple, but their outputs often cannot be compressed or shortcut. This is Computational Irreducibility (CI): the idea that the only way to know what happens at step n is to compute all n steps.

Importantly, Wolfram’s Principle of Computational Equivalence (PCE) claims that almost all systems whose behavior is not obviously simple are equivalent in computational sophistication. That is, the universe, a brain, and a Turing machine all occupy similar tiers of complexity. This is why PCE and ‘computational universality’ are essentially equivalent concepts: ‘any particular computer system or computer language can always be set up by appropriate programming to emulate any other one’ (NKS 642). Taken literally, what we today call a ‘computer’ is a combination of hardware that can run any possible computation and software programs that run specific rules/computations on that hardware.
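As a rough illustration of that hardware/software split (my own sketch in Python, not Wolfram’s code; all names are illustrative), the single update loop below can run any of the 256 elementary cellular automaton rules, with the rule number playing the role of the program. For a rule like 30, no general shortcut is known, so learning the state at step n means running all n steps: computational irreducibility in miniature.

```python
def ca_step(cells, rule):
    """One step of an elementary cellular automaton (periodic boundary).
    `cells` is a list of 0/1 values; `rule` is a Wolfram rule number (0-255).
    The same loop (the "hardware") runs any rule (the "program")."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(cells, rule, steps):
    """For rules like 30 there is no known shortcut: to learn the state after
    `steps` steps we have to run every one of them."""
    for _ in range(steps):
        cells = ca_step(cells, rule)
    return cells

# A single "on" cell in the middle, evolved under rule 30 for 20 steps.
initial = [0] * 41
initial[20] = 1
print(evolve(initial, 30, 20))
```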

Humans are Computationally Bounded Observers—we compute with the same sophistication as the universe, but we are limited in how many steps we can simulate, how much branching we can follow, and how much memory we can maintain. We see time linearly, process causality step-by-step, and interact with only a sliver of the “ruliad”—Wolfram’s name for the totality of all computational histories.

As I’ve discussed extensively before, there seem to be two central features that we as entities in the ruliad have. First, that we are computationally bounded. And second, that we believe we are persistent in time. Computational boundedness is essentially the statement that the region of rulial space that we occupy is limited. In some sense our minds can coherently span a certain region of rulial space, but it’s a bounded region. (Alien Intelligence and the Concept of Technology)

Three Modes of Human Intervention

Wolfram’s framework suggests that while the universe unfolds through vast irreducible computations, bounded observers like us are not helpless. Though we cannot override irreducibility, we can operate within and around it by isolating “slices of reducibility,” leveraging computational tools, and steering trajectories.

The question now is whether there are multiple modes of reducibility. I can imagine three ways humans can intervene in CI to achieve computational reducibility (CR).

Shortcut via Reducing Steps/Effort

This mode represents the classical scientific dream: to find elegant laws or formulas that allow us to bypass complex processes and jump directly to the outcome. These are the slices of computational reducibility Wolfram identifies—rare regions where natural computation behaves in ways that permit simplification.

And in a sense, over the past year, I’ve increasingly come to view the whole fundamental story of science as being about the interplay between computational irreducibility and computational reducibility. The computational nature of things inevitably leads to computational irreducibility. But there are slices of computational reducibility that inevitably exist on top of this irreducibility that are what make it possible for us—as computationally bounded entities—to identify meaningful scientific laws and to do science. — Stephen Wolfram, The Wolfram Physics Project: A One-Year Update (2021)

In practice, this is the realm of Newton’s laws, Keplerian orbits, and conservation principles—compressed descriptions that function reliably in bounded regimes. They do not contradict irreducibility; rather, they describe the tractable eddies within a turbulent stream.

Wolfram describes this kind of reducibility in terms of shortcuts:

Most of the time the idea is to derive a mathematical formula that allows one to determine what the outcome of the evolution of the system will be without explicitly having to trace the steps. (NKS, 737)

A shortcut is any attempt to ‘reduce the amount of computational work to be done to predict how some particular system will behave’ (737, emphasis added).
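To make the contrast concrete (a toy example of my own, not from NKS): rule 90, in which each cell becomes the XOR of its two neighbors, happens to lie in a slice of reducibility. Starting from a single black cell, whether a given cell is on after t steps can be read directly off the binary digits of t (Pascal’s triangle mod 2), so we can jump to step t without tracing the evolution. Rule 30 admits no such known formula.

```python
def rule90_step(cells):
    """One step of rule 90: a cell is on iff exactly one of its neighbors was on."""
    candidates = {x + d for x in cells for d in (-1, 1)}
    return {x for x in candidates if ((x - 1) in cells) ^ ((x + 1) in cells)}

def rule90_by_stepping(t):
    """The irreducible route: trace every step, starting from a single on cell at 0."""
    cells = {0}
    for _ in range(t):
        cells = rule90_step(cells)
    return cells

def rule90_by_formula(t):
    """The shortcut: the cell at offset x is on after t steps iff C(t, (t+x)/2) is odd,
    which holds iff the binary digits of k = (t+x)/2 are a subset of those of t."""
    on = set()
    for x in range(-t, t + 1):
        if (t + x) % 2 == 0:
            k = (t + x) // 2
            if (k & (t - k)) == 0:
                on.add(x)
    return on

# Same answer, but the formula never traces the intermediate steps.
assert rule90_by_stepping(64) == rule90_by_formula(64)
```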

Accelerating the Rates of Computation

When shortcutting is impossible, observers may still gain leverage by running the computation faster than they could unaided—using tools to simulate otherwise inaccessible futures. This doesn’t reduce the necessary complexity but expands our ability to keep up or even to outrun any given progression of computations.

For if meaningful general predictions are to be possible, it must at some level be the case that the system making the predictions be able to outrun the system it is trying to predict. But for this to happen the system making the predictions must be able to perform more sophisticated computations than the system it is trying to predict. (NKS 741)

For consider trying to outrun the evolution of a universal system. Since such a system can emulate any system, it can in particular emulate any system that is trying to outrun it. And from this it follows that nothing can systematically outrun the universal system. For any system that could would in effect also have to be able to outrun itself. (NKS 742)

One slightly subtle issue in thinking about computational irreducibility is that given absolutely any system one can always at least nominally imagine speeding up its evolution by setting up a rule that for example just executes several steps of evolution at once. (NKS 743)

These three quotations illustrate three different ways NKS envisions speeding up the progress of computations to outrun nature. The first involves computations of a different kind than we find in the ruliad; they are ‘more sophisticated.’ Elsewhere Wolfram compares such computations to what lies beyond the event horizon of a black hole—we cannot see past the event horizon because the laws of physics there are scrambled beyond what we can experience. In other words, how would we gain access to these more sophisticated computations if not by using the kind of computations we already have at our disposal? Such computations would amount to a violation of the PCE.

Let’s jump to the third paragraph. Here ‘speeding up its evolution’ effectively means quantum computing—‘executing several steps at once’. That’s not how time works in rulial space. Time unfolds as a progression of steps that can’t be outrun.

This leads us to the second paragraph, where ‘outrun’ means doing all the steps faster than nature. Here is the crux of the matter: for Wolfram, ‘outrun’ in these instances effectively means outrunning all the computations from a given starting point. All of the branchial steps cannot be outrun, but that doesn’t mean the computationally bounded observer can’t envision purposeful paths and run faster computational systems that compute those paths. High-throughput simulation engines, AI models, and quantum computing all fall into this category. The more we learn (explain) about how these processes work, the more we learn how to control which branches occur and which don’t. This doesn’t require P=NP, but it does require learning which paths are desirable and which are not, so that we can align our purposes with our power to control the outcomes.

This sentence from Chapter 2 of NKS seems indirectly relevant here:

Undoubtedly, therefore, one of the main reasons that the discoveries I describe in this chapter were not made before the 1980s is just that computer technology did not yet exist powerful enough to do the kinds of exploratory experiments that were needed. (46).

Computer technology allows for computational progressions to be run faster than if we simply mapped them out on a whiteboard by hand. But those experiments themselves are computationally bounded because we (who are computationally bounded) set up the experiments.
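A minimal sketch of what that acceleration looks like in practice (my own example, not from NKS; the array size and step count are arbitrary): the vectorized version below performs exactly the same rule-30 steps that a by-hand simulation would. No step is skipped; the work is simply executed much faster by the tool.

```python
import numpy as np

def rule30_step_numpy(cells):
    """One rule-30 step on a NumPy array of 0/1 values (periodic boundary).
    Rule 30: new cell = left XOR (center OR right)."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    return left ^ (cells | right)

def evolve_fast(cells, steps):
    """Same number of steps as a hand simulation; each step just runs faster."""
    for _ in range(steps):
        cells = rule30_step_numpy(cells)
    return cells

cells = np.zeros(10_001, dtype=np.uint8)
cells[5_000] = 1
final = evolve_fast(cells, 5_000)
print(int(final.sum()), "cells on after 5000 steps")
```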

The question is: does something like an effect of CR (or P=NP) become possible in some situations because we can compute and verify possible solutions rapidly? Can we, in other words, get to verifiable solutions (NP problems) more quickly, even if we can’t run all the branches? Would this mean that there needs to be some automated ‘pruning’ of branches as the computer simulation is run? Is this what happens with a GPS?
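Route finding gives a concrete, if modest, answer (my own sketch, not Wolfram’s; the road network is invented): a GPS-style shortest-path search never enumerates all possible routes. In the Dijkstra sketch below, branches are pruned automatically: once an intersection has been reached by its cheapest route, every other route through it is abandoned without ever being expanded.

```python
import heapq

def shortest_path_cost(graph, start, goal):
    """Dijkstra's algorithm: `graph` maps a node to a list of (neighbor, cost) pairs.
    Pruning is implicit: once a node is settled at its cheapest cost, all other
    routes through it are dropped without further exploration."""
    frontier = [(0, start)]           # (cost so far, node)
    settled = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if node in settled:
            continue                  # a cheaper route already reached this node
        settled.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in settled:
                heapq.heappush(frontier, (cost + edge_cost, neighbor))
    return None                       # goal unreachable

# A toy "road network": intersections A..E with travel times.
roads = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [("E", 3)],
}
print(shortest_path_cost(roads, "A", "E"))  # 7, via A -> C -> B -> D -> E
```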

This leads us to the third possible way of shortcutting and/or outrunning a computational progression: guiding it through the space of possibilities by eliminating some branches and promoting others.

Trajectory Guidance through Modeling and Intervention

This third mode goes beyond prediction or speed. It involves guiding the trajectory of a system’s computation by modeling possible futures, shaping inputs, or designing interventions that steer the computational progression toward a desired outcome.

Wolfram does not use the language of “guidance” directly, but his treatment of computationally bounded observers (CBO) and their interaction with the ruliad suggests that the choices made by such observers determine what computations are experienced or realized:

The first crucial feature of us as observers is that we’re computationally bounded: the way we “parse” the universe involves doing an amount of computation that’s absolutely tiny compared to all the computation going on in the universe. We sample only a tiny part of what’s “really going on underneath”, and we aggregate many details to get the summary that represents our perception of the universe.  (‘The Concept of the Ruliad,’ 2021)

…as large, computationally bounded observers we can only sample aggregate features in which many details have been equivalenced, and in which space tends to seem continuous and describable in basically computationally reducible ways. (On the Nature of Time)

We should note several terms that Wolfram regularly uses to describe the experience of a CBO—we ‘parse’, ‘sample’, ‘aggregate’ and ‘equivalence’ (used as a verb). We cannot, in other words, experience all of the computations of the universe as they are happening, but only a ‘slice’ that we parse, sample, and aggregate into coherent experience.

Side Note: This could be very similar to how both Bergson and James described human experience. Perception arises from reducing the complexity of ‘pure experience’ into concepts and abstractions that effectively summarize the rawness of our perceptions into coherent experiences.

If we were a type of consciousness that experienced everything everywhere all at once, we would not experience anything because we would be everything.

From this, we extrapolate: CBOs not only observe but selectively direct regions of rulial space by setting conditions that favor certain computational branches. In biology, this might mean nudging stem cells into a stable differentiation path. In social systems, it might mean structuring incentives or constraints to contain complex behavior within a desired corridor—e.g., institutions such as prisons, schools, and workplaces that use laws, policies, rewards, and punishments to guide behavior toward desired outcomes.

This is not necessarily reducibility, but directionality: the capacity to prune a path through an otherwise branching, chaotic space—not by knowing the future, but by shaping the inputs to prefer certain outputs. Nor does this mean that quantum mechanics and branchial space are overcome. Rather, we are guiding the computations that we control down particular paths that are possible but not likely to be experienced unless guided by our computational interventions.
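A toy version of this kind of guidance (my own sketch; the rule, target, and parameters are arbitrary illustrations): we cannot write down in advance which initial condition of rule 30 yields a desired outcome, but we can repeatedly perturb the inputs, run the irreducible computation forward, and keep only the perturbations that move the outcome toward the target. The future is never predicted; the inputs are shaped.

```python
import random

def rule30_step(cells):
    """One rule-30 step on a list of 0/1 values with periodic boundary."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def outcome(cells, steps=50):
    """The 'future' we care about: how many cells are on after `steps` steps."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return sum(cells)

def steer(target, width=64, tries=500, seed=0):
    """Greedy guidance: flip one input cell at a time, rerun the computation,
    and keep the flip only if the outcome moves toward the target."""
    rng = random.Random(seed)
    cells = [rng.randint(0, 1) for _ in range(width)]
    best = abs(outcome(cells) - target)
    for _ in range(tries):
        i = rng.randrange(width)
        cells[i] ^= 1                      # perturb the input
        score = abs(outcome(cells) - target)
        if score <= best:
            best = score                   # keep the perturbation
        else:
            cells[i] ^= 1                  # undo it
    return cells, best

inputs, gap = steer(target=20)
print("distance from target after steering:", gap)
```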

Is this a shortcut? Only insofar as a road is a shortcut: it sets a direction that makes movement through space and time more efficient because it is repeatable. We don’t have to re-orient ourselves and invent a new route every time we want to go to the grocery store.

Together, these three modes—compression, acceleration, and guidance—form the pragmatic repertoire of Enlightenment agency in a computational universe. We do not control irreducibility. But within its horizon, we steer, simulate, and shape what can be known.

The Enlightenment as Rulial Steering

Before the Enlightenment, human life was largely experienced as subject to fate—the inscrutable unfolding of divine will, natural disorder, or cosmic cycles. What could not be predicted or understood was not a failure of method; it was the condition of existence. But with the rise of the scientific revolution and Enlightenment thought, the horizon of what could be known—and therefore acted upon—began to expand.

In Wolfram’s framework, this transformation can be reframed as the beginning of rulial steering: a shift in which humans, as computationally bounded observers, began to map and shape specific pathways through a vast and mostly irreducible computational universe.

Tractability: From Fate to Function

To make something tractable is to bring it within the domain of human reasoning and control—what Wolfram repeatedly calls ‘human purposes’. It is to turn what once belonged to fate into a problem. In this sense, tractability is the Enlightenment’s counterweight to fate: a claim that parts of the world can be predicted, modeled, improved—even when the whole cannot.

In Wolframian terms, tractability is the local taming of rulial complexity: the creation of slices of computational reducibility within a universe that otherwise defies compression. This is not the abolition of fate—it is its repartitioning. Fate recedes as new domains become accessible to human modeling and action.

“In a sense we can view our whole collective ‘flotilla’ of human purposes as being something localized that moves around in rulial space … one way we ‘expand our reach in rulial space’ is in effect conceptual: by expanding what we understand.” 

“We can think of our technological civilisation as something that is expanding outward in rulial space … as a way of colonising more and more of what is computationally possible.”

And in a sense we can view the whole core trajectory of human progress as being about the expansion of the region of rulial space—and the ruliad—that represents our purposes.

Quotations from “The Evolution of Purpose and the Colonization of Rulial Space …” in Alien Intelligence and the Concept of Technology (June 16, 2022).

Q: Can we say that there is a philosophy of the Enlightenment embedded in these statements, especially the last one? As soon as ‘improvement’ and ‘progress’ become conscious and intentional human endeavors—i.e., the Enlightenment—then the desire to colonize more and more of rulial space becomes the philosophy of history. Prior to the Enlightenment, any endeavor in this direction is incidental and not necessarily related to a philosophy of history that sees ‘human progress’ as the goal.

The Enlightenment, then, may be understood not as a sudden break, but as an intensification of humanity’s long-developing impulse to make more of the world tractable. Rather than replacing one cosmology with another overnight, it introduced new methods—and accelerated old ones—for steering the course of events through reason, observation, and mathematical formalism. In this light, we might say humanity began to more systematically colonize regions of rulial space where natural processes could be modeled and intervened upon: medicine extending beyond prayer, navigation becoming less dependent on star charts and more on highly accurate clocks, political economy challenging divine right not through revolt alone but through fiscal and statistical rationality. Tractability became the new horizon—not as destiny’s undoing, but as its redirection through computation.

Rulial Steering: Bounded Agency in an Irreducible Cosmos

This is not omniscience. Even in Wolfram’s system, observers are bounded. They cannot access the full ruliad—the totality of all possible computational histories—but only those slices consistent with their rule frame and processing capacity. What we call “reality” is not the universe’s full computation, but our computational projection through bounded interaction.

Steering, then, is not about predicting the future in full. It is about locating and inhabiting pathways through rule-space that can be shaped by modeling, measurement, and feedback.

In this way, steering becomes a practice of modern fate: not a denial of the unknown, but an art of orientation within it. The Enlightenment is not a replacement of fate with certainty, but a restructuring of what fate means—from cosmic given to computational frontier.

Uncolonized Rulial Space as Residual Fate

From this perspective, the vast uncolonized reaches of rulial space remain the modern equivalent of ancient fate. These are the domains we have not yet rendered tractable: climate tipping points, cancer differentiation, social contagion, consciousness. Not because they are mystical, but because they remain computationally irreducible, beyond current slices of reducibility.

What once felt like divine chaos now feels like modeling failure—but the affective weight remains: anxiety, helplessness, awe. As Wolfram notes, even simple rules often yield unpredictability. And in those cases, fate returns—not as myth, but as the felt edge of our bounded reach.

Summary

The Enlightenment redefined fate as something negotiable. Wolfram’s framework lets us see this as a structural change in how humans interact with computation: from passive subjects of irreducible processes to active agents navigating selectable pathways. To make a domain tractable is to carve a corridor through rulial possibility. To encounter fate, still, is to glimpse how much remains uncolonized.

P vs NP as a Constraint on Enlightenment Steering

The heart of the P vs NP question is: Can every solution that is easy to verify also be found efficiently? This is not merely a formal question of algorithmic classes—it is a constraint on human steering capacity.

P = Problems for which the trajectory to the answer can be found quickly (in polynomial time).

NP = Problems for which a proposed answer can be verified quickly (in polynomial time), but finding that answer may require exponential brute-force search (unless P = NP).

CI implies that in most natural processes, the trajectory cannot be shortcut. Therefore, P ≠ NP aligns with the structure of reality as Wolfram sees it: most problems are hard to solve not because we’re dumb, but because the universe resists compression.
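The asymmetry is easy to make concrete with the standard subset-sum example (a textbook illustration of my own, not specific to Wolfram): verifying a proposed certificate is quick, while the only known general way to find one is a search whose size doubles with each additional item.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """NP-style verification: checking a proposed subset is fast (polynomial)."""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def find_by_search(numbers, target):
    """Finding a certificate: brute force over all 2^n subsets (exponential)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = find_by_search(nums, 9)        # the slow part: exponential search
print(cert, verify(nums, 9, cert))    # the fast part: quick check -> [4, 5] True
```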

However, humanity’s progress under this constraint has been ingenious. In effect, we’ve learned to:

Simulate NP processes faster (mode 2).

Structure environments to avoid worst-case NP hardness (e.g., by designing decision trees, modular systems, and fault tolerance).

Guide systems into tractable zones (mode 3)—even if the full solution space remains exponentially large.

In cancer, logistics, or public policy, we rarely solve NP problems fully—we channel them. The Enlightenment thus marks a turning point: from being passengers on nature’s irreducible ride to drivers shaping short deterministic paths through otherwise intractable multitudes.

Conclusion: Colonizing the Ruliad

Wolfram’s cosmology offers a vision in which humanity, bounded but sophisticated, acts not as omniscient predictors but as cartographers and pilots of computation. The Enlightenment’s legacy—its statistical reason, experimental rigor, and technological ambition—can be reframed as the first coordinated campaign to inhabit rulial space.

We don’t break irreducibility. But we map its contours, we find oases of predictability, and we train machines to accelerate and refine our guidance.

Just as the early moderns built roads and clocks to synchronize cities and states, we build simulations, algorithms, and control systems to synchronize the vast, branching computations of nature to our bounded capacities.

In this framing, P ≠ NP is not a wall—it is the terrain.

The Enlightenment is how we learned to walk it with purpose.
