(2025-03-20) Hoel: Now Published: My New Theory Of Emergence
Erik Hoel: Now published: my new theory of emergence! I've just published a paper sharing it (available on arXiv as a pre-print). It outlines a new theory of emergence, one that allows scientists to unfold the causation of complex systems across their scales.
Why does science need a theory of emergence?
Because almost every causal explanation you've ever given about the world ("What caused what?") has been given in terms of macroscales. Sometimes called "dimension reductions," macroscales are just higher-level descriptions of events, objects, or occurrences.
In fact, most of the elements and units of science are macroscales. Science forms this huge spatial and temporal ladder, one with its feet planted firmly in microphysics, where each rung represents a discipline climbing upward.
This entails a tension at the heart of science. Scientists, in practice, are emergentists who operate as if the things they study matter causally. But scientists, in principle, are reductionists.
So how can a macroscale description matter? Why doesn't causation just "drain away" to the bottom, leaving no real way for anything but microphysics to matter?
This problem keeps me up at night. Literally, this is what I lie awake thinking about. Years ago, in my paper "When the map is better than the territory," I sketched an answer I found promising and elegant: error correction. This is a term from information theory: by encoding the signals sent along a noisy channel, you can reduce the effect of that noise. Your phone works because of error correction.
Well, I think macroscales of systems are basically encodings that add error correction to the causal relationships of a system. And this added error correction just is what emergence is.
If you want a popular article explaining this idea, you can read this old one in Quanta, featuring my earlier work.
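To make the error-correction analogy concrete, here is a small sketch (my own illustration, not the paper's formalism) using a simple repetition code: several noisy micro-copies of a bit are sent, and the majority vote over those copies, a kind of coarse-grained macrostate, is far more reliable than any single copy.

```python
import random

def encode(bit, n=5):
    """Repetition code: transmit n copies of each bit."""
    return [bit] * n

def channel(bits, p_flip=0.2):
    """Each transmitted copy independently flips with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    """Majority vote: the coarse-grained 'macrostate' of the received block."""
    return int(sum(bits) > len(bits) / 2)

random.seed(0)
trials = 10_000
raw_errors = sum(channel([0])[0] != 0 for _ in range(trials))
coded_errors = sum(decode(channel(encode(0))) != 0 for _ in range(trials))
print(raw_errors / trials)    # ~0.20: a single noisy micro copy
print(coded_errors / trials)  # ~0.06: the majority-vote macrostate is far more reliable
```

The macrostate (the vote) throws away microscale detail, yet it is the more dependable description, which is the intuition behind treating macroscales as error-correcting codes.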
But, while the conceptual understanding was there, I always felt there was more to do regarding the math of the original theory. Holes and flaws existed: some only I could see, but others noticed them as well.
Not everyone was convinced by the original theory, due to how the initial math worked; in particular, the measure of causation we initially used, called effective information, which this new version of the theory moves beyond.
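For context, here is a minimal sketch of that earlier measure, as defined in Hoel's prior work: effective information (EI) is the mutual information between a maximum-entropy (uniform) intervention over a system's states and the resulting distribution of effects. The 4-state toy transition matrix below is my own illustration, not from the paper; coarse-graining its three noisy states into one macro state raises EI, which was the original signature of causal emergence.

```python
import numpy as np

def effective_information(tpm):
    """EI of a transition probability matrix: mutual information between
    a uniform (max-entropy) intervention on current states and the
    resulting distribution over next states, in bits."""
    n = tpm.shape[0]
    p_effect = tpm.mean(axis=0)  # effect distribution under do(X) ~ uniform
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = tpm * np.log2(tpm / p_effect)
    return np.nansum(terms) / n  # treat 0 * log 0 as 0

def coarse_grain(tpm, groups):
    """Build a macro TPM: average rows and sum columns within each group."""
    k = len(groups)
    macro = np.zeros((k, k))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            macro[i, j] = tpm[np.ix_(gi, gj)].sum(axis=1).mean()
    return macro

# Micro: states 0-2 hop uniformly among themselves; state 3 is absorbing.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
macro = coarse_grain(micro, [[0, 1, 2], [3]])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the macroscale carries more causal information
```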
So this new version has been a decade in the making. It radically improves the underlying math of causal emergence by grounding it axiomatically in causation, making it extremely robust, and it generalizes the theory to examine multiscale structure.
I’ll merely point out one interesting thing that I purposefully don’t touch on in the paper, which is that…
Causal emergence is necessary for a definition of free will.
A theory of emergence has practical scientific value, and this is what the research path should focus on: making causal emergence common parlance among scientists by providing a useful mathematical toolkit they can apply to extract relevant information (like which scales are actually causally relevant in the systems they study).
But it's also obvious that, if you simply turn the theory around and think of yourself as a system, the theory has much to say about free will. Its many implications are left as an exercise for the keen-eyed reader, but here's an early hint:
This updated version of causal emergence indicates that you (yes, you) are a system that spans scales, from the microphysical up through your cells to your psychological states. Importantly, different scales contribute to your causal workings in an irreducible way. A viable scientific definition of free will would then have a necessary condition: that you have a relatively "top-heavy" distribution of causal contributions, in which your psychological macrostates dominate the spatiotemporal hierarchy formed by your body and brain. In that case, you would be primarily "driven," in causal terms, by those higher-level macroscales.
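As a loose illustration of what such a "top-heavy" condition might look like (the numbers and the criterion below are entirely hypothetical, invented for this sketch; the paper defines how causal contributions are actually measured):

```python
# Hypothetical per-scale causal contributions for one person-as-system.
contributions = {"microphysics": 0.05, "cellular": 0.15, "psychological": 0.80}

def is_top_heavy(contrib, scales_low_to_high, threshold=0.5):
    """Illustrative criterion only: 'top-heavy' here means the highest
    scale holds a majority of the total causal contribution."""
    total = sum(contrib.values())
    return contrib[scales_low_to_high[-1]] / total > threshold

print(is_top_heavy(contributions, ["microphysics", "cellular", "psychological"]))  # True
```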
What’s next?
I think this research even has important implications for AI safety, as understanding "what does what?" in dimension-reduced ways is going to be important for unpacking the black boxes that artificial neural networks represent.