8 Answers from Erik Hoel, PhD

How do you move past reductionism?

Bion Howard
bitpharma.com blog
5 min read · Jan 4, 2018


Transitioning Genomic Logic into bitpharma.com:

Dr. Hoel is a postdoc in the NeuroTechnology Lab at Columbia University. His recent Entropy paper, titled “When the map is better than the territory,” rejects reductionism. These ideas, and the principle of multiscale thought, are relevant to the work of all scientists and engineers.

Here are 8 questions and answers regarding his work:

1. Why did you get into research?

I grew up in my family’s independent bookstore, and the books I read gave me the lay of the land when it came to 21st-century science. At some point I began thinking about becoming a scientist, and looking around for what a good field would be. Thomas Kuhn proposed this distinction between “normal science” and moments or fields in which there is a paradigm shift. While there’s nothing wrong with normal science (in fact, it’s the basis for the entire edifice) I was always more attracted to those hybrid zones where the paradigm hadn’t been or was just beginning to be established. As far as 21st-century science goes, there aren’t too many such zones left, but certainly one of them is scientific research into consciousness. So I started studying neuroscience in college with that in mind. Eventually I got my PhD in neuroscience working with Giulio Tononi, who proposed what I consider to be the first scientifically viable (or at minimum, quantitatively developed) theory of consciousness.

2. Who are some of your biggest influences?

I’m influenced by many contemporaries in the field, including my collaborators and colleagues and mentors. Speaking more generally, my influences come from the matrix of ideas specified at the poles by thinkers like William James, Francis Crick, and David Chalmers. On the literary side, my personal anxieties of influence come from writers like David Foster Wallace, Rebecca Goldstein, Italo Calvino, and Jorge Luis Borges.

3. What is causal emergence?

It’s when the causal structure of a system contains more information at some higher scale than at the underlying lower scale. Here causal structure just means the set of causal relationships between variables, which might be mechanisms or states or elements. One way to conceptualize the theory is that you can look at the causal structure of a system at different scales, and the scale that causally emerges is the scale at which the causal structure is the most “in focus.” Since macro variables (like a macro state) can be causally coupled to a greater degree than their underlying micro variables, the amount of influence and information in their relationships can also be correspondingly greater.
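A toy calculation can make this concrete. The sketch below (an illustration, not from the interview) uses effective information, the measure developed in Hoel’s papers: intervene on the system’s states with a uniform distribution and ask how much information those interventions carry about the next state. The particular transition matrix and the grouping into macro states are illustrative assumptions.

```python
import numpy as np

def effective_information(tpm):
    """EI = mutual information between a maximum-entropy (uniform)
    intervention on the current state and the resulting next state."""
    n = tpm.shape[0]
    p_next = tpm.mean(axis=0)  # effect distribution under uniform interventions
    ei = 0.0
    for x in range(n):
        for y in range(n):
            if tpm[x, y] > 0:
                ei += (1.0 / n) * tpm[x, y] * np.log2(tpm[x, y] / p_next[y])
    return ei

# Micro scale: states 0-2 hop among themselves at random; state 3 is fixed.
micro = np.array([
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
])

# Macro scale: group {0, 1, 2} -> A and {3} -> B.
# At this coarser scale the dynamics are deterministic.
macro = np.array([
    [1, 0],
    [0, 1],
])

print(round(effective_information(micro), 3))  # ≈ 0.811 bits
print(round(effective_information(macro), 3))  # 1.0 bit
```

The macro description carries more causal information than the micro one, which is exactly the “more in focus at a higher scale” situation described above.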

4. How is causal emergence related to compression or compressibility?

Normally people think of higher scales as compressions. But if you think this way, it’s immediately obvious that information must be lost, and that at best a macro scale can be the compressed equivalent of a micro scale. Causal emergence instead uses a different analogy: higher scales are more like codes. The notion is that the causal structure of a system is like an information channel, and that intervening on it at different scales is like using different codes. More generally, it’s all about noise reduction. Down at the micro scale the causal relationships are just too noisy to generate much information or influence, whereas at a higher scale this noise is reduced to a degree that more than compensates for the loss of information due to compression.
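The channel analogy can be made concrete with a classic trick from coding theory (a sketch I’ve added for illustration, not an example from the interview): send each bit three times through a noisy binary channel and take a majority vote at the other end. The decoded “macro” bit is wrong less often than any single “micro” transmission, so it carries more mutual information.

```python
from math import log2

def binary_entropy(p):
    """Entropy (bits) of a coin with heads-probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def channel_information(error_rate):
    """Mutual information (bits) across a binary symmetric channel
    with uniform inputs: 1 - H(error_rate)."""
    return 1.0 - binary_entropy(error_rate)

p = 0.3  # probability any single transmitted bit gets flipped

# Macro "code": send each bit 3 times, decode by majority vote.
# The decoded bit is wrong only if 2 or 3 of the copies flip.
p_macro = p**3 + 3 * p**2 * (1 - p)

print(round(channel_information(p), 3))        # micro: ≈ 0.119 bits per raw bit
print(round(channel_information(p_macro), 3))  # macro: ≈ 0.247 bits per decoded bit
```

The gain isn’t free (each decoded bit costs three channel uses); the point is that the coarser, coded description is more reliable per symbol, which mirrors how a macro scale can beat its noisy micro scale.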

5. How can causal emergence aid neuroscience?

As the father of neuroscience, Cajal, pointed out, the cortex is like a jungle. Thinking about the brain in terms of its functional components has turned out to be extremely difficult. In such a jungle it’s not obvious what counts as a state, or what counts as information processing, or what counts as function. It may be, however, that conceptualizing brain activity as the result of individual neurons means there is too much noise to get any sort of causal grasp on what does what. One analogy is to think of a macro state like temperature. The temperature of a gas will remain constant even if you switch the kinetic energy of any two particles. Basically, the individual states of the atoms don’t matter a whit to the overall temperature, and everything is interchangeable. Similarly, it may be that the brain functionally works at the level of neural macro states, where the individual states of neurons are totally interchangeable, and therefore as a scale neurons turn out to be pretty irrelevant for understanding how the brain generates perception and behavior.
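The temperature analogy is easy to check in a few lines (a toy sketch: temperature here is just proportional to mean kinetic energy, with units and constants omitted):

```python
def temperature(energies):
    # Toy ideal gas: temperature is proportional to the mean kinetic
    # energy; the constant of proportionality is omitted.
    return sum(energies) / len(energies)

kinetic_energies = [12, 34, 7, 29, 51]  # arbitrary units

before = temperature(kinetic_energies)
# Swap the kinetic energies of two "particles".
kinetic_energies[0], kinetic_energies[3] = kinetic_energies[3], kinetic_energies[0]
after = temperature(kinetic_energies)

print(before == after)  # True: the macro state ignores which particle has which energy
```

Which particle carries which energy is invisible at the macro scale, just as the individual states of neurons may be interchangeable at the scale where the brain functionally operates.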

6. How can causal emergence aid AI research?

There was a recent discovery that you can fool a neural network (trained as a classifier) into classifying an image incorrectly, merely by minutely adjusting all the pixels of that image. So by doing some small tweak of a photograph of a cat, a tweak that a human might not even notice, suddenly the neural network classifies the photograph as being of a dog. When it comes to human perception we don’t see the pixels, as it were. Such cases don’t fool us because the scale of the world we are interested in is the higher scale, that is, multiply-realizable patterns. We don’t really see the trees, just forests. Research into AI needs to understand that perception and classification often implicitly specify a certain scale, and while this specified scale has to do with the embodied and enacted nature of our perception, it is also tied to how higher scales have a reliable causal structure in comparison to lower scales.
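A stripped-down version of the effect (an illustrative sketch using a fixed linear classifier, not the deep networks from the original studies): nudge every “pixel” by a tiny amount in the direction that most lowers the classifier’s score, and the label flips even though no single pixel moved by more than 0.05.

```python
import numpy as np

# A fixed toy linear "classifier" over a 4-pixel image:
# positive score -> "cat", negative -> "dog". Weights are made up.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.1

def classify(x):
    return "cat" if w @ x + b > 0 else "dog"

x = np.array([0.1, 0.0, 0.0, 0.0])
print(classify(x))  # cat (score = 0.2)

# Gradient-sign tweak: move each pixel a tiny step against the score.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print(classify(x_adv))             # dog: the label flips
print(np.max(np.abs(x_adv - x)))   # 0.05: no pixel changed by more than epsilon
```

The classifier is reading the micro scale (individual pixel values), so a micro-scale perturbation that is invisible at the pattern scale is enough to flip its answer.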

7. What are the pros and cons of macro vs micro models?

Think of it this way: let’s say you want to set up a reliable causal relationship. And you are only given some noisy atomic components. Well, you can organize those such that, while any micro scale causal relationship is still noisy, there are deterministic, and therefore reliable, causal relationships at the macro scale. For such a system it would be a mistake to go in and try to model it at the ultimate micro scale, because you won’t actually be capturing the real causal relationships between the states, elements, or mechanisms of the system.

8. If robots claim to have qualia, will you agree? Why?

On that alone, certainly not. Why?

question = "Do you have qualia?"

if question == "Do you have qualia?":
    print("I have qualia.")
