Shern Ren Tee
Jul 28, 2017 · 2 min read

Oh no no no, you need to abstract things out one way or another! Otherwise neuroscience will run into a problem of infinite regress: suppose you had a computer that could process neural traces as a raw billion-dimensional data structure in anything remotely resembling a human lifetime. Wouldn’t that computer itself really be a brain, or something resembling a brain? Would we be able to understand that computer?

The way statistical physics handles these systems is by dividing them into sub-systems. In particular, we tend to move very quickly from considering the entire system (in what we call a “microcanonical ensemble”) to a small sub-system of interest (in a “canonical ensemble”) plus another, much bigger, sub-system, which acts as a “reservoir” of some quantity of interest. A basic example is a heat source at constant temperature: if your sub-system of interest is being kept at 42 degrees Celsius, it doesn’t matter whether that heat is provided by burning fuel, hot water, or sunlight on a hot day, and you don’t need to know anything about the microphysics of that heat source.
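To make that concrete with a toy sketch (my own illustration, not anything from the original discussion): here a two-level sub-system exchanges energy with a reservoir at temperature T via Metropolis moves. The Metropolis rule stands in for *any* reservoir at that temperature, and the sub-system's statistics come out canonical (Boltzmann), depending only on T and not on the reservoir's microphysics. The function name and the two-level setup are just assumptions for the sake of the example.

```python
import math
import random

def metropolis_two_level(T, steps=200_000, gap=1.0, seed=0):
    """Sample a two-level sub-system (energies 0 and `gap`) in contact
    with a heat reservoir at temperature T (units where k_B = 1),
    using Metropolis moves. Returns the fraction of time spent
    in the excited state."""
    rng = random.Random(seed)
    state = 0          # 0 = ground state, 1 = excited state
    excited_count = 0
    for _ in range(steps):
        proposed = 1 - state
        dE = gap * (proposed - state)   # energy change of the flip
        # Metropolis acceptance: always accept downhill moves,
        # accept uphill moves with probability exp(-dE / T)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            state = proposed
        excited_count += state
    return excited_count / steps

T = 1.0
p_sim = metropolis_two_level(T)
# Canonical-ensemble prediction: depends only on T and the gap
p_exact = math.exp(-1.0 / T) / (1.0 + math.exp(-1.0 / T))
print(p_sim, p_exact)
```

The simulated occupation matches the Boltzmann prediction, and it would do so no matter how the reservoir were implemented microscopically, which is the whole point of passing to the canonical ensemble.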

From a methodological standpoint, that amounts to separating manipulation and observation: you observe the sub-system, and manipulate the reservoir. Then you argue (or hope) that it doesn’t matter whether the reservoir is being naturally or supernaturally manipulated, and since the sub-system is only ever being manipulated “naturally” by the reservoir, you can argue that you’ve observed the natural behavior of the sub-system, not just its physically-allowed behavior.

So, for example, if you could manipulate the “memory center” to replicate the results of classical conditioning, but then observe the “motor center”, you could begin to argue that what you’ve observed in the motor center is not just a behavioral pattern but a natural behavioral pattern.

(Apologies if what I’m telling you is obvious!)
