How is the brain like a PC? Neither one is the computer you think it is

George McKee
5 min read · Feb 12, 2019


Part 5 of Is “Is the brain a computer?” even a good question?

It’s too late to change the answer, and it distracts from the really useful questions about the relations between computers and brains. Nevertheless, a deeper look finds that brains stretch the definition of computing, perhaps beyond the breaking point.

This is part 5 of a series of brief essays (sometimes very brief) on aspects of this question. Part 1 contains the introduction and an index to the whole series.


Often when people write about the similarities and differences between computers and brains, they refer to von Neumann computers as if they’re something familiar to anyone who’s used a personal computer. In reality, there’s a lot more going on behind the scenes.

What we normally think of as computers haven’t looked like the classic von Neumann architecture for more than 30 years, although the embedded processors of the simplest microcontrollers still do. General-purpose microcontrollers have become so cheap that it’s more expedient for an engineer to just pop one into a design wherever it’s handy than to go to the trouble of implementing the same function in discrete digital or analog circuit components and fabricating that circuit into an integrated circuit.

The standard description of a von Neumann computer gives it three parts: a unified memory of addressable cells that can store both instructions and data; a control unit whose “current instruction” program counter selects the memory cells to be processed and determines the next instruction to be executed; and a CPU containing logic to perform arithmetic and logic operations, along with one or more fast registers to hold the arguments and results of those operations.
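The three parts above can be sketched in a few lines of code. This is a minimal illustrative model, not any real machine: the instruction set (LOAD/ADD/STORE/HALT) and the single accumulator register are invented for the example. The point is the shape of the loop: one unified memory, a program counter, and a fetch-execute cycle.

```python
# A toy von Neumann machine: one unified memory holds both
# instructions and data, a program counter selects the current
# instruction, and a single accumulator register holds operands.
# The instruction set here is invented purely for illustration.

def run(memory):
    pc, acc = 0, 0                  # program counter, accumulator
    while True:
        op, addr = memory[pc]       # fetch the current instruction
        pc += 1                     # control unit advances by default
        if op == "LOAD":
            acc = memory[addr]      # data and code share one memory
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6 of the same memory.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
result = run(mem)
print(result[6])  # 2 + 3 = 5
```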

[image from https://www.mpoweruk.com/computer_architecture.htm ]

The way that the control unit and CPU share access to unified memory provides for the possibility of self-modifying programs, which is the essence of the universality of the von Neumann architecture. Other early architectures had separate memory stores for instructions and data and were initially thought to be less powerful than the von Neumann architecture, but they turned out to be universal too, since they could be provided with programs that created virtual von Neumann computers in software, fetching both instructions and data from a virtual unified memory hosted in the machine’s physical data memory. Decades later, memory architectures that separated read-only instruction memory from read-write data memory turned out to be important for signal processing, where they provided performance advantages, and for security purposes, where read-only instructions prevent malicious programs from damaging the entire system.

A modern desktop PC with an external graphics card (GPU) has become so complex that block diagrams typically change scales without notice, because showing everything at the same scale simply makes the diagram unreadably big. Here are diagrams of a GPU and CPU at approximately similar levels of detail.

[Bottom image from http://slideplayer.com/slide/3978939/ , Top image from https://devblogs.nvidia.com/jetson-tx2-delivers-twice-intelligence-edge/ ]

A block diagram of part of the human brain at a similar level of detail looks like this:

[image from http://neuronresearch.net/vision/files/cortexblock.htm ]

Arguments that try to distinguish the brain from the PC by saying that the brain is a distributed system that does things in parallel, unlike a computer that is centralized and does things one at a time in serial order, are simply uninformed. The notebook PC that I’m writing this on is currently running 149 distinct processes in parallel, not counting the 168 threads of image-processing in its integrated GPU, and not counting the dozens of I/O microcontrollers and microprogrammable functional units that are visible only to the engineers who designed those subsystems. A modern x86 CPU can perform a large fraction of its work in parallel using features such as multiple cores, multithreading, instruction and data prefetching, branch prediction, pipelining, and vector instructions.
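That parallelism is available to any ordinary program, not just to the hardware designers. A minimal sketch in Python: the OS reports how many logical cores this process can see, and a thread pool spreads independent units of work across them. (The specific process and thread counts quoted above are from the author’s own notebook; yours will differ.)

```python
# A sketch of everyday parallelism on a modern PC: query the
# logical core count and fan independent work units out across
# a thread pool. The work function is a trivial stand-in.

import os
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n          # stand-in for an independent unit of work

with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = list(pool.map(work, range(8)))

print(os.cpu_count())     # logical cores visible to this process
print(results)            # [0, 1, 4, 9, 16, 25, 36, 49]
```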

The tensor processing unit (TPU) board that Google provides in its cloud for “artificial intelligence” applications contains 65,536 arithmetic units, and Google datacenters contain racks housing hundreds of TPUs each. A high-end supercomputer may contain hundreds of thousands of processor elements and more hundreds of thousands of microcontrollers in its storage and network subsystems.

Hidden in the fine print of the report on the AlphaZero game-playing system is the fact that in its learning phase, the system ran on 5,000 TPUs. If you count each arithmetic unit of a TPU as a neuron, that gives AlphaZero’s “brain” over 325 million units, larger than the number of neurons in the brain of a pigeon. On the other hand, if you follow the cable theory of neural processing, you can count not only synapses but also the dendritic and axonal arborizations as processing elements, and match the complexity of a TPU to a handful of neurons, putting AlphaZero merely at the scale of a lobster brain rather than a pigeon brain. In either case it runs a thousand times faster, which enabled it, in just a few days, to play more games of chess, go, and shogi against itself than have previously been played in all of history by all the players who ever learned those games.
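The back-of-envelope count above is easy to verify, taking the figures already given in the text (65,536 arithmetic units per TPU, 5,000 TPUs in the learning phase):

```python
# Checking the neuron-counting arithmetic from the text.
alus_per_tpu = 65_536     # arithmetic units per TPU (from the text)
tpus = 5_000              # TPUs used in AlphaZero's learning phase

units = alus_per_tpu * tpus
print(f"{units:,}")       # 327,680,000 -- "over 325 million units"
```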

AlphaZero and other artificial intelligence systems shouldn’t really be compared to whole brains, though, since they don’t devote any resources to managing locomotion, predator avoidance, food acquisition and feeding, reproduction, or any other of the functions needed for an autonomous organism to survive in an uncooperative world. It might be better to consider them as analogous to a small piece of parietal cortex, or even to artificial brain organoids.

Go on to Part 6

Go back to the Index


George McKee

Working on projects in cyber security strategy and computational neurophilosophy. Formerly worked at HP Inc. Twitter: @GMcKCypress