Neuroscience Primer

Thinking about the brain as an emergent computational engine.

Zach Wolpe
The Startup
12 min read · Sep 18, 2020


This article, heavily inspired by Robert Sapolsky, details one way of thinking about the brain as a dynamic, computational engine — from which arbitrary complexity can arise. Thinking about cognition as a Complex Adaptive System (CAS), or a decentralized network, can yield insight into some of what makes us human.

Here’s a basic neuroscience/neuroanatomy primer to set the framework for this type of thinking.

The goal of this article is to think about the brain as a computational system, from the perspective of science & engineering.

1. Neuroanatomy

Here we outline some core ideas in the neurological literature that will provide a necessary theoretical baseline to design our computational framework.

1.1 Neurological Localization

We begin with neuroanatomy — concerned with the structure of the nervous system — & thereafter work towards a more general interpretation of the nervous system.

Leading neurological and psychological studies make almost irrefutable arguments that the brain's physical anatomy is deeply intertwined with the mind:

the brain describing the physical configuration of the body's neuroanatomy, whilst the mind is concerned with the emergent properties of the self.

Despite the brain often being described as a general-purpose information processing machine, it is composed of areas of advanced specialization and localization. The last century has seen neuroscience explore the implications of missing components of the brain — mostly, and horrifically, by studying the victims of wartime injuries, or through a number of famous case studies like that of Phineas Gage — all of which emphasise evidence of neurological specialization.

Intuitively, given the nature of evolutionary systems, the brains of more sophisticated animals are not entirely different from those of less intelligent animals. Instead, it is as if more archaic systems are nested within more specialized, elaborate, complex neurological systems. The most ancient, general, and universal regions of the brain are responsible for primal functions like olfaction, whilst more specialized, newer regions allow for higher-order cognition like thought, feelings, reasoning & prediction. Modern neuroscience describes a basic model of the neuroanatomy of the human brain as comprising 3 nested regions:

  1. The Reptilian/old brain
  2. The Limbic system
  3. The Cerebral cortex

1.2 The Reptilian Brain

The oldest, simplest & most primitive part of the brain is the Old/Reptilian brain. It’s primarily concerned with keeping the body operating & is generally detached from higher-level cognition.

Most of what feels automatic in our nature is a product of the old brain, detailed in figure 1 below: automated controls, the continued functioning of the heart and lungs & basic sensory functions. The Cerebellum (Latin for "little brain") is also a part of the reptilian brain.

The Cerebellum is responsible for movement fluidity, balance, and motor learning (such that errors in movement will not be repeated in the future). It’s also associated with modulating emotions and the perception of time. Interestingly enough the Cerebellum is readily impaired under the influence of alcohol - which is no surprise if one considers the behaviour of those who are intoxicated & contrasts this with the functions of the Cerebellum.

Essentially the old brain dictates the primal functions that any animal might need to survive.

Figure 1: The functionality of the reptilian brain.

1.3 The Limbic System

Layered over the reptilian brain lies the Limbic system, which is responsible for a variety of more complex functionality: such as the Amygdala, which is responsible for memory consolidation and emotion; and the Hippocampus, which is central to learning & memory.

The Hypothalamus, which is responsible not only for governing hunger & the endocrine system but also for the ability to feel pleasure & reward, is also a constituent of the Limbic system. If the Hippocampus is damaged, an individual can lose the ability to retain new facts & form new memories.

An interesting fact regarding the nature of the Hypothalamus was uncovered in experiments on rats: when given a way to stimulate the Hypothalamus - & thus deliver a reward signal - rats opt to stimulate themselves until they collapse or die, indicative of the nature & influence of these systems.

The workings of the Limbic system are described in Figure 2.

Figure 2: Constituents of the Limbic System

1.4 Cerebral Cortex

The Cerebrum — the left & right hemispheres of the brain — makes up around 85% of the brain’s weight & oversees one’s ability to think, speak & perceive.

The Cerebral Cortex, which covers the cerebrum, comprises a thin layer of over 20 billion interconnected neurons. This is the general-purpose matter that we concern ourselves with when attempting to model a general-purpose modular computational system. Glial cells provide a web of support that surrounds, insulates & nourishes these cerebral neurons.

The Cerebral Cortex is subdivided into 4 lobes: the Frontal lobe is involved in speaking, planning, judging, abstract thinking & personality aspects. The Parietal lobe is responsible for one’s sense of touch & spatial location/body position. The Occipital lobe processes information pertaining to sight. Finally, the Temporal lobe is concerned with processing sound & thus speech comprehension. These segments are depicted in Figure 3.

Whilst somewhat superfluous to our needs, describing the structural form of one’s neuroanatomy provides a framework for thinking about the modular, hierarchical nature of natural cognition. Now that we’ve detailed this framework, we can zoom in to the mechanisms of the nervous system.

Figure 3: Cerebral Cortex.

2. Neuroscience Fundamentals

A basic cell in the nervous system, colloquially referred to as a brain cell, is the neuron. First, we consider the behaviour of a single neuron in isolation, & thereafter turn our attention to the more interesting interactions between neurons.

2.1 Single Neuron Diagnosis

For brevity, we neglect the workings of the glial cells — which play an important role in dictating the structure of communication channels between neurons — & instead focus entirely on the neuron. As an aside, the glial cells can be thought of as defining the hypergraph nature of our computational engine.

Neurons are fundamentally unique when contrasted with other cell types, beginning with their asymmetrical, peculiar form.

Most of what neurons do is talk to one another. Dendrites act as a neuron’s ears, axon terminals as its mouth (see Figure 4). Neurons simply receive signals &, when sufficiently excited, pass them on to one another.

2.2 Neurological Charge

Neurons don’t lie in a neutral state: they are either positively charged — if emitting a message — or negatively charged if silent. Importantly, if a message/signal is passed from one neuron to the next sequentially, it dissipates over time — delivering a smaller voltage of excitement further away from the source of the excitement.

Neurons default to a state of negative charge — referred to as the resting potential. Each message stimulates the node, raising its voltage to some degree. If a particular threshold is met, the neuron fires, turning from a negative to a positive charge: changing from a resting potential to what is known as an action potential — passing the message on to its axon terminals before returning to its negative resting charge.

These strong contrasts are of the utmost importance when thinking of neurological functioning. This binary non-neutrality allows for distinct pattern formation.
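
To make this concrete, here is a minimal sketch of a threshold-based ("integrate & fire" style) neuron in Python. The resting potential, threshold & leak values are illustrative placeholders, not physiological constants:

```python
# A minimal integrate-and-fire style neuron: it rests at a negative potential,
# accumulates incoming charge, leaks back towards rest (dissipation), and fires
# an action potential only once a threshold is crossed. Values are illustrative.

RESTING = -70.0    # resting potential (mV) -- assumed placeholder value
THRESHOLD = -55.0  # firing threshold (mV) -- assumed placeholder value
LEAK = 0.9         # fraction of excess charge retained each step

def simulate(inputs, resting=RESTING, threshold=THRESHOLD, leak=LEAK):
    """Return a list of 0/1 spikes, one per time step of incoming stimulation."""
    v = resting
    spikes = []
    for stimulus in inputs:
        v = resting + leak * (v - resting) + stimulus  # integrate & leak
        if v >= threshold:          # sufficiently excited: action potential
            spikes.append(1)
            v = resting             # reset back to the resting potential
        else:
            spikes.append(0)
    return spikes

# Weak stimulation never crosses the threshold; a burst of stronger input does.
print(simulate([2, 2, 2, 2, 2]))   # -> [0, 0, 0, 0, 0]
print(simulate([6, 6, 6, 2, 2]))   # -> [0, 0, 1, 0, 0]
```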

2.3 Network Topology

Any given neuron has its axon terminals leading into any number of other neurons. Similarly, its dendrites ‘listen’ to any number of neurons. As such, for a neuron to switch from a resting potential to an action potential it needs to be sufficiently stimulated — by a sufficient number of connections to its dendrites. This brings forth another important fact:

the more neurons a particular neuron projects to, the more neurons it can influence. However, the more neurons it projects to, the smaller its influence on each.

This allows for any variety of specialized, localized, niche, or general communication between neurons. One can readily imagine encoding a neuron's firing as a binary sequence that is probabilistically reliant on the messages/stimuli it receives. As such, the brain is wired in networks of convergent & divergent signaling.

Notably, the threshold used by each neuron, which initiates the transition from resting to action potential, is adaptive & changes over time. This threshold is thus a function of hormones, experience & other biological factors.
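
As a rough sketch of this convergent/divergent wiring, assuming a toy rule that a neuron's influence is split evenly across its outgoing connections & that firing is a probabilistic (sigmoid) function of total stimulation relative to its threshold:

```python
import math

# Toy network topology: each neuron projects to several others. The more targets
# it has, the smaller its influence on each (its output is split across its
# outgoing edges). Firing is probabilistic in the total stimulation received,
# relative to a per-neuron threshold. All structure and numbers are assumptions.

edges = {                 # neuron -> the neurons it projects to
    "A": ["C", "D"],      # A diverges: its influence is split in two
    "B": ["C"],           # B converges all of its influence onto C
    "C": ["E"],
    "D": ["E"],
}
threshold = {n: 1.0 for n in "ABCDE"}   # could be nudged up after each firing
                                        # to model the adaptive threshold

def fire_probability(stimulation, thresh):
    """Sigmoid: the further stimulation sits above the threshold,
    the more likely the neuron is to fire."""
    return 1.0 / (1.0 + math.exp(-4 * (stimulation - thresh)))

def stimulation_from(active):
    """Total input each neuron receives when the `active` neurons fire."""
    stim = {n: 0.0 for n in threshold}
    for source in active:
        targets = edges.get(source, [])
        for target in targets:
            stim[target] += 1.0 / len(targets)   # divergence dilutes influence
    return stim

stim = stimulation_from({"A", "B"})
for n in "CDE":
    print(n, stim[n], round(fire_probability(stim[n], threshold[n]), 2))
# C receives convergent input from A and B (1.5) and is very likely to fire;
# D receives only half of A's diluted output (0.5) and is unlikely to.
```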

2.4 Communication Tracks

Recall those glial cells that we conveniently ignored: some of them play an important role in the transmission of neurological signals. Figure 4 depicts a box-like cellular structure around the neuron's axon. This is a special type of glial wrapping called the myelin sheath. This process of ‘myelination’ strengthens the connections between neurons, allowing messages to be transmitted more readily — improving communication & thus correlation between neurons.

Given this dynamic, adaptive, myelinated structure, one can begin to imagine how computation changes the anatomy of the brain, which in turn changes the computational routes. Thus a cyclical, positive reinforcement mechanism emerges.
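
One way to picture that feedback loop, purely as a toy assumption, is to treat myelination as a weight on each connection that grows whenever the connection is used, so frequently travelled routes transmit more strongly & are more likely to be used again:

```python
# Sketch of the structure <-> computation feedback loop: every time a signal
# travels a connection, that connection is "myelinated" a little (its weight
# grows), which makes the same route more effective next time. Illustrative only.

weights = {("A", "B"): 0.5, ("A", "C"): 0.5}   # connection strengths

def transmit(path, gain=0.1):
    """Send a unit signal down a connection; strengthen it for next time."""
    delivered = weights[path]
    weights[path] = min(1.0, weights[path] + gain)   # use strengthens the route
    return delivered

for _ in range(3):
    transmit(("A", "B"))        # the A->B route gets exercised repeatedly

print(weights)   # A->B is now stronger than A->C: the anatomy has changed
```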

Finally, consider the size of the human brain. Given its complex, malleable structure, the permutation space of co-dependencies that the network is capable of is astounding. Leading research suggests that each neuron has around 10,000 dendritic & axonal connections. With approximately 100 billion neurons in the brain, it’s no wonder that this computational machine is capable of emergent consciousness & intelligence — a scale that should resonate with the computational science community.

2.5 Neurotransmitters & Neuropharmacology

Notice in Figure 4 that the axon terminal of the first neuron does not actually touch the dendrites of the second. This is not a flaw in the imagery: neurons do not touch, but instead communicate their signal across a synaptic gap. Neurotransmitters carry the signal from one neuron to another, allowing the message to pass. This allows for an added dimension of complexity in our network topology.

Figure 4. Neuron Anatomy.

Neurotransmitters transmit signals via synaptic uptake - a synapse being the gap between axon terminals & neighbouring cells' dendrites (again visible in figure 4). Neurotransmitters have a particular structural form & are thus only received by dendritic receptors of the corresponding structure. Since only certain neurotransmitters attach to a given receptor, the structure itself encodes information about the type of neurotransmitter (& thus the message).

Notably, this allows a neuron to send inhibitory neurotransmitters - hyperpolarizing the receiving neuron, decreasing its charge & thus decreasing the likelihood that it will fire.

It also allows for a great deal of specialization, as different messages are encoded in the structure of various neurotransmitters. This is, in large part, how neuropharmacology works - flooding the bloodstream with artificial, structural neurotransmitter substitutes that either block or bind to specific receptors. Thus, in turn, passing on the relevant encoding & forcing the brain to respond in the necessary fashion.
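
A rough lock-and-key sketch of this idea (the transmitter names are real, but the "shapes", effect sizes & blocking mechanism are simplified placeholders):

```python
# Lock-and-key sketch: a receptor only responds to a transmitter whose "shape"
# matches, transmitters carry an excitatory (+) or inhibitory (-) effect, and a
# blocking drug can occupy a receptor so the natural transmitter has no effect.
# Shapes and effect sizes are invented for illustration.

transmitters = {
    "glutamate": {"shape": "G", "effect": +1.0},   # excitatory
    "GABA":      {"shape": "B", "effect": -1.0},   # inhibitory
}

receptors = [
    {"shape": "G", "blocked": False},
    {"shape": "B", "blocked": False},
]

def release(name, receptors):
    """Return the change in the receiving neuron's charge for one release."""
    t = transmitters[name]
    delta = 0.0
    for r in receptors:
        if r["shape"] == t["shape"] and not r["blocked"]:   # the key fits the lock
            delta += t["effect"]
    return delta

print(release("glutamate", receptors))   # +1.0 : excites the receiving neuron
print(release("GABA", receptors))        # -1.0 : inhibits it

receptors[0]["blocked"] = True           # a drug occupies the matching receptor
print(release("glutamate", receptors))   #  0.0 : the natural signal is blocked
```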

As an illustration, Prozac - the modern antidepressant of choice - does exactly this with serotonin. If a more grim example is to your liking: some Amazonian tribes use curare, a potent component of the poison in which they douse their darts, to block the acetylcholine receptors of their victims. These particular receptors drive the contraction of the diaphragm, so blocking them causes one to stop breathing.

3. Neurological Complexity

Now that we have a basic framework for neurological activity, one can begin to imagine how neurons can couple to produce vastly complex behaviours.

First, recall that some neurons pass on inhibitory messages (decreasing the likelihood of action potentials). As such, if two neurons, say A & B, synaptically precede neuron C, & A passes a positive charge to C whilst B passes a negative charge, B is said to have a neuromodulatory effect on A.

The brain is built on great contrasts, & this conflicting nature of neurological communication amplifies their effect.

Someone coming from a computational background can think of this in terms of probabilistic logic gates.
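
Here is one way to cash out that analogy, as a sketch rather than a claim about real neurons: a unit that sums weighted excitatory & inhibitory inputs & fires with a sigmoid probability around its threshold behaves, with the weights chosen below, like a noisy "A AND (NOT B)" gate:

```python
import math, random

# Probabilistic logic-gate sketch: a neuron sums weighted excitatory (+) and
# inhibitory (-) inputs and fires with a probability given by a sigmoid around
# its threshold. With the weights below it behaves like a noisy "A AND (NOT B)".
# Weights, threshold and sharpness are illustrative assumptions.

def fires(a, b, w_a=1.0, w_b=-1.0, threshold=0.5, sharpness=10.0):
    drive = w_a * a + w_b * b - threshold
    return random.random() < 1.0 / (1.0 + math.exp(-sharpness * drive))

random.seed(1)
for a in (0, 1):
    for b in (0, 1):
        rate = sum(fires(a, b) for _ in range(1000)) / 1000
        print(f"A={a} B={b} -> fires {rate:.0%} of the time")
# Only A=1, B=0 drives the neuron above threshold most of the time.
```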

Additionally, consider the permutational space: the number of possible edges in our graph (dendritic & synaptic connections) grows rapidly with the number of nodes (neurons). Thus understanding the brain as a network/graph — even in a static state — provides some insight into how vast its capabilities are.

Neurological communication can thus be thought of as circuits, passing negative & positive charge to form complex conditional statements. Here we provide two examples of neurological circuitry for illustration.

4. Two types of Pain

To illustrate how effective circuitry can produce specialized behaviour, consider two types of pain:

Figure 5: Adaptive Pain Circuitry.

Consider the circuit presented in figure 5. This elegantly displays why we feel two distinct types of pain. Neuron A’s dendrites lie just under the skin — its action potential responds to painful stimuli. Neuron A stimulates neuron B, letting your body know you’ve experienced something painful. Neuron A also, however, stimulates neuron C, which sends an inhibitory message to B. The result? Neuron B fires for a short while but is briskly silenced — you feel a sharp pain (like the prick of a needle).

Neuron D also lies dendritically under the skin; however, it responds to slightly different pain stimuli. When an action potential is passed from D to B, D also inhibits C. So unlike the sharp pain felt when A was activated, D causes a throbbing, longer-lasting sensation (like a burn).
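
A toy, time-stepped simulation of this circuit (the connection strengths, delays & stimulus durations are invented) makes the two pain profiles visible: when A is stimulated, B's signal is cut short by C; when D is stimulated, C is suppressed & B keeps firing:

```python
# Toy simulation of the pain circuit in figure 5. A excites B (the pain signal)
# and also excites C, which inhibits B one step later -> a brief, sharp pain.
# D excites B but inhibits C, so nothing silences B -> a lasting, throbbing pain.
# Connection strengths, delays and durations are invented for illustration.

def run(source, steps=8):
    """Return B's firing trace (1 = pain signal) for a sustained stimulus at `source`."""
    b_trace = []
    c_active = 0                                   # did C fire on the last step?
    for t in range(steps):
        a = 1 if source == "A" and t < 5 else 0    # sustained sharp-pain stimulus
        d = 1 if source == "D" and t < 5 else 0    # sustained throbbing-pain stimulus
        excite_b = a + d                           # A and D both excite B
        b = 1 if (excite_b > 0 and not c_active) else 0   # C silences B when active
        c_active = 1 if (a and not d) else 0       # A excites C; D inhibits it
        b_trace.append(b)
    return b_trace

print("A (sharp):    ", run("A"))   # B fires once, then C shuts it down
print("D (throbbing):", run("D"))   # B keeps firing for as long as D does
```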

This simple example is indicative of the types of complex behaviour even simple circuitry can achieve. Now consider how this scales: how the applications grow with the nodes & connectivity of the network. Great specialization & abstraction can undoubtedly be achieved.

5. Creative Circuitry

Now we introduce a fundamental property of the brain: abstraction. Suppose, referring to figure 6, neurons 1 through 5 fire when exposed to the connected images. Neurons 1 & 5 can learn specialized functions, reacting to a particular image/pose. What do neurons 2 through 4 learn? They abstract. They may learn a representation of a Victorian man, or of man in general. They encode abstract concepts — a necessary component of complex cognition.

Figure 6: Concept Abstraction.

Neuron 3 is said to be the convergent centre of this network. As such, it is the most general, allows for the greatest concept abstraction & is the recipient of many peripheral elements.

This introduces the concept of associative networks, allowing neurons to learn correlations & dependencies between seemingly independent concepts.
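
A minimal sketch of that idea, with invented feature detectors & an invented threshold: peripheral neurons each respond to one specific feature, whilst a convergent neuron fires whenever enough of them agree, thus responding to the abstract category rather than to any single image:

```python
# Sketch of concept abstraction: peripheral neurons each detect one specific
# feature of an image, while a convergent neuron fires whenever enough of its
# inputs agree -- responding to the abstract category ("a face") rather than to
# any one image. Feature names and the threshold are invented for illustration.

peripheral = {
    "eyes":  lambda img: "eyes" in img,
    "nose":  lambda img: "nose" in img,
    "mouth": lambda img: "mouth" in img,
    "hat":   lambda img: "hat" in img,
}

def convergent_fires(img, threshold=2):
    """The convergent neuron fires if at least `threshold` peripherals fire."""
    votes = sum(detector(img) for detector in peripheral.values())
    return votes >= threshold

print(convergent_fires({"eyes", "nose", "mouth"}))   # True  : an ordinary face
print(convergent_fires({"eyes", "mouth"}))           # True  : a distorted, Picasso-style face
print(convergent_fires({"hat"}))                     # False : too far from 'face'
```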

As a final consideration: the spread, density & nature of our individual associative networks play a large role in defining us. How we think is largely a byproduct of them.

For example, suppose I ask you to think of a face.

Any number of illustrations might come to mind. No conscious effort is given; the mind simply derives meaning based on its neurological wiring. Any number of images (human, animal, abstract symbols, etc.) could resemble a face. We are not able to locate exactly where, but there is a point at which sensory input strays too far from what we can comfortably call a face.

Now, what about the face depicted in the Picasso piece shown in figure 7? Picasso was no stranger to painting abstract, far from ordinary, faces.

One might say these images were outside of the normal domain of thought that defines what a face ought to be. So what might the consequence of an atypically wide associative net of neurons be? Perhaps, creativity.

Final Thoughts

We outlined a way of thinking about the brain as a:

  1. Complex Adaptive System (CAS)
  2. Hierarchical graph/network
  3. Bayesian network/probabilistic logic gate circuit

These foundational components allow one to consider how abstraction, creativity & specialized computation might emerge from simple constituent parts operating at scale. We also considered how computation & physical topology are interconnected, reliant on each other & forming messy positive feedback loops.

In Part 2 I’ll detail how neuroanatomy & neuropharmacology can be used to think about psychological learning theory.

Many computational science types are deeply inspired by the complexity of biological systems, myself included. I hope to contribute to closing the gap between these two beautiful disciplines.
