Making Sense of Sentience

NeuroTechX Content Lab
19 min read · May 18, 2024

Our brains give each of us the gift of experience, through which we make sense of the world. Our experience of the world also guides the actions that we perform within and on our environment. It’s not an exaggeration to state that, without sentience, nothing matters. Efforts to restore, enhance, and modify experience rightly flow from its importance and are, of course, major aims of neurotechnology. In a similar vein, it’s hoped that tools provided by neurotechnology will help us to understand how sentience is related to objective measures of brain activity, and to determine whether experience is possible for an AI.

What do we have to know about sentience in order to realize these goals? Are we presently able to bridge the chasm between objective descriptions of neural activity and subjective qualities of experience? Asking these questions has led us to a testable theory of the aspects of experience which are provided by cortical prosthetic vision. It has also led us to the conviction that we can accelerate progress in the field of neurotechnology and create AI more responsibly by making specific aspects of sentience a focus of R&D.

Adding a new approach to sentience

We want to be clear in our definitions: to be sentient is to have the capacity to sense, to have subjective experience. We begin with this definition because some writers include a great deal more in their use of the term. As humans, we are extremely familiar with sentience because the everyday reality that we know is composed of its various aspects. Unfortunately, that familiarity doesn’t seem to be very helpful when we try to understand how the qualities of experience can arise from physical processes in a brain.

The challenge is deepened by the breath-taking complexity of brain activity that is associated with sentience. For example, more than 50% of a person’s cortex is involved in vision. Neither our science nor our technology is currently able to deal with neural activities and their interrelationships at the numerous scales that may be involved in creating a visual experience. Obviously, things get even more complicated if the goal is to understand how an experience that includes other sensory modalities comes into existence. It’s not surprising that finding ways to test theories of consciousness has become an important research objective in its own right.

The primary visual cortex, Brodmann area 17. Image sourced from Wikipedia (public domain).

Fortunately, neurotechnology is often concerned with particular aspects of experience rather than with sentience as a general capacity. We think that research on specific aspects of sentience should be included in applications of neurotechnology that aim to restore, modify, or enhance those aspects. It’s our hope that focusing on specific aspects of experience will make both theory building and empirical testing more frequent and practical.

If we are to make progress, we will need theories that relate models of individual aspects of experience to objective neural activity. It therefore seems prudent to begin with aspects of experience that are readily amenable to modeling. Theories that use these models will be useful if they allow us to make predictions that can be tested using currently available technology.

Creating the required models and devising theories will require approaches and concepts that lend themselves to the aspect of experience that is the focus of research. It is for this reason that we do not adhere to a strictly computational and information processing approach. Of course, this does not mean that we should avoid using computer simulations where they are feasible. Simulations of neural networks can be extremely useful, especially when we don’t even know if a biological network that is thought to play a particular role is confined to a single area of cortex or is distributed over several different areas.

Each of the preceding considerations plays an important role in what follows. We’ll begin with a first-person approach to sentience in order to emphasize how out of place it seems in a brain.

The mystery of sentience and its consequences

In order to appreciate why sentience seems so mysterious, it’s very useful to think about a real-life situation. So, please imagine yourself in the following scene (in which one of the authors recently found himself):

You’re concentrating on your experience of a row of tiny letters as the optometrist asks, “Which is better, option 1”, … (she changes lenses to alter refraction of the light entering your eye) …, “or option 2?” You don’t see much difference at all. You’re easily able to identify the letters in both cases. But the contrast between the black letters and white background was just a bit sharper in option 1 than it was in option 2. You would rather see the world the way that option 1 presented it to you, and you respond, “option 1 is better”.

The practical value of this common comparison procedure depends on two features of vision. One feature is vision’s capacity to represent the environment in the form of experience. The optometrist was relying on your subjective experience of the eye chart to determine which of the two lens options does a better job in correcting the refractive errors in one of your eyes. She also relied on its other feature, which is its capacity to make information about your experience available to her. Both features were on display when you said that option 1 is better than option 2.

Let’s delve into this a bit more deeply. Objectively, vision consists entirely of neural processes. The role that visible light plays in vision ends in the retina. This very complex structure is layered neural tissue that winds up lining the interior surface of the back of the eye during embryonic development. Creating the content of visual experience begins with the transduction of light energy into changes in electrical potential, a process that’s carried out by the rod and cone photoreceptors in the retina. From this point onward, vision is a neural process that we have learned about by making measurements of quantities such as differences in electric potential across the cell membranes of neurons, and ionic currents that pass through special channels in those membranes. All of this holds as well for other forms of sensation, each of which begins with the modulation of potentials and ionic currents by some form of energy at an interface of the nervous system and its environment.

Artist’s impression, image generated by AI

We can and do talk a great deal about vision both in the everyday language of subjective experience and in the scientific language of objective measures of neural processes. Clearly, the descriptions of visual events that are provided by these languages must be related in certain ways. There must be a way to translate from each language to the other, from “measurements of potentials and ionic currents that are produced by output from the retina”, to “your experience of the contrast between the black letters and white background”, to “measurements of potentials and ionic currents which produce your report that option 1 is better”. The mystery only appears when we actually attempt to comprehend how there can possibly be such a translation.

Obviously, we won’t understand anything if we abandon our knowledge of the objective physical world. For example, we can’t make any assumptions that violate the principle of conservation of energy, or any other physical principles.

Nor can we wish the mystery away by suggesting that experience is a hallucination, that experience doesn’t exist, or that experience is just an inconsequential side-effect of neural activity. These viewpoints are of no help to any individual who suffers from any form of sensory impairment. They are equally unhelpful to those who are attempting to develop useful prosthetic devices and to those who are attempting to determine if an AI is capable of any form of sentience. Understanding the sensory experience is paramount in each of these cases.

The importance of vision among other forms of sentience demands that we think about the consequences of not understanding relationships between objective measures of neural quantities and aspects of experience. One consequence is that our attempts to restore, enhance, or modify experience rest on data that ultimately terminate in unexplained correlations between neural processes and experience. The use of deep brain stimulation as a treatment for depression, the treatment of psychiatric disorders with medications, and the attempt to produce a useful and safe visual prosthesis through electrical stimulation of visual cortex are all clinical interventions that rest on such unexplained correlations, and there are many more examples.

The unexplained correlations also mean that no one really knows if an AI that does a good job in mimicking human behavior experiences anything. For example, a recent proposal for assessing whether an AI creates conscious experience is based on indicators that were culled from theories of consciousness. A methodical approach to explaining and formalizing sentience is crucial to addressing these important questions. Consider how these efforts might be refined if we were able to explain how neural processes are related to an aspect of experience that we wish to restore (or to alter or to detect) with the same detail and accuracy that is provided by our explanation of the mechanisms underlying the action potential. That kind of refinement should allow us to tailor deep brain stimulation to the needs of individual patients, to avoid undesirable “psychoactive effects” of psychiatric drugs, and to improve the quality and utility of cortical prosthetic vision. It should allow us to include or to exclude specific types of sentience in an AI.

We want to share our personal reaction to not having this kind of explanation. We find it intolerable, and we hope that you do, as well. The progress made in neuroscience in the past two decades alone is astonishing. A very impressive record has also been produced by research that is focused specifically on perception. Why is it that we have not been able to turn what is presently a mystery into an assortment of scientific and engineering problems? What first step can be taken that will put us on a path to understanding?

We can begin by redefining the mystery in scientific terms: it’s the absence of a theory that explains the physical significance of any given aspect of experience. “Physical significance” is our shorthand for how that aspect of experience is related to the concepts that we use in measuring, modeling, and simulating properties of neurons and their interactions. If we can produce a theory that provides such an explanation, and if that theory is logically consistent and makes testable predictions, then it will provide answers to important questions, such as:

“How can this particular quality of experience arise from — and affect — well-understood neural processes?”

Furthermore, if predictions that follow from the theory are consistently confirmed, and if the theory suggests novel ways of restoring, enhancing, or modifying that particular aspect of experience, then it will be a useful addition to the tool-kit of neurotechnology.

For several reasons, explained below, we have devised a theory of the aspects of experience that are produced by cortical prosthetic vision. We’ll begin by illustrating what existing concepts tell us, again using vision to provide examples.

What we learn from existing concepts and data

Our theory of cortical prosthetic vision is based on a practical, useful, and largely ignored perspective on the nature of experience and on the roles that different types of neural networks are likely to play in creating and reporting on aspects of experience. When these two sources of information are considered together, it’s possible to produce a novel and empirically testable explanation of the relationship between a specific aspect of experience and certain neural quantities.

Arthur Eddington. Image sourced from Wikipedia (no known restrictions on publication).

The British physicist and philosopher of science Arthur Eddington, most famous for his confirmation of the theory of general relativity, wrote in The Nature of the Physical World that our scientific knowledge of physical phenomena rests strictly on measurements of the quantities that describe those phenomena: theories are devised and physical laws discovered from data and methodical abstractions, not by relating phenomena to everyday experience. On the subject of mental phenomena, Eddington wrote that it would be “rather silly” not to attach them to a framework that provides the necessary condition for their existence, and then to wonder where they come from.

In 1996, Piet Hut and Roger Shepard took a large step toward providing such a framework. Piet Hut is a theoretical astrophysicist who is well known for his interdisciplinary collaborations, and Roger Shepard was a cognitive psychologist who is considered to be ‘a father of research’ on spatial relations. Hut and Shepard published a paper in the Journal of Consciousness Studies which was highly critical of what they termed the “standard scientific approach” to the problem of conscious experience. They compared trying to understand conscious experience in the absence of a background that makes its existence possible to trying to understand motion in the absence of a time background. Hut and Shepard concluded that we should accept the existence of a ‘sense’ background that is on a par with time and space.

Accepting a sense background is consistent with Eddington’s advice and provides a definite direction for our efforts to develop a theory of a specific aspect of experience. In order to accomplish this task, we need to model aspects of experience in ways that allow us to include them in the systems of equations that describe neural networks. Feedforward neural networks seem particularly well suited to providing information on features of the environment that can be used to construct an experienced representation of that environment. They also seem well suited to communicating information on aspects of experience from one network to another. Certain richly interconnected neural networks seem equally well suited to creating the representation. The numerous interactions that occur in a richly interconnected network produce sustained dynamics which could underlie patterns that persist over time.

There is also evidence that is consistent with the possibility that richly interconnected networks are directly involved in creating patterns that are not revealed by objective measurements. For example, one of the authors of the present article once simulated a network in which 1089 excitatory and 121 inhibitory neurons were assigned positions in a fictitious two-dimensional lattice. The strength of each synapse in the network was set equal to the value of a function of the distance between the positions of the interacting neurons on the lattice. Remarkably, more than 99% of the variation in the magnitudes of the effects produced by all 1089 excitatory neurons on all of their targets was accounted for using only two dimensions. The network behaved as though it existed in a two-dimensional world that determined the strengths of synaptic interactions. If synaptic strengths depended on distances in a fictitious three-dimensional lattice, then three dimensions were required to account for the effects of each neuron on all of its targets. Can we use this information together with the notion of a sense background to understand how an aspect of experience can be related to objective measures of neural activity?
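
To make this concrete, here is a minimal Python sketch of the kind of analysis just described. It is not the original simulation code: the Gaussian fall-off of synaptic strength with lattice distance and the use of classical multidimensional scaling to count dimensions are assumptions we make purely for illustration.

```python
import numpy as np

# 1089 excitatory neurons assigned positions on a 33 x 33 fictitious 2-D lattice
side = 33
xs, ys = np.meshgrid(np.arange(side), np.arange(side))
positions = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)   # (1089, 2)

# Pairwise lattice distances, and synaptic strengths as a decreasing
# function of distance (a Gaussian fall-off is assumed here)
dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
sigma = 4.0
strengths = np.exp(-dists**2 / (2 * sigma**2))

# How many dimensions are needed to account for the pattern of effects?
# Convert strengths back into dissimilarities and apply classical
# multidimensional scaling (MDS).
dissim = np.sqrt(-2 * sigma**2 * np.log(np.clip(strengths, 1e-12, None)))
n = dissim.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (dissim**2) @ J                        # double-centred Gram matrix
eigvals = np.clip(np.linalg.eigvalsh(B)[::-1], 0.0, None)
explained = eigvals / eigvals.sum()
print(f"Variation accounted for by two dimensions: {explained[:2].sum():.3%}")
```

Because the dissimilarities in this toy version are exact lattice distances, two dimensions account for essentially all of the variation, mirroring the result described above; a three-dimensional lattice would require three.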

Putting these ideas to work

Neurons in your visual cortex receive a tremendous amount of information from numerous sources regarding distances between points in the environment, and between you and points in the environment, via feedforward networks. (One or more chapters in sensation and perception textbooks are typically devoted to descriptions of these sources of information.) Suppose that there is a system of synapses in a richly interconnected neural network in your cortex with strengths that are all a function of a neural quantity that is based on this information.

We’ll use a graphic to establish a connection between this system of synaptic interactions and one specific aspect of your visual experience. The graphic is intended to capture what you might see if you focused on the center of the image: your ability to see detail is greatest where your gaze is directed, and falls off rapidly as distance from that point increases. Now let’s imagine that we strip away everything in your experience of the graphic except for its geometric structure. This structure can be imagined as an invisible fabric which forms the shape of each visual object and the background in which it appears. It consists of geometric points that are contained within visual regions that comprise the shapes of boats, buildings, and the background. It also includes distances and directions between pairs of those points, and between your location and those points. Each direction-distance pair can be thought of as a geometric vector that begins at one point and terminates at another, and provides your sense of direction and distance within visual space. The visual regions are smallest at the location of the boat where your gaze is centered, and become progressively larger as distance increases from that location.

We can now return to the richly interconnected neural network in your cortex with synaptic interaction strengths that are all a function of a neural quantity that is based on information about distances in your visual environment. As neurons throughout the network become active, a single mapping from visual distance to the strength of each synaptic interaction comes into existence. Having accepted the existence of a sense background, we suggest that each interaction thereby engages a set of visual geometric vectors having a common magnitude, and that the actively interacting neurons engage visual regions in which the vectors in this set begin and terminate.

In other words, we are suggesting that all of the information about distances is integrated by a persistent pattern with both spatial and sensible aspects. The spatial aspect consists of cortical regions occupied by objectively interacting neurons and their strengths of synaptic interaction. The sensible aspect consists of the geometric structure of your visual experience. (This provides an answer to the question, “How can this particular quality of experience arise from well-understood neural processes?”.)

If the sensible aspect of the pattern is a faithful representation of the geometry of the immediate environment, then the strengths of synaptic interactions also vary as a function of distance in that immediate environment. Furthermore, a change in the geometry of the environment can produce changes in the strengths of some synapses via feedforward networks, and these changes will cause the distribution of the frequencies of spikes that are produced by network neurons to change as well. In this way, spike frequency distributions can carry information about, and report on, a subjective representation of the world. (This provides an answer to the question, “How can this particular quality of experience affect well-understood neural processes?”.) If behavior that is guided by this information leads to an error, such as a misjudgment of distance, then this error can be corrected by changing the value of a network quantity which maps to synaptic strengths.
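
The reasoning in the last two paragraphs can be illustrated with a deliberately simple rate-based sketch, not the actual network described by the theory: synaptic strengths are set by a function of distances between points in a simulated visual environment, and a change in that geometry changes the steady-state distribution of firing rates, the objective signal that could report on the change. The network size, the Gaussian strength function, and the rate equation below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def strengths_from_distances(points, sigma=1.0):
    """Synaptic strengths as a decreasing function of pairwise visual distance."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = np.exp(-d**2 / (2 * sigma**2))
    np.fill_diagonal(w, 0.0)
    return 0.9 * w / np.max(np.abs(np.linalg.eigvalsh(w)))   # keep the dynamics stable

def steady_rates(w, drive, steps=200):
    """Iterate a simple rectified-linear rate equation to a near steady state."""
    rates = np.zeros(len(drive))
    for _ in range(steps):
        rates = np.maximum(0.0, w @ rates + drive)
    return rates

points = rng.uniform(0.0, 5.0, size=(40, 2))    # points in a 2-D "visual environment"
drive = rng.uniform(0.1, 1.0, size=40)          # feedforward drive to each neuron

rates_before = steady_rates(strengths_from_distances(points), drive)

# A change in the geometry of the environment: shift a cluster of points
points_moved = points.copy()
points_moved[:10] += np.array([2.0, 0.0])
rates_after = steady_rates(strengths_from_distances(points_moved), drive)

print("Mean absolute change in firing rate after the geometry changes:",
      round(float(np.mean(np.abs(rates_after - rates_before))), 4))
```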

Essentially, such neural networks combine form (the subjective representation of the environment) and function (information that guides behavior within that environment) in a single pattern. This pattern differs from other representational models by virtue of its sensible and spatial aspects and the relationships that exist between them.

A testable theory of the aspects of experience provided by cortical prosthetic vision

Our motivation for devising the explanation that was just outlined is a practical one. We want to produce a testable theory of the physical significance of an aspect of experience so that we can improve our efforts to restore, enhance, or alter that aspect of experience through neurotechnology. Our major reasons for choosing cortical prosthetic vision as a test case for such a theory are straightforward: the aspects of experience that it provides are easy to model, and testing a prediction of the theory could provide a way to improve the utility and quality of prosthetic visual experience.

The first cortical visual prosthesis was devised by John C. Button in 1957, and a dramatic demonstration of the utility of this simple device was published the following year in Radio Electronics magazine. A woman who had been blind for 18 years volunteered to have two stainless steel wires temporarily inserted into her primary visual cortex. When a photocell detected a light source, it activated a square-wave generator that delivered current through one of the wires and into her visual cortex. As current flowed into her cortex, the woman reported having an experience of a fuzzy bright spot, or phosphene.

As expected from the published results of brain stimulation research, the phosphene appeared at a region in the woman’s visual field that was determined by the location of the stimulated region of visual cortex. Using her experience of the phosphene together with her experience of proprioceptive feedback from the direction in which she pointed the photocell, the blind volunteer was able to walk through a hallway in which light sources were attached to obstacles.

A tremendous quantity of information about vision and the visual system has become available in the intervening decades, and experimental cortical prosthetic vision systems have become extremely sophisticated. However, the quality of cortical prosthetic vision has not improved very much. When one considers the poor quality and utility of the resulting visual experiences in conjunction with the surgical procedure that is required to implant electrodes, it isn’t surprising that a sample of individuals who are prospective recipients of retinal or brain implants recently indicated that the perceived risk of a cortical implant outweighs the perceived benefits to vision.

Researchers are very much aware of the need for improvements. It’s expected that training recipients, microelectrode arrays that can stimulate larger numbers of smaller regions in visual cortex, and other advances in technology will lead to significant increases in the utility of cortical prosthetic vision. One such advance is the use of biologically realistic simulations of phosphenes to optimize their appearance.
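
As a rough illustration of what a phosphene simulation involves (this is a crude sketch of our own, not one of the biologically realistic models just mentioned), each stimulated site can be rendered as a Gaussian blob whose size grows with eccentricity. The coordinates, blob sizes, and brightness values below are assumptions.

```python
import numpy as np

def render_phosphenes(electrodes, field_size=64, extent_deg=10.0):
    """Render a toy phosphene map over the visual field.

    electrodes: list of (x_deg, y_deg, brightness) for stimulated sites,
    expressed in visual-field coordinates.
    """
    axis = np.linspace(-extent_deg, extent_deg, field_size)
    X, Y = np.meshgrid(axis, axis)
    image = np.zeros((field_size, field_size))
    for x, y, brightness in electrodes:
        eccentricity = np.hypot(x, y)
        sigma = 0.5 + 0.2 * eccentricity        # blobs grow away from fixation
        image += brightness * np.exp(-((X - x)**2 + (Y - y)**2) / (2 * sigma**2))
    return np.clip(image, 0.0, 1.0)

# Toy example: three stimulation sites at increasing eccentricity
phosphene_map = render_phosphenes([(0.5, 0.0, 1.0), (3.0, 2.0, 0.8), (-6.0, -4.0, 0.7)])
print("Rendered map shape:", phosphene_map.shape, "peak:", round(float(phosphene_map.max()), 2))
```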

A useful theory — that explains how phosphenes and the visual space which they occupy are related to quantities in a richly interconnected neural network — could supplement these efforts, by predicting novel ways of creating desired patterns of phosphenes. In particular, it might provide a novel stimulation protocol that would allow us to change the typical patterns of phosphenes shown in the top three panels of the image above into those shown in the bottom panels.

It isn’t difficult to construct models of phosphenes and of the visual space that they occupy. The visual space consists of a visual geometry which includes visual regions that correspond to the columns of visual cortex, and visual directions and distances (vectors) connecting points within and between regions. The experience of a fuzzy phosphene on a region can be represented by an interval of lightness values that are considerably larger than those found in intervals that denote dark regions. Differences in lightness can be modeled by real numbers that correspond to frequencies of action potentials in lightness channel neurons.
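
One way to see how simple such models can be is a small sketch like the following, in which each visual region corresponds to a cortical column and its lightness is a function of the firing rate of its lightness-channel neurons. The class names, the linear rate-to-lightness mapping, and the phosphene threshold are our illustrative assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VisualRegion:
    """A visual region corresponding to one cortical column."""
    center: tuple              # position in visual space (degrees)
    spike_rate_hz: float       # rate of its lightness-channel neurons

    def lightness(self, max_rate_hz: float = 200.0) -> float:
        """Map spike frequency to a lightness value in [0, 1] (assumed linear)."""
        return min(self.spike_rate_hz / max_rate_hz, 1.0)

def phosphene_regions(regions, threshold=0.5):
    """Regions whose lightness is high enough to appear as phosphenes."""
    return [r for r in regions if r.lightness() >= threshold]

# Toy example: a small grid of regions, two of which are driven strongly
rng = np.random.default_rng(2)
regions = [VisualRegion(center=(x, y), spike_rate_hz=float(rng.uniform(5, 30)))
           for x in range(3) for y in range(3)]
regions[1].spike_rate_hz = 180.0    # strongly driven region -> bright phosphene
regions[7].spike_rate_hz = 150.0

for r in phosphene_regions(regions):
    print(f"Phosphene at {r.center}, lightness {r.lightness():.2f}")
```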

A richly interconnected neural network with two systems of synapses should be able to create the lightness intervals on the visual geometry. Simulations of such a network have produced very promising results: evidence of a critical point at which visual geometry and lightness interval patterns are expected to appear, evidence that spike frequency distributions carry information on the expected number and distribution of phosphenes, and classification of spike frequency distributions consistent with the expectation that changing some synaptic strengths can alter activity in visual regions.

These positive results motivated us to propose a means by which a neuromorphic device — that is, a device that is composed of neuron-like electronic components — can alter the sizes and shapes of phosphenes as shown in the bottom three panels of the graphic above by creating a second visual geometry. Once again, simulations provide positive results. We hope that noninvasive empirical tests of such a neuromorphic device with human volunteers will be conducted in the near future. Volunteers can then tell us if the device alters an aspect of their experience as we expect it to.

Where to go from here

Simulations have shown that information on the distribution of phosphenes over visual regions is carried by spike frequency distributions. This suggests that a simple feedforward neural network can be trained to make this information explicit: neurons in the output layer that correspond to visual regions where a phosphene appears must fire at frequencies that are much higher than those of neurons that correspond to dark visual regions. Successful simulations must be followed by empirical testing of predictions with human volunteers using noninvasive recording and stimulation methods. Obtaining reports on experiences from human volunteers is required for a critical test of any theory that claims to explain how an aspect of experience is related to objective measures of neural activity.
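
A hedged sketch of this decoding idea, using synthetic data rather than the actual simulation outputs, is shown below: a small feedforward network is trained so that output units for regions containing a phosphene fire high while units for dark regions stay low. The feature dimensions, the way the synthetic spike-frequency distributions are generated, and the network size are all our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions, n_features, n_samples = 9, 32, 2000

# Synthetic training data: each label vector marks which of nine visual regions
# contains a phosphene; a random linear mixing of the labels plus noise stands
# in for the spike-frequency distribution produced by the simulated network.
labels = (rng.random((n_samples, n_regions)) < 0.3).astype(float)
mixing = rng.normal(size=(n_regions, n_features))
features = labels @ mixing + 0.3 * rng.normal(size=(n_samples, n_features))

# One hidden layer, trained with plain batch gradient descent on cross-entropy
W1 = 0.1 * rng.normal(size=(n_features, 64)); b1 = np.zeros(64)
W2 = 0.1 * rng.normal(size=(64, n_regions));  b2 = np.zeros(n_regions)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for _ in range(300):
    hidden = np.tanh(features @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)                  # per-region phosphene probability
    grad_out = (out - labels) / n_samples            # gradient of mean cross-entropy
    grad_hidden = (grad_out @ W2.T) * (1.0 - hidden**2)
    W2 -= lr * hidden.T @ grad_out;    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * features.T @ grad_hidden; b1 -= lr * grad_hidden.sum(axis=0)

# Output units for regions containing a phosphene should now fire "high"
hidden = np.tanh(features @ W1 + b1)
out = sigmoid(hidden @ W2 + b2)
accuracy = ((out > 0.5) == labels.astype(bool)).mean()
print(f"Per-region phosphene detection accuracy (training data): {accuracy:.2%}")
```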

Applications to other aspects of experience require knowledge of the cortical representation of relevant features of the environment and the construction of models of those aspects of experience. The sensation of pressure on the skin and the representation of the body surface in primary somatosensory cortex suggest that a model of pressure might be similar to the lightness distribution model of cortical prosthetic vision. The representation of sound frequency in auditory cortex and the relationship between objective sound frequency and subjective pitch provide starting points for modeling this aspect of experience. However, modeling pitch is not straightforward. Much less information is available regarding the cortical representation of aspects of experience such as odor. This may be an area of research in which AI can make significant contributions. We expect that modeling many aspects of experience, even within the visual, somatosensory, and auditory systems, will require tools provided by areas of mathematics with which most of us are not yet familiar.

Ironically, applications to some AI software may require much less information. If similarity to human experience is not an issue, then both the representation of relevant features of the environment and the model of an aspect of experience that is to be simulated can be designed into an AI system.

Those of us who wish to use neurotechnology to restore, modify, or enhance human experience need to stop working around the sentience problem. The excellent research that is being conducted on prosthetic devices and on brain-computer interfaces obviously must be continued. But these efforts should be supplemented by equally excellent efforts that are based on the physical significance of the targeted aspects of experience. Theoretical research, simulations, and the development of neuromorphic devices that focus on the physical significance of aspects of experience, all grounded in empirical testing, could be a game-changer for many individuals. It’s time for us to begin building more humanity into neurotechnology.

Written by Raymond Pavloski and Nikolaus Pavloski, with editing by Shubhom Bhattacharya and Lars Olsen.

Raymond Pavloski earned a PhD in psychology from McMaster University, conducted post-doctoral research in psychophysiology, and worked in a hospital-based Behavioural Medicine Unit prior to starting a lengthy career in academia. He co-founded and co-owns neurotech startup Inpsyphys.

Nikolaus Pavloski has worked in advertising, finance, and transportation. He co-founded and co-owns neurotech startup Inpsyphys.

Shubhom Bhattacharya is an engineer who works on a clinical trial for vision restoration for a startup in New York City.

Lars Olsen is a regulatory medical writer. He works in the pharmaceutical industry writing submission documents, and has additional experience with medical devices. He has a biology background and is interested in AI, AGI/ASI, and BCI/HCI.

NeuroTechX Content Lab

NeuroTechX is a non-profit whose mission is to build a strong global neurotechnology community by providing key resources and learning opportunities.