What do we really know about our human interface?
Today we are interconnected through high-capacity information networks, yet the final element in the chain is often a human mind. While we can precisely define the bit rates and interface specifications of our physical networks, it is nature and evolution that have determined the information-absorbing capacity of the human mind. We know the detailed specifications of every other part of the information chain, but the precise characteristics of our human interface have been largely ignored.
The idea of electronic connections to the mind is popular in both neuroscience and science fiction, and new ways of communicating to and from a human brain do hold great promise for overcoming physical disabilities. However, it is often suggested that bypassing our senses via direct connections to the brain will lead to superhuman capabilities. This is based on the misplaced assumption that our sensors which interface with the world are a bottleneck constraining our brain’s full potential.
We are learning ever more about our brain's behaviour and structure (as revealed in David Eagleman's excellent TV series The Brain). Even so, we still understand very little about how it actually works as a processor of information, so attempts to estimate its internal processing capacity are wildly speculative. What we can do, however, is apply a little rigorous information science to the question of our connectivity to the world around us. For example, we can seek to quantify the maximum bit rate of learning simply by measuring what goes in, and what comes out.
Although we are born with sufficient information to ensure our initial physical survival, the success of our species depends entirely on what we learn from the world around us throughout the rest of our lives, so it is vital that we understand the limitations of our ability to learn new information. Today we measure what people know using exams and quizzes, but these tell us nothing about the speed of learning. Instead, they assess what has already been learned, and the speed and efficiency of its retrieval from memory. We don’t record whether an exam grade was achieved through 5 hours of study or 5 years.
In 1948 Claude Shannon defined information in mathematical terms, and showed how it can be communicated in the presence of noise. What had been considered quite distinct modes of communication (the telegraph, telephone, radio and television) were then unified in a single theoretical framework. Though initially focused on methods of electrical and electronic communication, Shannon and others soon began to explore its implications across a wide range of fields. Psychologists used this new tool to characterise tasks such as listening to spoken words, reading, typing text and playing random notes on a piano, all in the ubiquitous measure of bits per second. Many were surprised at what they found:
“All the instances in the human organism that take part in processing messages seem to be designed to the upper limit of 50 bits/sec” - Karl Küpfmüller.
This figure seems alarmingly low compared with today’s electronic communication, yet it has received surprisingly little attention. It was easy to ignore. Our personal experience of perception is so rich that few could believe that we absorb so little through our senses. It was generally assumed that such low information rates only applied to our conscious processes and that we absorb information at a much higher rate subconsciously through all our senses.
It was easy to imagine that information pours in unnoticed to reside somewhere within our memory, information that we might access at a later time. This is a seductively attractive idea, allowing us to maintain the illusion that we are intimately and immediately connected with the present world around us even if our conscious mind is unaware of it. However, we can find no quantifiable evidence that humans can learn completely novel information faster than a few tens of bits per second. Although we can absorb information while our conscious attention is distracted, I can find no evidence that it ever adds up to more bits per second than can be achieved by giving something our full conscious attention. You may find the suggestion that we cannot learn at more than a few tens of bits per second profoundly shocking, especially compared with the information rates we expect from our broadband connections. So where is the evidence for this bold claim?
To measure the flow of information into our mind from the world around us, we cannot simply monitor what has been learned via electrical connections within the brain; we must rely on the subjects' use of language to report what has been learned. It is crucial that we only measure the completely novel component of that information. Our species' most powerful tool is our ability to predict based on previous experience. Very little of what we experience is completely new to us, so when we learn something we are already familiar with the greater part of the information arriving through our senses.
Any valid measurement of learning rate must either use completely unpredictable information, or information for which we can estimate the degree of redundancy. Shannon characterised the redundancy in language and estimated the true information rate to be around just one bit per character, far less than required to communicate random characters (this is what makes predictive text possible). When combined with measures of intelligibility versus speed, this suggests a maximum information rate of around 50 bits per second. Even this may be an overestimate, as language is highly predictable; we can get the message even when many “wrds or letrs are missng or mispeld”.
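Shannon's redundancy argument can be sketched with a little arithmetic. The one-bit-per-character figure is his estimate quoted above; the 26-letter alphabet is an assumption for illustration:

```python
import math

# A random character drawn from a 26-letter alphabet carries log2(26) bits.
raw_bits = math.log2(26)   # ≈ 4.7 bits per random character
english_bits = 1.0         # Shannon's estimate for real English text

# Redundancy is the fraction of the raw capacity that carries no new information.
redundancy = 1 - english_bits / raw_bits

print(f"{raw_bits:.1f} bits/char raw, redundancy ≈ {redundancy:.0%}")
```

Under these assumptions roughly four out of every five character-bits in English text are predictable rather than novel, which is why mangled spellings remain readable.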
To accurately determine any limit of human physiology, just make it a competition. The world's top physical athletes exhibit very similar performances, and so it is with mental athletes who regularly compete in carefully timed memory and maths competitions. Many of these events require the contestants to rapidly absorb a sequence of random symbols such as decimal and binary digits or playing cards, symbols which can be precisely characterised by the number of bits needed to define each one. These timed performances can therefore be expressed in bits per second, and successive symbols are completely unpredictable, so these world records provide ideal material for analysing learning rates.
However, the competitors are required to perform two tasks: taking in novel information, and memorising it (or performing a mental calculation and memorising the result). The longer the event duration, the harder the memory task, hence shorter events provide a more accurate measure of continuous learning rates. The record for memorising a single pack of cards is 21 seconds, and this corresponds to a learning rate of 14 bits per second. Similarly the record for memorising 560 decimal digits in 5 minutes corresponds to a learning rate of 6.2 bits per second.
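The bit rates above follow from simple arithmetic: each playing card is one of 52 possibilities (log2(52) ≈ 5.7 bits), and each decimal digit one of ten (log2(10) ≈ 3.3 bits). A minimal sketch, using the record times quoted in the text:

```python
import math

def learning_rate(symbols: int, bits_per_symbol: float, seconds: float) -> float:
    """Novel information absorbed per second, in bits."""
    return symbols * bits_per_symbol / seconds

# A 52-card pack memorised in 21 seconds.
card_rate = learning_rate(52, math.log2(52), 21)
# 560 random decimal digits memorised in 5 minutes.
digit_rate = learning_rate(560, math.log2(10), 5 * 60)

print(f"cards:  {card_rate:.1f} bits/s")   # ≈ 14.1 bits/s
print(f"digits: {digit_rate:.1f} bits/s")  # ≈ 6.2 bits/s
```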
Learning rates derived from memory contests are likely to be underestimates, as the learning process is slowed by the difficult task of memorising long sequences. More precise evidence comes from simple mental arithmetic contests, where only the running sum needs to be memorised. Here we find very similar rates of around 18 bits per second achieved by four different top competitors on four different occasions. Two of these events involved mentally adding 100 random decimal digits, while the other two involved adding ten ten-digit numbers, which took ten times as long. Remarkably, these four competitors performing in different events achieved learning rates within a few percent of each other. This suggests that we may be revealing a fundamental limit to the rate at which humans can absorb new information. Could this be a consequence of our shared brain physiology? Although we have very different thoughts and mental processes, they all run on almost identical biological hardware.
Some of the most convincing evidence for our learning bottleneck comes from measuring the speed of physical skills. It might seem odd that the speed of a physical action might be limited by the rate that we can learn information. Although a few physical skills can be performed with our eyes closed, most require us to monitor progress visually, and learning through conscious monitoring is a part of this process.
Paul Fitts of the Aviation Research Laboratory at Ohio State University had noticed that it took the same time to hand-write letters of the alphabet irrespective of their size. This prompted him to investigate a range of human skills and interpret the performance in bits per second. In one of these tests, competitive college students were asked to move a stylus back and forth between two metal strips as fast as they could without making errors. He investigated the effect of the degree of accuracy required by varying the width of the strips and their spacing. Crucially, he characterised the physical accuracy as the number of bits required to define the precision. So by measuring the maximum speed at which his subjects could carry out this repetitive task, he could quantify their performance in bits per second. He also investigated the contribution of muscle strength by varying the weight of the stylus used.
Despite all these variations the maximum performances were consistently around 10 bits per second. If physical ability alone determined the performance, he might have expected a 32-fold variation due to accuracy, a further 16-fold variation due to the stylus's weight, and wide variation between subjects. The constancy in bit rate suggests that we have an information-processing bottleneck that limits performance. Fitts himself observed that:
“…the performance capacity of the human motor system plus its associated visual and proprioceptive feedback mechanisms, when measured in information units, is relatively constant over a considerable range of task conditions”.
No one has adequately explained these results in terms of our body physiology, but the observations are now enshrined in Fitts’ Law (which predicts that the time to complete an action is proportional to the required accuracy expressed in bits). This rule has influenced the design and layout of the physical things we need to rapidly interact with, such as buttons on web pages or the controls in your car.
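Fitts' law can be sketched in a few lines. The index of difficulty of a pointing movement is log2(2D/W), where D is the distance to the target and W its width; what stays roughly constant is the throughput in bits per second. The 10 bits per second figure comes from the experiments described above; the specific distances and widths here are illustrative assumptions:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Bits required to specify the movement's precision (Fitts' index of difficulty)."""
    return math.log2(2 * distance / width)

def movement_time(distance: float, width: float, throughput: float = 10.0) -> float:
    """Predicted seconds per movement, assuming a fixed bits-per-second throughput."""
    return index_of_difficulty(distance, width) / throughput

# Halving the target width adds one bit of difficulty, so the movement slows:
print(movement_time(16, 2))  # ID = log2(16) = 4 bits → 0.4 s
print(movement_time(16, 1))  # ID = log2(32) = 5 bits → 0.5 s
```

This is why interface designers make frequently used targets large and nearby: fewer bits of precision required, faster interaction.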
However, the observations do make sense if we consider the crucial learning element of a monitored skill. To progress through the repetitive task, the subjects must learn that each step has been completed, and this requires them to interpret the two-dimensional image of the scene before them as a three-dimensional reality. It is this significant mental process that appears to limit their performance, not their physical fitness or strength, which varied between subjects.
These examples suggest that the maximum learning rate for novel information is the result of our mental processing, not limitations of our senses. This learning bottleneck appears to be constrained by the speed at which we can integrate what we sense into our internal model of the world. Of all Earth’s species, we hold the most complex internal world model yet we share similar brain physiology with other intelligent creatures. So paradoxically, we might expect the greater complexity of our model to make this process of integration slower, resulting in a narrower learning bottleneck in humans.
This might be surprising, but there is some evidence. Some very smart chimpanzees at the Primate Research Institute at Kyoto University in Japan have been trained to rapidly memorise numbers (1 to 19) flashed briefly upon a screen. A seven-year-old male called Ayumu has vastly outperformed the British memory champion Ben Pridmore in the same test. What to a chimp is merely an arbitrary symbol (“17”) is to us a number within a counting system to base ten, a much broader context. However, our human slowness to learn from the present moment is more than compensated by our prodigious ability to predict the future from our past.
The idea that we can only absorb a tiny trickle of novel information provides a scientific explanation for phenomena such as change blindness and inattentional blindness. When we focus our narrow learning bottleneck on one thing, we are inherently blind to other sensations.
This narrow learning bottleneck only applies to that part of what we sense that is completely new information. However, we are already very familiar with the vast majority of what we encounter through our senses. So most of the time we are recognising what is broadly familiar in what we sense. We have an expectation, a prediction based on our previous experiences. We only need to learn the difference between what we expect and what we experience, and generally the difference is small. This is where the power of a few learned bits of information becomes evident. For example, a mere twenty bits are sufficient to identify one in a million familiar experiences. This explains how we can quickly recognise the identity of familiar objects and people despite our low rate of learning.
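The one-in-a-million claim is just the arithmetic of binary labels: n distinct items need ceil(log2(n)) bits to identify uniquely. A minimal check:

```python
import math

def bits_to_identify(n_items: int) -> int:
    """Minimum whole bits needed to give each of n items a unique label."""
    return math.ceil(math.log2(n_items))

print(bits_to_identify(1_000_000))  # 20 bits (2**20 = 1,048,576 > 1,000,000)
```

So even at a few tens of bits per second, a second or two of learning is enough to pick out one familiar experience from a million candidates.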
While our low learning rate is constrained by our mental processing, we can recognise things we are already familiar with much faster, at a rate constrained only by the physiology of our sensors. Our eyes are sensitive to fine detail, but measurements of our visual acuity reveal that only the central degree or so of our vision provides such information. From measurements it has been estimated that our eye is able to recognise visual information at a rate of 6 Mbits per second, nearly a million times faster than the rate at which we can learn. Yet our personal experience of visual perception corresponds to an information rate vastly greater than even this. We are almost completely unaware of our eyes’ limitations, such as the blurring with increasing angle from the centre of our gaze and the blindness of our blind spot.
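The scale of the gap between recognition and learning is worth making explicit. Taking the 6 Mbit/s estimate quoted above against a learning rate of around 10 bits per second (the figure from Fitts' experiments):

```python
# Recognition versus learning: both figures are estimates quoted in the text.
eye_rate = 6_000_000   # bits per second, estimated visual recognition rate
learn_rate = 10        # bits per second, approximate learning limit

print(f"ratio: {eye_rate / learn_rate:,.0f}x")  # 600,000x
```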
What does this tell us about our connectedness with other human beings? A recent article in New Scientist magazine described the human race as a single computer. If the human race is a multi-processor computer then it is one based on a multitude of very high performance processors with extremely low interconnectivity. This is not an optimum configuration for such a system, for it limits the ability of processors (minds) to share tasks. This is why each fundamental new idea is inevitably born within a single human mind before being shared; there is only one who cries Eureka! So it is our intellect that separates us, while less intelligent creatures are more suited to collaborative problem solving.
It is vital that we become more aware of our learning bottleneck, for there is little opportunity to increase our interconnection capacity in the future. There is no Moore's Law for our biology; it is already highly optimised through evolution. Unlike our PCs, there have been no significant upgrades to our processor hardware since humans first became conscious. We have the same biology and neuron speed (with perhaps a small increase in memory size).
Our maximum learning rate is so low that it cannot possibly describe the richly detailed visual world that we each experience. The surprising implication is that we have very limited access to the present moment, the Now. The world we experience must therefore be a world we imagine, a world based mostly upon what our mind predicts from our previous experiences, a world built slowly from our own lifelong experiences and introspections, a personal universe.
But if everything that I experience is subjective where does this leave objectivity? We can believe and act as if we have direct and shared access to a single objective world, whenever we confine ourselves to areas of our lives in which consensus is found through repeatable experiments and observations (the hard physical sciences for example). Otherwise, when dealing with opinion, politics and religion, we are already familiar with those who appear to be “living on another planet”.
While our learning bottleneck may be real, we cannot function effectively as social creatures if we are preoccupied with the idea of living alone in our own imagined world. So I suggest that you ignore most of this for now, except perhaps to remember to be a little less certain about what we know to be true out there.
March 21, 2016, Copyright © Richard Epworth 2016
Lam Yee Hin (China), Alberto Coto (Spain), Naofumi Ogasawara (Japan), Marc Jornet Sanz (Spain).
“The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement”, by Paul M. Fitts, Ohio State University, Journal of Experimental Psychology, 47, 381–391, 1954.