Neuromorphic Bureaucracies: Space, Time, & Cost

Eric Loeb
Jul 2, 2017


There are several examples of neural circuits that manipulate the fundamental tradeoffs between space, time, and cost. These limitations cannot be avoided, but they can be mitigated. Our engineers should listen to their brains.

Space/time/cost (S/T/C) tradeoffs are exemplified at a human scale in a trope among contractors: “you can have it fast, you can have it good, or you can have it cheap. Pick two.” The limitations imposed by S/T/C tradeoffs are easy to find. “Space” can mean a variety of things having to do with complexity: size, detail, accuracy, aperture, etc.; “time” has its usual meaning; and “cost” can mean money, resources, metabolic expenditure, and so on.

Filter properties of auditory sensory cells

My favorite example of the brain’s manipulation of S/T/C tradeoffs is in the auditory system. My first neuroscience work was in somatosensory cortex, but my first love was for signal processing in the inner ear. There is a clear summary of auditory processing here, which includes the embedded image to the left. The X axis is the log of sound frequency. The 1 on the X axis represents 1 kHz (this is what 1 kHz sounds like). The Y axis is signal attenuation. The zero at the bottom of the Y axis represents perfect transmission and no attenuation. Each line in the graph shows measured responses of auditory sensory cells to sounds of various frequencies. We see that these cells respond somewhat to lower frequencies, hit a peak response (the valleys in the depicted lines), and then their responses roll off sharply at higher frequencies. The 60–80 decibels of signal attenuation are like making a noisy city street inaudible; most ear plugs are rated in the mid-30s decibel range for sound attenuation. The graph shows that the auditory cells can respond across the spectrum, but they typically only respond near their tuned frequency.

Frequency characteristics of the FFT

By way of contrast, engineers convert sound waves into measures of specific frequencies, as shown in the image to the left from Wikipedia. The top of the graph shows a simple sound wave. The bottom graph shows the output of the popular Fast Fourier Transform (FFT) of the signal at the top. The FFT gives coefficients for frequency bins, much as the auditory cells respond to sounds in a range of frequencies. Unlike the auditory cells, the engineering approach uses box-like frequency ranges. Namely, the blue line on the bottom shows that there are positive coefficients, representing signal amplitudes, in each of five discrete frequency ranges (e.g., 1 kHz to 2 kHz). The neuronal and FFT approaches are very different. Auditory sensory cells eventually respond to nearly any signal if it is loud enough; FFT coefficients will be zero no matter how loud the signal is, so long as there is no energy in the corresponding frequency range.
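The box-like behavior of FFT bins is easy to demonstrate. Here is a minimal NumPy sketch (the tone frequencies, amplitudes, and sample rate are invented for illustration): energy lands only in the bins that match the signal, and every other bin stays essentially at zero no matter how loud the tones are.

```python
import numpy as np

# Illustrative parameters: a 1-second signal sampled at 8 kHz,
# containing a 1 kHz tone and a quieter 2.5 kHz tone.
fs = 8000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)

# The FFT assigns energy to narrow, box-like frequency bins.
# With a 1-second window, bin k corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)

# The bins at the tone frequencies hold the energy; a bin with no
# signal in its range stays at zero, however loud the tones are.
print(spectrum[1000], spectrum[2500], spectrum[3000])
```

An auditory sensory cell tuned near 3 kHz would eventually respond to a sufficiently loud 1 kHz tone; the 3000 Hz bin above never will.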

The difference in natural vs engineered approaches to “hearing” represents a clever natural workaround to S/T/C tradeoffs. The FFT faces a tradeoff: narrow frequency bins require more data and thus take longer to recognize new signals than do wide frequency bins. We can see the natural workaround to this tradeoff in the strange asymmetric shape of the natural frequency bins. These shapes are very wide in frequency space but not at all boxy. They enable faster recognition of new tones while also enabling precise distinctions between tones. Engineers distinguish two frequencies by making inexpensive and direct comparisons of energy in neighboring frequency ranges. The genome is more extravagant. Instead of dividing the spectrum into a few non-overlapping frequency bins, the natural (but counter-intuitive) approach is to divide the spectrum into a huge number of wide, overlapping frequency bins. Each of these bins responds quickly to new signals, as wide frequency bins do. But because of the asymmetric response curves, the brain can also detect precise differences in frequency.

Humans and animals have evolved to detect changes quickly. We respond rapidly but we also have exquisite accuracy. Accuracy must take longer. How do we use just one pair of ears to do both? We rely on characteristics of the populations of neurons that respond to sound. For any frequency F, there will be a lot of neurons firing. Nudge F just a little higher, and there will still be a lot of neurons firing, but some new neurons will have started and others will have stopped. That small difference in the population that’s responding is what enables our brains to tell the difference between F and F plus a nudge.
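A toy model can illustrate this population readout. In the sketch below, every parameter and tuning-curve shape is invented for illustration: each detector responds over a wide band, with a shallow tail below its preferred frequency and a sharp cutoff above it, loosely mimicking the asymmetric auditory filters described earlier. Two nearby tones then activate almost the same large population, and the usable signal is the handful of detectors that flip at the edges.

```python
import numpy as np

# Hypothetical bank of 200 detectors with wide, asymmetric tuning:
# a shallow roll-off below each preferred frequency and a sharp
# cutoff above it. All shapes and constants are illustrative.
centers = np.linspace(100, 4000, 200)  # preferred frequencies, Hz

def responding(freq, threshold=0.1):
    """Boolean mask of which detectors fire for a pure tone at `freq`."""
    low_tail = np.clip(freq / centers, 0, 1) ** 2                        # shallow below center
    high_cut = np.exp(-np.clip((freq - centers) / 50.0, 0, None) ** 2)   # sharp above center
    return (low_tail * high_cut) > threshold

# Two nearby tones activate almost the same large population...
pop_a = responding(1000.0)
pop_b = responding(1010.0)  # a 10 Hz "nudge"

# ...and the brain-style readout is the small difference at the edges
# of the responding population.
changed = np.logical_xor(pop_a, pop_b)
print(pop_a.sum(), changed.sum())
```

Each detector is wide (fast to respond), yet the population as a whole resolves a nudge far smaller than any single detector's bandwidth.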

Do bureaucracies face space/time/cost tradeoffs? Yes. All systems do. “It can be good; it can be fast; or it can be cheap. Pick two.” The “good” in that saying is space: desirable features for a project, complexity, robustness, scale, and so on. In project management, it is most common to see a time-cost tradeoff. A “project” is assumed to have fixed scope, and, as everybody knows, increasing the scope of a project or program is likely to increase the cost or time to completion.

How can organizations make use of the clever space/time/cost workaround exemplified by the inner ear?

One neuromorphic workaround can be applied to situations in which there are triggering events: for example, fraud alerts, cyber intrusions, and other kinds of risks that simultaneously need fast and accurate onset detection. To implement the neuronal approach, we need our best broadband signal with which to build a fast response. For cyber intrusion, we would build anomalous traffic detectors that operate over many things at once (many ports, files, data types, users, sub-systems, etc.). These wideband anomaly detectors will have more data with which to develop models of normal activity. They will have limited individual ability to identify the source of unusual traffic, but better resolution: with larger data volumes, we can label smaller fluctuations as significant. A bank of these detectors with shifted preferences would implement the natural filtering approach, wherein many detectors will respond to an intrusion and the population density of the detector responses will indicate which ports/files/users/etc. are likely sources. Neuromorphic intrusion detection is a topic of commercial interest, but the hype is too thick to know what is really being done.
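A toy version of such a detector bank might look like the following. Everything here is hypothetical for illustration: the port count, the traffic statistics, the window width, and the firing threshold. Each detector models normal total traffic over a wide window of ports, with window edges shifted so the windows overlap like tuning curves; a burst on one port trips every detector whose window covers it, and the overlap of the firing windows narrows down the source.

```python
import random
import statistics

# Hypothetical setup: 100 ports with Gaussian baseline traffic, and a
# bank of wide, overlapping port windows acting as anomaly detectors.
random.seed(0)
ports = list(range(100))
baseline = {p: [random.gauss(50, 5) for _ in range(200)] for p in ports}

# Wide (40-port) windows, shifted by 5 ports so they overlap heavily.
detectors = [range(start, start + 40) for start in range(0, 61, 5)]

def fires(window, traffic, z=4.0):
    """Flag if total traffic over the window deviates from its baseline."""
    history = [sum(baseline[p][t] for p in window) for t in range(200)]
    mu, sigma = statistics.mean(history), statistics.stdev(history)
    return abs(sum(traffic[p] for p in window) - mu) > z * sigma

# A traffic burst on port 37.
traffic = {p: 50.0 for p in ports}
traffic[37] += 500.0

# Every window covering port 37 fires; no single window localizes the
# source, but the intersection of the firing windows does.
firing = [i for i, w in enumerate(detectors) if fires(w, traffic)]
covered = set.intersection(*(set(detectors[i]) for i in firing))
print(firing, sorted(covered))
```

No individual detector knows which port misbehaved, yet the population response pins the burst down to a few candidate ports, just as the overlapping auditory filters pin down a frequency.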

In the cyber intrusion example, the neuromorphic system used many software agents as model neurons. Is there a useful neuromorphic approach to human management, wherein we organize groups of people like neurons? We cannot expect people to provide the sort of statistical regularity that will let us get more precision with more data. So, how can we use the neuronal tricks for evading S/T/C tradeoffs in bureaucracies?

Example 1: Suppose we wish to do a better job of counter-intelligence against the insider threat. There are millions of cleared personnel, and we want to know in advance who is going to do something damaging with the information they have access to. One neuromorphic approach would be to apply broad, overlapping, standardized data collection. However, to collect standardized data from thousands of people, we would need some kind of automated data collection, like a standard survey. At that point, we’re into a machine technique: use a bank of analyzers, each taking a weighted mean over nearly the same set of people, with the weights giving each analyzer a preference for one side of demographic, topical, or other space. This is not a different kind of human management.

A human version of neuromorphic counter-intelligence could be deployed by an army of trained interviewers. The interviewers would check for warning signs in each person in a set. The interviewers would have overlapping sets, and every cleared person would be in at least two interviewers’ sets. More overlap is better, and people with access to highly sensitive data should be in several sets. If two or more interviewers flag the same person, more scrutiny is applied to that person. I estimate at least 5000 interviewers and 5000 schedulers and other support would be needed at a cost of about $2.5B/year. This would be a large investment. The combined annual intelligence budget is approximately $53B.

It’s rare to find employee types numerous enough to apply neuronal mechanisms. In those rare cases where an employer has millions of specialized employees, there is at least a superficial resemblance between the people and neurons. The people perform their functions in ignorance (by necessity) of what most of the others are doing. The people are distributed organically: despite efforts to put boxy human-engineered org charts around their work, there are too many org charts created by people who aren’t coordinating, so that the net effect is a set of oddball, overlapping shapes.

Organic structure is more resilient, more resource intensive, and more easily created than the engineered structures we think we want. As an example of organic structure, consider US cyber defense. The federal government makes every agency responsible for its own defense; there are also five agencies with legal cyber protection responsibilities (FBI, DHS, etc.), plus the cyber defense work conducted by each of the military services and Cyber Command. In all of these agencies, we have many people covering the same cyber defense topics and activities. Each person is doing this generally similar work, but from the vantage of their employing agency’s responsibilities. This is how we had two agencies, both the State Department and the FBI, notice a spike in requests for temporary visas for Russian technicians before the 2016 election.

For organizations that cannot hire millions of people, augmentation through AI automation will be the normal approach to neuromorphic bureaucracy. One design approach would be to treat the automation as additional, inexpensive staff positions. This approach would facilitate use of a blend of humans and automata filling those roles while the automata are trained to perform the function. Another design approach will be to assume that every human employee comes with some set of standard and in-development automata. In this approach, we pass as much of the human’s work to the AI as possible, while the person oversees the automata and completes the tasks the bots cannot. This approach should lead to banks of specialized AIs feeding higher-level work to the humans.

Example 2: Call centers (for customer service, sales, etc.) are staff-heavy, necessary costs for many organizations. There is steady pressure to reduce these costs, and call centers are quickly incorporating AI assistance. We can see that both design approaches (treat AI as staff; treat AI as augmentation to staff abilities) are being used. As most of us have experienced, help lines will typically route us through an AI first to determine how to handle our calls. When we finally make it to a human being, that human being uses data-driven tools, which are becoming increasingly sophisticated, to know who we are, help us resolve the issue, and track the issue across calls.

A neuromorphic call center will use statistics on customer contacts (calls, emails, chats, social media mentions, telegrams, etc) to provide management with a fast and accurate view of the customers. Call center reporting will pick up on subtleties, but the call center will also respond immediately, with hyper-acute sensitivity to important features in the environment, such as a dissatisfied customer. Just as retinal receptors can respond to a single photon while also screening out visual noise, can we notice a significant issue from a single legitimately unhappy customer while screening out the angry callers who are merely taking out their life frustrations on our call center employees?

The neuromorphic approach to the S/T/C tradeoff of speed vs accuracy is to use overlapping resources that do both in aggregate. The call center operators would accordingly have broad topic responsibilities that surround their specialized topic areas. For example, we might have an operator who specializes in widget X of product A; another operator specializes in widget Y of product B; and everybody knows a bit about products A through Z. Unfortunately, in the realm of customer contacts, we cannot easily provide the same signal to multiple operators. If we pass calls around from one specialist to another, we will degrade the signal (customers hang up) and anger our customers. We can provide new operators with the recordings of everything that has happened in the interaction so far, but there is still a start-up cost for each new operator getting up to speed on the call so far. Accordingly, the neuromorphic approach will be to answer each call with a team of specialists.

In the future, when you call to complain, there will be a whole team of people and bots on the line. The team leader will let you know they are all there to make sure your issue is resolved as quickly and painlessly as possible. Once we get used to these initially intimidating conference calls, they’re going to make us feel pampered. I’m looking forward to it.

Example 3: Another area with heavy staffing costs is management. Can we reduce managerial staff by replacing some management functions with AI agents? One design approach is to augment our management layer with many more staff positions that we will populate with AIs; another approach is to consider how we will augment the productivity of a smaller number of human managers with AI sensors and actuators. For example, using the former approach, we may be able to build specialized management AIs with work teams that act as business “fixers”. Where there are problems, the fixers move in to turn around the lagging or failing business unit. However, the fixer team also includes engineers who set up and feed an AI to take on the managerial work that is being done by the other people on the fixer team. Once the fixer team leaves, the business unit will have a stand-alone AI manager or a trained AI assistant for the manager(s) who remain.

While it will be possible to create some management staff positions that are populated by AIs, it is also a certainty that managers will soon be personally augmented with AI sensors and actuators. Managers who do not avail themselves of AI augmentation technologies will soon be out-competed. They can gang up, and for a while the “good-old-human” network will prevail in some places. However, the companies that hire the modern, AI-augmented, higher-productivity managers will win that battle.

A neuromorphic approach to management should have massively overlapping reporting chains. This is not something people are good at. We interact with a few people at organizational levels above our own, but typical reporting ratios are 4–20 employees per manager. This situation will change with AI augmentation. The AI support for a human manager can draw on (mostly automated) reports from areas of the organization that are well outside the human’s portfolio. The manager can achieve excellence in some specific function but have broad awareness and respond quickly when the organization faces a new threat or opportunity.

Example 4: A perverse example of space/time/cost tradeoffs in our national politics is the hypothetical waste, fraud, and abuse line item in the budget. Politicians will often assert that we can cut the budget simply by getting rid of waste, fraud, and abuse (WFA). The clear examples of WFA are few. When clear examples are found, they are rapidly eliminated, but they also save relatively little compared to the overall federal budget. It is plausible that at least 5% of our public expenditures go toward WFA, but WFA are murkily entwined with valid expenditures. This is a basic fact of bureaucratic life related to space/time/cost tradeoffs.

The boundary between correct and incorrect expenditure has space/time/cost tradeoffs, of course. For a given level of enforcement (cost), we can take longer (time) to review, or else use more accumulated data (space) about the expenditure. For example, the government has attempted to reduce costs by requiring competition for government contracts. This cost savings comes at the expense of time (months and years), as the processes for submitting, evaluating, and challenging competitive bids play out. Some of this added time has been shifted to space (staff, data) through contract vehicles that pre-approve certain expenditures by the firms that win those contract vehicles. These contract vehicles reduce the apparent time for purchase of specific items, but require many thousands of hours of government effort to maintain as a legal category and to support competitions. The government approval time can also be reduced by pushing labor onto supplicants. The government’s time and costs to review your taxes are fixed, but if you itemize deductions, the system requires more space (data) that you must provide.

The S/T/C tradeoffs limit government efforts to stop waste, fraud, and abuse. No matter what method we use to make and check our expenditures, we will have tradeoffs in the speed of action versus precision for a given level of enforcement cost. We can catch anybody at all faster than we can catch a specific crook. Enforcing in one location (or intellectual realm, law, etc) is less expensive than enforcing everywhere. Building a strong legal case is slower than building a weak one.

The neuromorphic approach to waste, fraud, and abuse (WFA) may be to worry about it less. If we spend $200M to reduce the WFA of a $1B program by 1% ($10M), then we will also probably incur 5% fraud on that $200M, which would be $10M. The cost to diminish WFA by another 1% will be more than the cost to diminish WFA by the first 1%. We rapidly enter the realm of diminishing returns.

On the other hand, unchallenged crime breeds more crime. Challenging crime is a job for the immune system. Biology’s great workarounds to the S/T/C tradeoffs are in evidence in the immune system as well, where there are billions of random antibodies available in small amounts. An antibody is mass-produced (and improved) once it matches to an antigen. We could potentially have many millions or billions of inexact pattern matchers looking through the books. This could be done by random bots or by providing deidentified data to the public. In either case, a potential match would call in a series of bigger guns. If the matched pattern does prove to be an example of waste, fraud, or abuse, then the pattern matcher will be replicated and systematically applied to all transactions.
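An immune-style screen could be sketched as follows. The transaction fields, matcher shape, repertoire size, and replication factor are all hypothetical: a large repertoire of cheap random matchers scans transactions, and any matcher that hits a confirmed fraud case is replicated many times, the analogue of clonal expansion, before being applied to all transactions.

```python
import random

# Hypothetical immune-style screening: many cheap, random pattern
# matchers ("antibodies") scan transactions; a matcher that hits a
# confirmed fraud case is mass-produced and applied everywhere.
random.seed(1)

FIELDS = ["vendor", "amount_band", "approver", "hour"]

def random_matcher():
    # Each matcher checks a random pair of field/value constraints,
    # with values drawn from 5 illustrative categories per field.
    f1, f2 = random.sample(FIELDS, 2)
    return {f1: random.randrange(5), f2: random.randrange(5)}

def matches(matcher, txn):
    return all(txn[f] == v for f, v in matcher.items())

repertoire = [random_matcher() for _ in range(2000)]  # diverse, inexact

# A confirmed fraud case selects the matchers that recognize it.
fraud_case = {"vendor": 3, "amount_band": 4, "approver": 1, "hour": 2}
hits = [m for m in repertoire if matches(m, fraud_case)]

# Clonal expansion: replicate each hit many times (the mutation/
# refinement step is omitted for brevity) for systematic application.
expanded = [dict(m) for m in hits for _ in range(50)]
print(len(hits), len(expanded))
```

As with antibodies, each individual matcher is weak and nonspecific; the cheap diversity of the repertoire is what makes a first detection likely, and replication is what makes enforcement systematic.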

Space/time/cost tradeoffs are ubiquitous, and there are many opportunities to apply the brain’s clever workarounds. Neuromorphic bureaucracies will use fast but inexact processing together with slow, precise processing. By using both approaches, AI-enhanced businesses and agencies will be able to operate in their environments with speed and precision.


Eric Loeb

Eric Loeb promotes citizen engagement in politics from the standpoint of his overlapping careers in cognitive neuroscience, politics, and data analytics.