# The Analog Way to Compute

How would you like to have a go at a different computing paradigm? A paradigm that is, on the one hand, tried and tested, yet, on the other hand, largely unknown? Enter the world of analog computing, one of the three major computing paradigms: analog, digital and quantum. Each of these paradigms comes with its own strengths and weaknesses. As you may know, for example, quantum computing may put digital cryptography as we know it out of business. Similarly, analog beats digital computing in ways that are quickly gaining in importance. It may soon displace some of our current digital computing monoculture.

In the mid-20th century, analog computers and digital stored-program computers were similarly widespread. Analog computers were programmed by wiring limited numbers of computing elements into circuits on patch panels. In the 1970s, microelectronics came around and put digital stored-program computers on chips. It was easy to see how larger problems could be tackled with larger digital chips and more code. But it was not clear at the time how analog computers could be scaled to tackle larger tasks. Analog computers were not put on chips, and so we ended up with today’s digital computing landscape.

Things have changed since then. The environmental crisis calls for more energy-efficient technologies and a better grasp of dynamic systems. Analog computing happens to be far more energy-efficient than digital computing, especially when it comes to modeling dynamic systems. Increasing cybersecurity threats call for computing technologies that are less vulnerable to the remote manipulation of coded machine instructions. Analog computing happens to work without coded machine instructions. Digital computing now approaches the limits of its growth curve (the limits of Moore’s Law), necessitating much greater parallelism. Analog computing happens to work in a massively parallel manner by default. Moreover, analog computers can now be put on chips and operated in analog-digital hybrid technology at massive scales.

Accordingly, analog computing is making a comeback, increasing the demand for analog computing expertise. To help people develop analog computing skills, my colleagues and I at Analog Paradigm have developed THE ANALOG THING, or simply THAT: a cutting-edge analog computer that is high-quality, low-cost, open-source, and not-for-profit. In this article, I show you around THAT, explain its key elements and working principles, and demonstrate its use with an application to the engineering of mountain bike suspension forks. We will set up an analog model that advises riders of different body weights and riding preferences on tuning their forks for maximum safety and comfort without requiring excessive field testing. Along the way, you will see that analog computing offers a hands-on, intuitive grasp of integral and differential calculus and of dynamic systems modeling. To keep things simple, I will focus on the straightforward aspects of analog computing and skip over potential distractions.

# Analog Computing is about Modeling Dynamic Systems

In essence, analog computing is a way of creating and using models of *dynamic systems*, i.e., systems that change according to known relationships. Examples of dynamic systems include market economies, the spread and control of diseases, population dynamics, nutrient absorption, nuclear chain reactions, and as we will see below, mechanical systems. Models of dynamic systems are useful for reasons similar to those for which architectural models are useful in architecture and crash test dummies are useful in car safety engineering – models offer insights into matters that would be too difficult, laborious, expensive, or harmful to study in and of themselves.

More often than not, dynamic systems change *in time*. Analog computing, therefore, is about modeling change *in time*, and analog computer programming is, for the most part, a process of translating patterns of time-based change in dynamic systems into patterns of connections between analog computing elements. As an intermediate step, this process requires the pattern of change to be described mathematically in the form of *differential equations*. While solutions of algebraic equations are single *values*, solutions of differential equations are *functions*, i.e., relationships – presentable as graphs – between independent variables (typically time) and dependent variables.

After a pattern of change is described by one or more differential equations, these equations can guide the production of a wiring diagram and, based on the diagram, the wiring of analog computing elements to form an analog model of the dynamic system. Once started, the analog computer will then – instantaneously and effortlessly – compute the unknown solution(s) of the differential equation(s) and output them as time-varying voltages. These time-varying voltages correspond to – they are *analogous* to – the pattern of change in the modeled dynamic system. They can be captured for further processing using analog-digital converters, or they can be studied visually and interactively using oscilloscopes. Some analog computers represent quantities by voltage, others by current. For the remainder of this article, I will focus on how THAT works and mention voltage only.
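The simplest such closed loop is worth seeing concretely: a single integrator whose (sign-corrected) output is fed back into its own input solves dy/dt = –k·y, producing exponential decay. Here is a minimal numerical sketch of that loop in Python (a software picture of what the hardware does continuously; k, step size, and duration are illustrative assumptions):

```python
import math

def decay(k=1.0, y0=1.0, dt=1e-4, t_end=1.0):
    """One-integrator feedback loop: repeatedly accumulate -k*y to
    obtain y(t), starting from the initial condition y0."""
    y = y0
    steps = round(t_end / dt)
    for _ in range(steps):
        y += (-k * y) * dt   # the integrator accumulates its input
    return y

# The solution is a *function* of time; here we sample its value at
# t = 1, which should be close to the analytic y(1) = exp(-1).
print(decay(), math.exp(-1.0))
```

The printed pair shows the feedback loop tracking the analytic solution y(t) = y0·exp(–k·t), which is exactly what an analog computer outputs as a time-varying voltage.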

# The Analog Computer User Experience

Figure 2 shows THE ANALOG THING. With its patch panel instead of keyboard, mouse, and monitor, its user interface differs notably from those of its digital stored-program cousins. The patch panel is divided into groups of analog computing elements. These computing elements are independent electronic assemblies that function as summers, multipliers, integrators, and so on. Each of these computing elements has one or more inputs and a single output, accessible via plug sockets on the patch panel facing the user. THAT is programmed by connecting these computing elements using patch cables – single-wire cables with banana plugs on both ends. The output of each computing element can be fed to one or more inputs of the following elements. The chains of computing elements formed in this way need to form at least one closed loop to allow the solution of differential equations. Let’s take a closer look at the key analog computing elements on THAT.

**Summers**

Each of the four summers on THAT has seven input plug sockets and a single output, which is accessible via two plug sockets. Voltages connected to the inputs are added, and the sum is available at the output. Three of the seven inputs are weighted by factor 10, as marked next to the respective input jacks. Since inputs can be positive or negative voltages, summers can both add and subtract. Figure 4 shows the user interface of one of the summers on THAT, and the corresponding diagram symbol.

**Multipliers**

The two multipliers on THAT each have two inputs and one output. Voltages connected to the inputs are multiplied, and the product is available at the output. Figure 5 shows one of the multipliers on THAT and the corresponding diagram symbol.

**Integrators**

Each of the five integrators on THAT has five inputs and one output, which is accessible via two plug sockets. Two of the inputs are weighted by factor 10, as marked next to the respective input jacks. Voltages connected to the inputs are integrated over time, and the integral is available at the output. Integration is the process of measuring the accumulated value of one or more time-varying values. Consider water flowing at varying rates into (positive) or out of (negative) a bucket. In this example, the bucket’s fill level is the integral of the water flow rate(s). Integration is the reverse process of differentiation. This is what allows the solution of differential equations, as we shall see later. Integrators have an input named *initial condition* (IC), at which the start value of an integration can be set. In the water bucket example, the initial condition is the bucket’s initial fill level. Integrators typically allow users to control the integration speed across multiple orders of magnitude. This allows scaling the calculation speed. Figure 6 shows one of the integrators on THAT and the corresponding diagram symbol.
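The water-bucket picture translates directly into a few lines of Python (a purely illustrative discrete sketch of what the hardware does continuously; the flow values and time step are assumptions, and the hardware integrators' sign inversion is left out for clarity):

```python
def bucket_level(flow_rates, dt, initial_level=0.0):
    """Integration as accumulation: the fill level is the integral of
    the flow rate over time. The initial level plays the role of the
    integrator's initial condition (IC)."""
    level = initial_level
    levels = []
    for rate in flow_rates:
        level += rate * dt   # positive rates fill, negative rates drain
        levels.append(level)
    return levels

# Constant inflow of 0.5 litres/s for 2 seconds, starting from 1 litre:
levels = bucket_level([0.5] * 20, dt=0.1, initial_level=1.0)
print(levels[-1])  # final fill level is (approximately) 2.0 litres
```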

Summers, multipliers, and integrators operate instantaneously and continuously. Due to the way they are connected internally around operational amplifiers, summers and integrators invert their outputs. A summer with the inputs 0.1, 0.2, and 0.3, for example, provides the output -0.6. If such an output is required in its non-inverted form, it can be patched as the single input of a summer, which will then act as an inverter. Alternatively, any of the four inverters available on THAT may be used.
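As a software picture of this inverting behavior, here is a minimal Python sketch of a summer (purely illustrative; the hardware works continuously on voltages, not on lists of numbers):

```python
def summer(inputs, x10_inputs=()):
    """Model of an inverting summer on THAT: plain inputs are added
    as-is, inputs patched into the x10 jacks are weighted by a factor
    of 10, and the sign of the sum is inverted at the output."""
    return -(sum(inputs) + 10.0 * sum(x10_inputs))

# The example from the text: inputs 0.1, 0.2 and 0.3 yield -0.6.
print(round(summer([0.1, 0.2, 0.3]), 10))  # → -0.6
```

Patching such an output as the single input of a second summer negates it again, which is exactly how a summer is used as an inverter.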

**Potentiometers**

Potentiometers are variable resistors by which numerical values (coefficients) can be set. Each of the eight potentiometers on THAT has one input and one output. Figure 7 shows the user interface of a potentiometer on THAT and the corresponding diagram symbol.

**Control Unit**

The control unit of THAT shown in figure 8 drives its overall operation by controlling the time-dependent computing elements, the integrators. Its MODE selector knob allows setting THAT into one of several different states:

- COEFF: In this mode, the value of the coefficient selected using the COEFFICIENT selector knob is displayed on the panel meter, allowing the setting of coefficients.
- IC: In this mode, the outputs of the integrators are set to the initial condition (IC) values applied to their respective IC inputs to set the stage for a program run.
- OP: In this mode, the program set up on THAT is run.
- HALT: In this mode, integration is suspended.
- REP: In this mode, the computer performs the IC followed by the OP mode repeatedly. This allows rendering a steady graph on an oscilloscope. The time spent in OP mode can be selected between 0 and 10 seconds using the OP-TIME knob while the OP-TIME value is displayed on the panel meter.
- REPF: As REP mode, but fast. OP-TIME values between 0 and 100 milliseconds can be selected.
- MINION: In this mode, THAT can be controlled by another THAT that takes the role of a MASTER. In this way, any number of THATs can be coupled together to run arbitrarily large, massively parallel analog programs.

Translating patterns of change in dynamic systems into mathematical representations and further into analog computer programs commonly involves scaling speed and quantities. THAT users can compress or stretch the independent variable *time* typically by several orders of magnitude. In this way, the instantaneous decay of a volatile compound can be simulated slowly enough for observation and interactive manipulation, while population dynamics occurring over decades or centuries can be simulated in the blink of an eye.

Quantities are represented on analog computers in a voltage interval with fixed boundaries called the *machine unit*. For the sake of simplicity, the machine unit is generally thought of as -1 to +1. Programs must be scaled such that their values fit into the machine unit. The actual voltage range with which THAT represents values is -10V to +10V. This range allows for easy conversions in the decimal number system, it can be handled very precisely using affordable electronic components, and it is safe for humans.
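In software terms, scaling into the machine unit is a simple linear mapping; here is a minimal sketch in Python (the function names and the example values are illustrative assumptions):

```python
def to_machine_unit(value, max_abs):
    """Scale a problem variable with a known maximum magnitude into the
    machine unit [-1, +1]; out-of-range results indicate that the
    program is not scaled correctly."""
    scaled = value / max_abs
    if not -1.0 <= scaled <= 1.0:
        raise ValueError("program not scaled correctly")
    return scaled

def to_volts(machine_value, machine_unit_volts=10.0):
    """Convert a machine-unit value to the -10V..+10V range with which
    THAT represents values internally."""
    return machine_value * machine_unit_volts

# A displacement of 0.05 m, with 0.1 m as the expected maximum:
print(to_volts(to_machine_unit(0.05, 0.1)))  # → 5.0
```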

Values available via BNC sockets and the HYBRID port on the back of THAT are shifted to narrower voltage ranges. This allows feeding these values to oscilloscope software on digital computers via soundcard inputs. The HYBRID port allows controlling THAT from digital devices like single-board computers to develop analog-digital hybrid programs.

**Mountain Bike Suspensions are Dynamic Systems**

Mountain bikers generally appreciate the effect suspensions have on riding comfort. Not all mountain bikers are aware of the contributions suspensions make to riding safety. On rough trails, bicycles at speed tend to get airborne for short periods. Hitting bumps, wheels get catapulted upwards; hitting holes, the ground beneath them briefly disappears (let’s ignore deliberate jumps here). With increasingly rough terrain and with increasing speeds, ground contact per riding distance decreases. But mountain bikes are controlled through traction on the ground. Pedaling, braking and steering have little effect when the rubber is up in the air. A lack of lateral traction on the front wheel is particularly hazardous during steering. The faster the ride, the less effective the steering – this cannot be good! Let’s look at how suspension fork engineers can mitigate this problem using analog computers to help riders *tune* their bikes for maximum comfort and safety.

Using some of its travel – its full range of vertical displacement – to sag under the rider’s weight, a suspension fork may compress (travel up) in response to bumps or extend (travel down) in response to holes in the trail. Since traction is maintained by maximizing contact time between wheels and ground, one might think that the suspension should consist of a simple spring that responds as quickly as possible. But that is not correct; again, both for comfort and safety reasons. With an exceedingly soft spring, a suspension is likely to top or bottom out; i.e., to hit the hard upper or lower ends of its travel. Setting the spring sufficiently stiff to avoid this, however, does not yet offer adequate dynamics. A spring that simply responds as quickly as possible when hitting a bump or a hole stores the energy of that motion, releases it on the rebound, over-extends, and performs an up-and-down oscillation. Following a sufficiently large impact, the release of the stored energy would catapult the bike and rider up into the air. What is needed is something to waste the energy that is at work when the suspension moves: damping – the crucial difference between a suspension fork and a pogo stick.

Most suspension forks have two legs – one on the left and one on the right of the front wheel. Typically, the left leg contains the spring, and the right leg accommodates the damper. The spring is usually a piston moving in a chamber containing compressed air. Riders can use inserts to change the volume of this chamber, and, using a pump, they can adjust the air pressure inside it to set the characteristics of the air spring (across a broader range than steel coils can be adjusted). The damper is a piston moving through an oil-filled chamber. It resists motion via viscous friction and converts kinetic energy into heat that dissipates into the environment. Riders can adjust the degree to which the damper resists traveling.

Between the spring’s stiffness, the damper setting, the travel of the fork, riding velocity, ground conditions, and the rider’s preferences and weight (including varying amounts of gear and drinking water), tuning a suspension fork can be challenging. Significantly more challenging is the engineering of suspension forks to accommodate the possible ranges of all these variables and all possible combinations of particular values. It would hardly be feasible to manufacture physical prototypes and test them by permuting these variables in the field. Instead, suspension forks – including designs that exist on paper only – can conveniently be modeled and simulated on an analog computer like THAT. We can mark the independent variable time along the horizontal axis, and the fork’s time-dependent up-and-down displacement along the vertical axis of a two-dimensional coordinate system. Represented in this way, the performance of a suspension fork constitutes a *function of time*. Given the differential equation that describes the relationships in this dynamic system, we can model it on THAT and simulate its responses under various conditions.

**Modeling a Suspension Fork on THAT**

Suspension fork dynamics are based on three elements: the suspended mass, a spring, and a damper. Simulating these dynamics requires modeling the exchange of forces acting between these three elements. Let’s call the force exerted by the mass Fm, the force exerted by the spring Fs, and the force exerted by the damper Fd as shown in figure 10 below. (Note that, owing to the impossibility of notating indices on Medium.com, Fm, Fs and Fd are each single variables, not multiplications of factors).

Upon impact, the suspension is displaced up and down along a vertical axis, which we denote *y*. The displacement changes over time and, accordingly, can be marked along a horizontal axis labeled *t* in a two-dimensional coordinate system. Several relationships in this system are known. At rest, the downward force exerted by the mass is balanced by the upward force exerted by the spring while the damper exerts no force. When the system is in motion, similarly, the sum of the three forces always remains zero:

Fm + Fs + Fd = 0

The force exerted by the mass, according to Newton’s second law of motion, is mass *m* times acceleration *a*:

Fm = ma

The force with which the damper resists movement is a damping coefficient *d* times the speed *v* of its displacement:

Fd = dv

The force exerted by the spring is a spring coefficient *s* times its vertical displacement *y*:

Fs = sy

We know that the speed *v* is the first derivative of displacement over time, which we denote ẏ, and that acceleration *a* is the second derivative of displacement over time, which we denote ÿ. Displacement, speed and acceleration all change over time. For the sake of simplicity, we denote them as y, ẏ and ÿ respectively instead of y(t), ẏ(t) and ÿ(t).

Having noted that the sum of *Fm*, *Fs*, and *Fd* is zero, we can summarize the above three equations in a single differential equation:

mÿ + dẏ + sy = 0

Our aim is to compute *y* over time at different values for *m*, *s* and *d*, following perturbations caused by uneven ground conditions. Let’s develop the necessary program as a wiring diagram based on the symbols introduced in figures 4 through 7. Analog computer programs that compute the functions of differential equations are wired as circular feedback loops. To begin (and finish) the wiring of our feedback loop at the highest derivative in our equation, we solve the equation for ÿ:

ÿ = –(dẏ + sy) / m

Assuming for now that ÿ (acceleration over time) is known, we use an integrator to compute its next lower derivative, –ẏ (speed over time). (Remember that integrators and summers change the sign of their output.) We connect a potentiometer to the initial condition input of the integrator. This will allow setting the initial speed of an impact on the suspension fork. The upper connection of the potentiometer may be connected to -1 or 1, depending on whether we want to simulate a bump (compressing the fork) or a hole (extending the fork). At this point, we have wired the blue parts shown in figure 11.

Next, we add a second integrator to obtain the fork’s displacement over time, y, from –ẏ. This is shown in the green parts of figure 11. Now, we can add the term –(dẏ + sy) by connecting two potentiometers to set *s* and *d* and by adding their outputs using a summer. We use a second summer to invert the sign of –ẏ in front of potentiometer *d*. At this point, we have wired the blue, green, and red parts shown in figure 11. Finally, we connect a potentiometer to multiply –(dẏ + sy) by 1/*m*. The output of this potentiometer is ÿ, the quantity we initially assumed to be known, and we feed it back into the first integrator. This gives us the black parts shown in figure 11. THAT can be wired up accordingly, as shown in figure 12.

Once our program is wired up as shown in figure 13, we can select the displacement speed introduced by a severe bump in the road at the initial condition input of the first integrator, set the 1/*m* potentiometer to the reciprocal of a rider’s weight (say 0.0125 for 80kg), and start the computer with a run time of 1.2 seconds in *repeat* mode to visualize a standing graph on an oscilloscope.
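The behavior of this feedback loop can also be cross-checked numerically. Here is a minimal Euler-integration sketch in Python mirroring the two-integrator loop (the step size and coefficient values are illustrative assumptions, not calibrated to a real fork):

```python
import math

def simulate_fork(m, s, d, v0=1.0, y0=0.0, dt=1e-4, t_end=1.2):
    """Step-by-step integration of m*y'' + d*y' + s*y = 0 -- the same
    feedback loop the two integrators, summers, and potentiometers
    implement on THAT. Returns the displacement trace y(t)."""
    y, v = y0, v0                  # initial conditions: displacement, speed
    trace = []
    t = 0.0
    while t < t_end:
        a = -(d * v + s * y) / m   # y'' = -(d*y' + s*y) / m
        v += a * dt                # first integrator:  y'' -> y'
        y += v * dt                # second integrator: y'  -> y
        trace.append(y)
        t += dt
    return trace

# Underdamped setting: the fork rebounds past its rest position.
under = simulate_fork(m=80.0, s=8000.0, d=200.0)
# Heavily damped setting (d = 2*sqrt(s*m)): it settles without oscillating.
crit = simulate_fork(m=80.0, s=8000.0, d=2 * math.sqrt(8000.0 * 80.0))
```

Plotting `under` and `crit` against time reproduces, point by point, the kind of traces the oscilloscope shows in repeat mode.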

With this setup, it is possible to explore (an analog of) the dynamics of the suspension fork interactively by setting different values for *s* and *d* as shown in figure 14:

An ideal combination of these settings uses most of the suspension’s travel and avoids oscillating on the rebound. This tuning, called *critical damping*, is shown in red in figure 14. It not only ensures maximum ground contact and thus traction (as shown in figure 15), but also avoids the potentially dangerous topping or bottoming out of the suspension and reduces the risk of gratuitous oscillations compounding dangerously with further impacts on the suspension.
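For the linear model used here, the critically damped setting can be computed in closed form as d = 2·√(s·m). A quick sketch (the spring coefficient and mass values are illustrative):

```python
import math

def critical_damping(s, m):
    """Damping coefficient at which m*y'' + d*y' + s*y = 0 returns to
    rest as fast as possible without oscillating: d = 2 * sqrt(s * m)."""
    return 2.0 * math.sqrt(s * m)

# Illustrative values: spring coefficient 8000 N/m, 80 kg suspended mass.
print(critical_damping(8000.0, 80.0))  # → 1600.0
```

Any smaller *d* gives an underdamped (oscillating) response; any larger *d* makes the fork sluggish and wastes travel.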

Mountain bikers expect suspension manufacturers to provide actionable guidance in the vast combinatorial space between the variables discussed here. Suspension engineers can offer this guidance using empirical insights into riders’ weights, riding preferences, and typical ground conditions. With these insights and the analog computer program discussed above, engineers can simulate suspension dynamics to produce recommendation charts and place sag markings directly on suspension legs. For greater realism, they may augment the program with function generators to model non-linear suspension and damper dynamics and replay terrain data sampled in the field.

# Outlook

Capable of solving differential equations, analog computers can be used to model dynamic systems with great speed, parallelism, and energy efficiency at a time when digital computing faces the end of Moore’s Law and increasing cyber threats. Other capabilities of analog computing beyond the scope of this article include the mimicking of neurons in “neuromorphic” AI pattern recognition applications. The use of analog computing is intuitively interactive, experimental, and visual. It bridges the gap between mathematical theory and hands-on practice, integrating naturally with design and engineering practices such as speculative trial-and-error exploration and the use of scale models.

Dynamic systems modeling on analog computers can serve a variety of valuable purposes. It may help us understand what is (models of), or it may help us bring about what should be (models for). It may be used to explain in educational settings, to imitate in gaming, to predict in the natural sciences, to control in engineering, or it may be pursued for the pure joy of it. Analog computer models are generally inexpensive, easily and safely changed, and inherently measurable.

Beyond its capabilities and qualities, analog computing enables better futures. To stay with our example application, it is realistic to expect future suspension forks with embedded analog computers that recognize approaching trail features and adjust suspension settings accordingly in real-time, requiring minuscule amounts of energy that can be harvested from the airflow around the suspension, or the displacement within it. Analog computers implemented on chips will not be limited to the handfuls of computing elements offered by hand-wired analog computers. Like their digital cousins, they will accommodate computing elements by the thousands. Operating with near-negligible energy requirements, they will accelerate scientific computing and reduce its environmental impact.

It is becoming clear that our contemporary near-exclusive reliance on the digital computing paradigm is approaching its limits. Computing of tomorrow will be in part analog, and those who pioneer this future will be treated to a whole new way of looking at the world. THAT invites you to explore the analog computing paradigm.