Catching a ball with a homeostat

Aidan Ward
GentlySerious
8 min read · Nov 29, 2018
Where will the ball go?

What’s the difference between going deep and drilling down? We look at some modelling approaches that reveal structures built on structures built on structures. These approaches also show how highly dynamic outcomes are built on keeping things stable.

Let’s do the metaphor first. If you are on a boat at night and you see a light then you may want to note the following: if you are moving and the light stays on the same bearing to you, then you are on a collision course and may want to take evasive action. Always remembering the battleship captain who radioed to say he was a battleship and whatever the light was on should keep clear, only to be radioed back by the lighthouse-keeper to suggest another course of action…

The off-putting label is perceptual control theory, or PCT. The standard PCT case relates directly to this metaphor. Suppose a fielder in a cricket match runs to catch a high ball. How on earth does he know where to run, with no time to work it out? Well, it seems that all the fielder has to do is to keep the ball at the same angle in his field of vision and he will be on a collision course with the ball, as desired. An athlete moving at speed under pressure succeeds in his highly dynamic quest by keeping something stable. Remembering the warship, of course you may not be the only person trying to catch the ball: I smashed my nose playing rugby that way, loads of blood.
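The fielder's rule can be caricatured as a toy control loop. In the sketch below (a hypothetical simulation of mine, not from the PCT literature: the ball is reduced to a straight-line ground track, and all gains and speeds are invented for illustration), the fielder's only rule is to turn against any drift in the ball's bearing, with zero drift as his reference signal — and holding the bearing constant produces an intercept:

```python
import math

def intercept(n_gain=4.0, dt=0.1, steps=600):
    """Fielder holds the ball's bearing constant by turning against bearing drift."""
    bx, by = 0.0, 100.0            # ball position (simplified to a flat ground track)
    bvx, bvy = 3.0, 0.0            # ball velocity: constant, flying across
    fx, fy = 0.0, 0.0              # fielder position
    speed = 6.0                    # fielder's running speed
    heading = math.atan2(by - fy, bx - fx)   # start by facing the ball
    bearing = heading
    min_dist = math.hypot(bx - fx, by - fy)
    for _ in range(steps):
        bx += bvx * dt
        by += bvy * dt
        fx += speed * math.cos(heading) * dt
        fy += speed * math.sin(heading) * dt
        new_bearing = math.atan2(by - fy, bx - fx)
        # wrapped change in bearing since the last step
        drift = math.atan2(math.sin(new_bearing - bearing),
                           math.cos(new_bearing - bearing))
        heading += n_gain * drift  # control action: cancel the drift
        bearing = new_bearing
        min_dist = min(min_dist, math.hypot(bx - fx, by - fy))
    return min_dist
```

With the loop active the fielder closes to within arm's reach; set `n_gain=0` and he runs blindly straight and misses by tens of metres. Nothing in the loop computes a trajectory — stability of one perceived variable does all the work.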

The early work of Warren McCulloch was built on by William Powers, who demonstrated that a control loop like the one used by the fielder is the basis of neurological control of many types, but that such loops are built of many subsidiary loops, which in turn are built up in the same way, recursively. Think for a moment about the temperature of our bodies; it is critical that it is held stable by mechanisms such as sweating and the closing down of capillary blood vessels near the skin. It is also obvious that other body systems, especially the brain, are crucially dependent on that temperature maintenance. Many supports, many dependencies, not all of them obviously homeostatic maintenance of variables.

In living systems the reference variable for each feedback control loop in a control hierarchy is generated within the system, usually as a function of error output from a higher-level system or systems. — Wikipedia on William Powers.

The nature of such control loops is to cancel out any divergence from a set range of some parameter. There is a demonstration that you can easily do with a friend. Knot two elastic bands together. Put a reference dot in the middle of a sheet of paper. Put your market pen in one elastic band and give your friend a pen in the other band, with the knot over the reference mark on the paper. The instruction to your friend is to keep the knot over the mark. As you trace out a pattern with your pen, your friend must stretch his pen and band in an opposite motion to yours. He will draw mirror image pattern to yours. You can do this as a demo in a lecture — the instructions are not shared publicly and the participants have to guess what they were…
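The elastic-band demo is a pure negative feedback loop, and it simulates in a few lines. In this sketch (illustrative values only, not a model of real hand movements) your pen is a disturbance, your friend's pen is the controller's output, and the knot is the controlled variable held at the reference mark:

```python
import math

def run_demo(steps=500, gain=0.5, ref=0.0):
    output = 0.0                           # friend's pen offset
    history = []
    for t in range(steps):
        disturbance = math.sin(t / 20)     # your pen tracing a pattern
        knot = disturbance + output        # knot position: both bands pull on it
        error = ref - knot                 # how far the knot is off the mark
        output += gain * error             # friend nudges his pen to cancel it
        history.append((disturbance, output, knot))
    return history
```

After a brief transient the knot sits on the mark, which means the friend's output is, near enough, the negative of the disturbance — the mirror-image drawing.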

Deep models

The predictions of these models used to simulate neural control are very accurate. And the models can be many layers deep. The implication of a deep model is this: when a basal control loop goes out of bounds or is modified, then many other things will change. A single control loop may be a part of many circuits.
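The layering itself can be caricatured in a few lines. In this hypothetical two-level sketch (gains and the "world" dynamics are invented for illustration), the outer loop never acts on the world directly: its error output becomes the reference signal of the inner loop, exactly as in the Wikipedia quotation above:

```python
def two_level(steps=400, outer_ref=10.0):
    q_inner = 0.0   # fast variable the inner loop controls directly
    q_outer = 0.0   # slow variable the outer loop cares about
    for _ in range(steps):
        inner_ref = 0.5 * (outer_ref - q_outer)  # outer error sets inner reference
        q_inner += 0.8 * (inner_ref - q_inner)   # inner loop: fast proportional control
        q_outer += 0.1 * q_inner                 # "world": outer variable integrates the inner
    return q_outer, q_inner
```

The outer variable settles on its reference while the inner variable settles back to zero: the top of the hierarchy only ever asks the layer below for a little more or a little less, and disturbing the inner loop disturbs everything built on it.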

There is a psychotherapy version of this structure where some of our beliefs can be modified fairly independently of others, but some are deeply enmeshed and may imply serious and difficult changes in many areas of life and belief. For some people it is a comfort to know that the deep change and subsequent reconstruction can be mapped out.

This understanding of depth and the foundational nature of some things but not others does not come easy in the world of business and organisations. It is not the same thing as prioritisation or importance: cash flow or profit for instance are important but are a superficial outcome of many other things that are much less visible or malleable. And this is the opposite of drilling down which only finds ever more detail about a narrower field of interest.

Conventionally on a risk register, a risk is noted as having a potential impact on a project. In the risk management system that I built and worked with, risks were risks to the achievement of business objectives and a given risk might affect many objectives. And an objective might be contributed to by many projects. In the language of Larry Hirschhorn in The Workplace Within, any organisation has a primary task even if it is not articulated. And clearly some risks are going to be primary in the sense that they jeopardise the primary task itself.
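The many-to-many structure is easy to show. The names below are hypothetical (this is my sketch, not the actual system): one risk threatens several objectives, each objective is served by several projects, so the blast radius of a single risk fans out:

```python
# Hypothetical data: risks threaten objectives; objectives are served by projects.
risks_to_objectives = {
    "supplier insolvency": ["open on time", "stay within budget"],
    "baggage system integration": ["open on time"],
}
objectives_to_projects = {
    "open on time": ["terminal build", "baggage handling", "IT systems"],
    "stay within budget": ["terminal build", "IT systems"],
}

def exposed_projects(risk):
    """All projects whose objectives a given risk jeopardises."""
    projects = set()
    for objective in risks_to_objectives.get(risk, []):
        projects.update(objectives_to_projects.get(objective, []))
    return projects
```

A risk whose reach fans out across most objectives this way is a candidate for jeopardising the primary task itself, in Hirschhorn's sense.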

If we use our ship metaphor again, an organisation sees a light, sees the light, recognises something. Is it on a collision course with some unrecognised risk? Quite possibly, and the way to find out is not to plough on regardless, but to do some experimental changes of course. What shifts? When Russian fighter planes fly into western airspace and Russian submarines penetrate the Swedish archipelago they are not playing. They are seeing who responds in what way. They are looking for where they have excited some sensitivities they might have been unaware of. And they are asking for and checking the significance of what they excite.

You find deep and primary elements of your structure by probing. The behaviour of most organisations is to let sleeping dogs lie. Philip and I rehearsed some of the rich language of avoidance: papering over the cracks, putting a brave face on it, skating on thin ice, building on sand. Without a sense of depth and without a strong grip on the primary task (displaced usually by personal survival and immediate concerns) we regularly miss that we are on a collision course. Really, really the most critical things are not even noticed, especially if they are deep in the sense used here.

The leap of faith

Among the grand old practical engineers of huge systems, the hands-on guys who knew intuitively about stability, there was a saying about the necessary leap of faith. No matter how much analysis and modelling and testing has been done, there is always a leap of faith that something will work in practice, and it cannot be avoided. There is no certainty and can be no certainty. This might be Gödel or Heisenberg or whoever, but as a practical matter guys like Bill Livingstone knew it in their waters.

This leap of faith also demands a discrimination between deep and superficial. Work that is ultimately superficial may make everyone feel better, that the situation is under control, but if it obscures the nature of the leap of faith it is unhelpful, counter-productive, potentially confusing and dangerous. Who is shouldering that leap of faith and who is going to learn from how it goes? Can we take leaps early and pin down some aspects of system behaviour? Is it possible to deal with deep and foundational things before they are built on?

This example is getting over-used here but it is a perfect fit. Ancel Keys bet the farm on changing the American diet to reduce rates of heart disease. He was wrong and he caused a global epidemic of illness. He had no safety net to re-evaluate the unfolding outcomes and none of the actual clinical trials supported his guesses. He didn’t know that he was looking at something as deep and foundational as he was.

It turns out that people, all of us, have two metabolic mechanisms: one burning glucose and the other ketones. It is too much glucose and the accompanying too much insulin that brings on our western chronic diseases. But by changing almost everyone’s diet, Ancel Keys accidentally ensured that everyone was stuck on glucose metabolism, because he demonised fat as a body fuel. So all the medical research in the last forty years has used a norm of human biochemistry that is itself a problem. And all the results from forty years of research are of dubious value and no-one can admit it. I have tried with the key proponents of “evidence-based medicine” to get them to take a sceptical view of their statistics.

Our claim in this blog is that this is the normal way of human knowledge. There are deep issues which potentially invalidate everything we know, and we don’t look for them! David Bohm in Thought as a System explains that because thought is a system a single flaw can invalidate the whole structure: weakest link and all that. He has a method called Bohmian Dialogue where a group of people build trust in each other over weeks and months of meeting regularly, gradually questioning each other deeper and deeper about the hidden assumptions in what they are each saying. That is what it takes and that is what no one is prepared to do. Why put vast amounts of effort into undermining your beliefs?

Enterprise architecture

Philip is an enterprise architect, at least sometimes. Enterprise Architecture (EA) as a discipline has an intuitive cachet: it would be nice to understand the structure of an enterprise/organisation in some way that made sense of why certain things are connected and certain things are not. Buildings have some logic (not always good or helpful) and maybe organisations can be designed a bit and mapped a bit too. You can gain extensive and serious qualifications in this field and you can wallow in oceans of detail that does indeed need ordering, but nothing in all that says you will find the deep things or a good angle on the primary task.

There seems to be a tension between clarity and power. Almost all organisations qua organisations are more interested in being busy than in finding out what the job is. There has to be a reason why we almost always risk endless error, damage and rework rather than look for the nature of the leap of faith. Let’s just get on with it.

I was an observer in the early days of building Heathrow Terminal 5, then the biggest construction project in Europe. The contracts for the main contractors were organised this way: each appointed contractor put in a cost estimate for doing their part. BAA, the client, deemed that the total of these estimates was 150% of what they were going to spend, and the contractors needed to work with each other to find the cost savings required.

The contractors loathed with a great loathing being exposed in this way. They wanted to get on doing what they knew how to do and not to have their estimates exposed to scrutiny. They started inventing things to do, saying “we have never done this piece of engineering before and therefore we need to do that piece of work in order to get the estimate more accurate”. Anti-depth: do the things you know how to do so as to avoid the leaps of faith and the perceived risk of failure. Even if the client is telling you the only thing they want at this point is for you to take the risks. You can lead a horse to water but you can’t make it drink.

Almost as a postscript, my colleague Martin Thomas said ten years ahead of the go-live date that baggage handling was on the critical path, i.e. was deep and foundational. No-one could see how a process like baggage handling could be more critical than building the beautiful physical structures. If you remember back to when the terminal first opened, there was baggage handling chaos. Of course Martin was far from popular!
