On the plonkiness of IT

Aidan Ward
GentlySerious
May 29, 2018

Aidan Ward and Philip Hellyer


Compare if you will the subtle beauty of a good epithet and the necessary plonkiness of a piece of software. An epithet is capable of meaning all things to all people, of a chameleon shift of mood to match its surroundings, whereas a piece of software is unbearably static. For me this is the difference between breathing and suffocating. And for you?

These two windows on the world each have a matching Weltanschauung.[1] For the first, the world we see is deeply informed and coloured by what we already hold to be true. For the second, the world just is, and is out there. It is plonked, and we deal with its plonkedness, or not.

You can be paid serious money for writing software, and although there are wordsmiths who make a decent living, they tend to have to make it on their own, not as employees. The corporate behemoths hold plonkiness in high esteem and will insist that their Weltanschauung is the only one, and that their profits vindicate its truth.

Answers are contingent

There is a lot of software in the world, and its quantity and importance are growing. I read a report about Stuxnet as the pinnacle of the craft: What is the most sophisticated piece of software/code ever written? It seems that the true use of plonkiness is to undetectably destroy uranium centrifuges.

Unpredictability in software is generally pretty undesirable: it should do what you expect it to do. But that simple statement implies that someone has answered, definitively, some questions about what exactly it should do. When you get to shake hands with the Queen, she will ask you “And what exactly is it that you do?” (Get the accent right, please.) The Queen asked the economists at the British Academy why they had not predicted the last great crash: “Oh, no-one predicted that, Your Most Gracious Majesty”. (Except they were wrong there too…) The power of a good question with some status behind it!

The debate about so-called artificial intelligence is not very intelligent but it indicates awareness of a shift. To focus for a moment on the answers to the questions that must be asked: Are they static, as the software must be? Are they obvious in principle? Well, we would list at least:

· The answers may change with time: what is true today may not be true tomorrow.

· The answers given may never have been true, or may be a true answer to the question that was heard, not the one that was asked.

· The answers may shift depending on what people think the context of the question is; that is, they may be highly contingent on other answers.

· The articulation of an answer is itself a problem: finding a way to say something such that the intended meaning is conveyed.

· Both the question and the answer are subject to social inhibitions about what it is possible to have a conversation about: for instance, that the law on a subject may be an ass.

· When a question is answered definitively, that may allow other actors to subvert the intended consequences.

“There is always a well-known solution to every human problem — neat, plausible, and wrong.” — H.L. Mencken

Or even:

“The unthinkable,” as Haitian anthropologist Michel-Rolph Trouillot writes, “is that which one cannot conceive within the range of possible alternatives, that which perverts all answers because it defies the terms under which the questions are phrased.”

And the plonky IT version:

The whole point of advertising is not to make a set of rational arguments that would make you justify your buying, but rather just to say any old thing at all. If you repeat the name enough times, then you’ll automatically reflex and think of that object when the time comes to buy. So, you can have a program that has nothing to do with the object and it’s just as good as one that does, if not better. — David Bohm

The status of development artefacts

For all the reasons rehearsed above, when people create artefacts of various types within the development process, their status, their relationship to some “truth”, can all be moot. However, what we find is that, under pressure of time and cost and ego, once artefacts are created they are treated reverentially. As David Bohm says above, some conventional truths, implemented as software, are just random and no worse for that in their own terms.

It is the selection of what gets created as an artefact in the development process that is really perverse, and that has the strangest effect on the eventual outcomes. The selection of what gets:

· Measured

· Tested

· Recorded

· Reported

comes about by a process that is not understood in terms of its sensitivity to other choices. The well-known example is measuring lines of code written,[2] which automatically makes the solution outcomes worse. If you want a really good solution, try minimising coding altogether.[3]

I don’t know if it is still the case, but my understanding is that if you commit a driving offence in a State of the US, it will of course be recorded there on some database for future reference. And the databases in all the other States will eventually be updated to record that fact. But what if it is not a fact but an administrative error of some sort? If you go through whatever bureaucratic process you need to, the database in the State in which your offence took place will get corrected. But before the other States can also update their records, they will themselves have updated the original State database to re-record the offence. It is impossible to correct the record.[4]
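In replication terms, the trap is a sync with no notion of deletion: to every stale peer, the corrected record just looks like a gap to be helpfully filled. A minimal sketch of the feedback loop, in hypothetical Python (the real systems are presumably far messier than three sets and a gossip round):

```python
# Toy model of the re-recording loop: each State's database copies any
# record it is missing from its peers. There is no notion of deletion,
# so a correction in one State is "helpfully" undone by every stale peer.

def naive_sync(databases):
    """One round of gossip: every database copies the records it lacks."""
    for mine in databases:
        for theirs in databases:
            for record in theirs - mine:
                mine.add(record)  # stale data flows back, undoing corrections

offence = ("driver-123", "speeding")
original_state, other_a, other_b = {offence}, set(), set()
all_states = [original_state, other_a, other_b]

naive_sync(all_states)            # the offence propagates everywhere
original_state.discard(offence)   # the bureaucratic correction succeeds...
naive_sync(all_states)            # ...until the next round puts it back
print(offence in original_state)  # True: the record is effectively immortal
```

The usual cure is a tombstone or version stamp on each record, so that “deleted at version 2” outranks “present at version 1”; without one, a correction is just another difference for the gossip to repair.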

I have two daughters who work for CGP, who produce the revision guides most relied on by English school students. CGP have witty cartoons to help students remember boring stuff. You may recall the fall from grace of Jimmy Savile, who became a most unacceptable character to find in a school-related text. Well, during a revision process, the company found a cartoon that said “Jim will fix it”. So a product that was very successful, fully tested, well-liked, becomes completely unacceptable, maybe even reputation-damaging, because of a change in circumstances.[5] In practice, checking the vast range of products for that, or a similar, gaffe was quite taxing.

Any of the artefacts produced in the development process, even if they were good and blameless originally, can become wrong and damaging in this way. And almost no-one is set up to keep all their documents current. Philip worked for an outfit that accidentally kept things up to date by always starting from scratch, but this was not a conscious choice and not done for that reason.[6]

Compounding the problem

For an IT professional, a set of inter-related programs is called a codebase. I saw a quote the other day to the effect that if you are working on a codebase and it is a joy to work with, then the set of human relationships that led to its creation, right back to its inception, must all have been harmonious and productive. And of course the converse is true: without good relationships it is impossible to produce such a codebase, and working with what you have produced will be a pain.

We can actually see this effect in other domains. A good law will have been carefully considered by a wide range of people and they will have joined cause, if only to make sure the end product was usable by the legal process. A knee-jerk piece of legislation never achieves its intended purpose, even if the intentions were great. Contracts of all sorts can also be really complicated in the way they cross-reference themselves internally. It is possible to have a clear and concise contract and it is possible to have one that has grown in such a way that no-one will ever understand its impacts.

This leads us to a distinction between complicated and complex. Complicated means that given enough time and patience it is possible to understand how something works, like a mechanism. Complex means the behaviour is not decomposable.

I would not give a fig for the simplicity this side of complexity, but I would give my life for the simplicity on the other side of complexity.

— Oliver Wendell Holmes Jr.

One of Philip’s public identities is Enterprise Architect.[7] EAs build models of organisations to, allegedly, help architect and coordinate the way things work together. Which sort of implies that they believe organisations to be complicated, when everything screams that they are actually complex.

In recent years, many enterprise architects have looked outside their traditional peer groups in search of new approaches and new understanding. Even if you consider EA to be merely an IT discipline, the mechanical heritage of IT architects limits the effectiveness of their tools and approaches. EAs’ ambitions to address bigger, more meaningful problems have outstripped their complicated-realm capabilities. (This is a positive step in the evolution of enterprise architecture.)

In Philip’s workshops, there’s always someone who clings to the mechanistic mental model, who has no appreciation of the other humans involved in or impacted by the decision, who insists that the illegible spaghetti-like wiring diagram on their slides is both essential and meaningful to the audience. In a world that juggles pathos, ethos, and logos, they’ve gone ‘all in’ on logic, even after it has become clear that it’s a path to irrelevance. Somehow they’ve missed noticing other levels of logic.

Recursion of the above

The world is not flat, by which I mean not that you can’t sail off the edge, but that what we believe about the world we experience is not all at one level. Just as there are keystone species in ecosystems, there are keystone beliefs in our world, and when one of them crumbles many, many things need to be reconsidered. It seems at present that almost the whole of modern medicine is built on a rather random misconception of human metabolism, putting all previous research results at risk.

My standard complaint about software is that it tends to build seriously crappy thinking into the mindsets of large numbers of people. A good bad example might be MS Project, which cements a really defective approach to planning into being “normal”, or the way you do it. In David Bohm’s book Thought as a System, one of his principles is that a single thinking mistake can affect the whole of the rest of our thinking. I am only saying that some erroneous thoughts do that more obviously and completely than others.

So, everything we have discussed to this point jeopardises both the detailed work that is derived from what we are doing and the bigger picture for which what we are doing is foundational. Error spreads upwards and downwards. It takes excellent human relationships, like those between colleagues in Bohmian dialogue,[8] and real work to keep things workable.

Since thought mistakes, false assumptions, and fantasies become things in themselves, their effects are unfortunately all too real. If we remember Scott’s Seeing Like a State and think about the destruction caused just by looking and measuring, we can see that this must be true. I think Michael Gove’s much-hated time as Secretary of State for Education is an excellent recent example. Rigorous exams destroy the remaining potential of the education system to educate kids, and the effect is invisible from the Department.[9]

Finally, the real is not necessarily coherent. It may even be necessarily not coherent. We could never know.[10] The way we observe the world means that it is often far from consistent and coherent, though some people say that reality is necessarily seamless and perfect in its own way. The underlying reality is in doubt and is differently experienced by different people. What, then, does this mean when interfacing a plonky IT system to the real world? An interesting proposition, in the sense of “may you live in interesting times”…

[1] German, apparently, for worldview. Welt = World, Anschauung = Perception. Could even be a whole philosophy of life…

[2] The existence of line-counting as a management metric makes me wonder whether any programming languages were specifically designed to maximise line count… I myself used to aim for negative daily numbers, which could be equally misguided.

[3] Think smart data, stupid code. The rise in declarative programming may have something to do with valuing this approach. There’s an adage from Kernighan and Pike, something along the lines of needing to be smarter than your code in order to debug it, so please don’t write your cleverest code in the first place…
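As a toy illustration of that adage (hypothetical Python, with made-up regions and rates), compare logic buried in branches with the same rules lifted into a table that the stupidest possible code can walk:

```python
# "Clever" code: the shipping rules are buried in branches, so every new
# region means editing, and re-debugging, the logic itself.
def shipping_cost_clever(region, weight_kg):
    if region == "UK":
        return 3.0 if weight_kg <= 2 else 3.0 + 1.5 * (weight_kg - 2)
    elif region == "EU":
        return 6.0 if weight_kg <= 2 else 6.0 + 2.5 * (weight_kg - 2)
    return 12.0 + 4.0 * weight_kg

# Smart data, stupid code: the rules live in a table; the code just looks up.
RATES = {
    # region: (base price, kilograms included, price per extra kilogram)
    "UK":  (3.0, 2, 1.5),
    "EU":  (6.0, 2, 2.5),
    "ROW": (12.0, 0, 4.0),
}

def shipping_cost(region, weight_kg):
    base, included, per_kg = RATES.get(region, RATES["ROW"])
    return base + per_kg * max(0.0, weight_kg - included)

print(shipping_cost_clever("UK", 5), shipping_cost("UK", 5))  # 7.5 7.5
```

Adding a region to the second version is a data edit, not another branch to be cleverer than.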

[4] One place I worked managed to declare the same person dead on multiple occasions. We were an ‘authoritative source’ in the eyes of the credit agencies, so no sooner had he (once again) established himself as one of the living, than we told them (once again) that he was definitely dead. He eventually arrived in person and camped out in our reception (with a bodyguard) until we resolved the issue and compensated him for the destruction of his (and his business’s) credit and cash flow.

[5] It’s for similar reasons that Charlie Munger advocates choosing one’s heroes from amongst the ‘eminent dead’.

[6] There’s probably a level of architectural documentation that’s worth keeping up to date, but we usually choose the wrong granularity and either exhaust ourselves with the volume of changes required, or omit so much detail that the ‘reference architectures’ are unreferenceable in practice.

[7] For those who don’t know Philip, he was the EA Group Lead at Carphone Warehouse plc at a time when its operations spanned the globe. He’s still an industry advisor to IRM UK’s international EA conference. Now he spends much of his time helping deeply technical people expand their functional awareness of business & boards, of customers & context.

[8] Bohmian dialogue, in case you have never tried it, is the sustained effort of a group of people, over months and years, to find the assumptions and mistakes that are holding them back. Liberating.

[9] I once listened to a senior civil servant from the ODPM explain passionately how because he lived in England he could tell exactly what his own children were achieving at school whereas if they lived across the border in Wales this would not be possible. I could have wept or thumped him.

[10] Indeed, one sign that something is ‘off’ is that an account of it is entirely too complete and consistent. Similarly, Scott Adams says a clear sign of cognitive dissonance is when two groups of people are exposed to the same facts but report wildly different stories, as though they were watching different movies.
