Breaking the Competent/Disempowered User dichotomy.

Will Pearson
4 min read · Mar 6, 2018

Introduction

One of the goals of off-piste architecture is to reduce what I think of as Architecture/User friction. This friction is caused by a mismatch between what the architecture expects of the user of a computer system and what the user is actually capable of. There are currently two competing views. The first and oldest is that the user is competent to administer their computer. The latecomer view is that users are not competent to manage their computer and must be prevented, for their own protection, from having any impact on how the lowest levels of their system operate.

Neither view is right, and that causes friction between the user and the computer. Either the user makes changes they shouldn’t, or they cannot have any influence over the system as a whole even when it is safe to make a change.

The computer architecture is not what you normally change when you are trying to make computers easier to use. You normally change higher-level components, for example by improving the layout of the user interface or by giving context-sensitive help. You don’t go monkeying around with the architecture and operating system.

But if we want to make computing significantly better for people, we should change as much as we need to. The deeper the change to how we think about computing, the bigger the potential payoff, and the greater the cost as well.

However, this friction is being woven into the foundations of our society as we embed computers into more and more things. It makes sense to get this right, and get it right now; it will only get harder to change.

PEBUAI

There is an unkind phrase in IT, PEBKAC, which stands for “problem exists between keyboard and chair”. It is used to indicate a user error: the user not understanding the system well enough to use it properly.

However, with more and more computers being deployed around us and the software we are using becoming ever more complicated, we cannot be expected to be expert users of all of these systems.

Is it a PEBKAC when you don’t patch your system (or do patch your system with a bad patch), when you don’t install antivirus, or when you are tricked by a very good phishing website? Should these things really be a prerequisite for interacting with the internet? At some point we should stop blaming the relatively limited humans and instead put the blame on the ever more computationally powerful computers. The problem exists between the user and the internet. That is, the problem is in the hardware and software that make up all of our machines; they do not meet our needs.

One possible way of mitigating some of these problems is offloading the maintenance of our systems to the people we buy them from. However, we also cannot rely on manufacturers maintaining security or keeping their servers running. If maintaining an internet-enabled device is no longer a profitable business, it won’t be maintained.

Humans and computers have not always had such friction; at the birth of modern computing things were relatively harmonious. However, the model of interaction that was settled on then has set us down our current path.

Dawn of modern computing

In the era of punch cards the admin/computer relationship made sense. Expert programmers had simple programs that they understood and wanted to run. There were no viruses, no complex software stacks and no operating system to interact with. Things were pretty simple back then; before time sharing and file storage, there was not much you could muck up. The user could emulate what was going on in the whole system in their head, just a lot slower. Users were in control.

To this day, we still expect users to be either this competent or so thoroughly incompetent that they should have no influence over the low-level operation of the system. It is a binary decision.

One example of this competent/disempowered dichotomy is in how a computer first boots up.

Giving it the Boot

In simple terms, what happens when you boot a computer is this: the BIOS runs, picks a defined point on your storage and loads a program from it. That program is the boot loader.

The boot loader’s heavy responsibility is to load the rest of the operating system. It could choose to infect every program it loads with malware, or simply load plain malware.
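
To make that concrete, here is a toy sketch in Python of the old BIOS-style handoff. It is purely illustrative (real firmware runs machine code, and names like firmware_boot are mine), but it shows how little checking stands between power-on and whatever code sits at that defined point:

```
# A toy sketch of the legacy BIOS/MBR handoff. Purely illustrative: real
# firmware runs 16-bit machine code, not Python, and these names are mine.

BOOT_SIGNATURE = b"\x55\xaa"   # magic bytes ending a 512-byte boot sector
SECTOR_SIZE = 512

def firmware_boot(disk_image: bytes) -> None:
    """Mimic the BIOS: read a fixed location and hand over control blindly."""
    boot_sector = disk_image[:SECTOR_SIZE]        # the "defined point" on storage
    if boot_sector[-2:] != BOOT_SIGNATURE:        # the only check the BIOS makes
        raise RuntimeError("no bootable code found")
    run_boot_loader(boot_sector, disk_image)      # unconditional transfer of control

def run_boot_loader(boot_sector: bytes, disk_image: bytes) -> None:
    """The boot loader now has full control: it decides what 'the OS' is."""
    kernel = disk_image[SECTOR_SIZE:]             # load whatever it points at...
    start_kernel(kernel)                          # ...benign or malicious

def start_kernel(kernel: bytes) -> None:
    print(f"jumping to {len(kernel)} bytes of kernel code")
```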

This boot-loading step happens in computers, phones, thermostats and so on. Do users have the required skill or knowledge to maintain all of those things?

One alternative, introduced by UEFI Secure Boot, is to have code signing all the way up to the operating system. But that means the user cannot change things at all, unless they load code signed by someone else or figure out how to sign things themselves.
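
Roughly, the Secure Boot approach looks like the sketch below. This is a simplification I am making for illustration: the hash allow-list stands in for the real signature check against vendor-held keys, but it shows where the user gets locked out:

```
# A simplified sketch of the Secure Boot idea: each stage refuses to run the
# next unless it appears on an allow-list controlled by someone else. Real
# UEFI verifies signatures against keys held in firmware; the SHA-256
# allow-list below is only a stand-in for that check.

import hashlib

TRUSTED_HASHES = {
    # Populated by the platform vendor, not by the user. Adding your own
    # entry means enrolling your own keys, a step most users never take.
    "3f2ac0...": "vendor-signed boot loader",      # placeholder digest
}

def verify_and_run(image: bytes) -> None:
    digest = hashlib.sha256(image).hexdigest()
    if digest not in TRUSTED_HASHES:
        raise PermissionError("unsigned or modified boot component refused")
    execute(image)                                 # only vendor-approved code runs

def execute(image: bytes) -> None:
    print(f"running {len(image)} bytes of approved code")
```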

The user may want to change things about the boot loader or the operating system it loads. Perhaps they want to speed it up or remove a feature that has been added.

But if they are given control, they do not have the knowledge to use it wisely.

Breaking the Dichotomy

So what is the alternative? Rather than having a single point of failure or control, our computer architectures could automatically load multiple redundant programs that interact to form a functional computer.

The user could then load a new component, and if it failed or proved to be malicious, another good component could substitute for it. No component would have overall control; there would be no critical linchpin to fail.
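
As a hedged sketch of what that might look like (the Supervisor and the component names below are made up for illustration, not an existing system), a coordinating layer could simply route around any component that fails:

```
# A minimal sketch of the redundancy idea: several independent components
# offer the same service, and the system routes around any that fail or
# misbehave. The names here are illustrative, not a real API.

from typing import Callable, List

class Supervisor:
    def __init__(self, implementations: List[Callable[[str], str]]):
        self.implementations = list(implementations)

    def request(self, job: str) -> str:
        """Try each redundant component in turn; drop the ones that fail."""
        for impl in list(self.implementations):
            try:
                return impl(job)
            except Exception:
                self.implementations.remove(impl)   # substitute a good component
        raise RuntimeError("no working component left")

# Usage: a user-installed component that turns out to be broken is simply
# bypassed, and the stock one keeps the system functional.
def experimental_renderer(job: str) -> str:
    raise ValueError("buggy user component")

def stock_renderer(job: str) -> str:
    return f"rendered {job}"

system = Supervisor([experimental_renderer, stock_renderer])
print(system.request("home screen"))   # -> "rendered home screen"
```

The point is not this particular code, but that control is distributed: removing or breaking one component does not take the whole system down.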

I think that Agoric computing has a place in this view of computing, but it might not. If we want better computers, people working in technology need to question how our systems work from the ground up. Off-piste architecture is the beginning of that journey.

Without going on that journey we will be embedding two wrong views of users in the computational substrate of our civilisation.
