August Macke. Fashion Shop.

The Role of Artificial Intelligences

Joop Ringelberg
8 min read · Sep 21, 2019


Why Perspectives Is The Modelling Language of Choice For AI In IT

In our day and age we witness the introduction of Artificial Intelligences into society. Intuitively we perceive AIs as fundamentally different from ordinary automation. Yet they come to us through the work of the IT sector. We might therefore expect an equally fundamental change to move that sector. But it seems otherwise: as it has done before, the industry tries to absorb the new thing gradually. We have, however, good reason to expect that this time a paradigm shift is called for. In this article I focus on one facet of that shift, to wit, the method the industry uses to design software. I will show how the de facto standard, the Unified Modelling Language, falls short of today's requirements. I will also show how Perspectives fits the bill perfectly.

In short, I put forward that Perspectives is a method to design software systems that support people in their co-operation, including co-operation with Artificial Intelligences — a brilliant insight due to Cor Baars.

Artificial Intelligences continue to amaze us with their capabilities. They perform actions that we had until then considered exclusively human behaviour. But such amazement fades rather quickly. I am old enough to remember the sense of awe that struck us as we held in our hands a small machine that could carry out complex calculations, such as raising a number to a power or, inversely, taking its root. Nowadays we no longer consider such calculations a hallmark of intelligence. We are all too aware of the mechanisms behind them.

A tongue-in-cheek definition of AI is: automation of things we do not yet truly understand. But that would mean that what is AI in one age is no longer AI at a later time. Just as we get used to yesterday's miracles, year after year researchers discover more algorithms that lead machines to perform new ones. This opens up the Bald Man Paradox for AI, which goes like this: a bald man has no hair. A man with a single hair is still bald. So adding a single hair makes no difference. Yet, if we keep on adding hairs, at a certain point our man surely can no longer be called bald! This gradualism might be dangerous with regard to AI: like the fabled frog, we might find ourselves cooked without ever noticing (1).

Even though AI's miracles get all the attention, they cannot, then, be its hallmark. So let me draw your attention to another concept: agency. Agency will prove to be the key to a new understanding of what software (including AIs!) really is.

Intuitively, human beings have agency. The very act of speaking of agents — and not of humans — is to entertain the possibility that other entities could have agency, too. An agent reveals itself by its actions (2). As an example, consider my mother, who used to wake me so I would go to school. Her action reflected agency, no doubt about it! There was intention behind it and purpose. As it happened, at a certain moment she gave me an alarm clock. So from then on, I was awakened by the alarm clock instead of my mother.

Did the clock have agency?

Certainly I was mad at it, at times, conferring on it a misguided purpose and merciless behaviour, just as if it were a human being (3). But on the other hand, its mechanism was so simple and so transparent to me that I have never considered it to be intelligent.

Intelligence is not the same as agency.

I put forward the notion that my mother conferred some of her agency on the alarm clock. Instead of waking me up manually, as it were, she deployed a mechanism that would have the same effect. She still was the source of the intention behind the action, but she delegated the execution to a machine (4).

Delegating agency is a very powerful notion. It is strongly related to the concept of discretion, in the sense of the authority to confer a judgement. Imagine the employee at the entrance to the theatre who will take your ticket and admit you to the show. Now picture yourself arriving at that door without a ticket, pleading with the doorman to let you in. He will say something to the effect that he cannot do that. Is he right? Superficially, if we just look at the situation of the two of you there at the door, ignoring context, obviously he can let you in. He could simply step aside! But he feels himself bound by the rules of the organisation that employs him. These rules include strict guidelines on the admission of guests. Moreover, even while you, as a visitor, may not be bound by these same rules, you are bound by society's rules that regulate entrance to private buildings.

The doorman does not have the authority to admit persons on his own judgement. It is not at his discretion. His actions, in his role as doorman, are limited. But roles can bring an extension of powers, too! Just think of a business CEO with the discretionary power to build a new factory in a town — or not to do so. Such decisions are not for everyone.

We see, then, that organisations are built in terms of roles vested with specific discretionary powers. With a small but significant change of vocabulary I propose to call them roles with expanded and/or diminished agency. In short, a role has delimited agency. The terms in which we set these limits are actions.

Let me bring up two more forms of delegation of agency, to prepare ourselves for its full treatment in relation to Perspectives. Our legal systems provide the notion of power of attorney. This is the authority to act for another person in specified or all legal or financial matters. So an attorney can represent another natural person and may take the decisions and actions that this other person is authorised to take.

The second form of delegation I’d like to bring up is that of representing a legal person in the sense of a corporation. Any employee dealing with someone who is not an employee of the same legal person, represents this legal person — but obviously just to a certain extent. Just to what extent is covered by the definition of his role, defined in terms of actions (as explained above). This is why it is important for an organisation to define its roles rigorously, because otherwise it could be held liable for any action performed by an employee. By showing how such an employee transgressed the limits of his role, the legal person protects itself and brings the liability for the employee’s deeds squarely down on his own head.

Perspectives are the actions of agent roles in a context. This is the most concise definition of the software design method ‘Perspectives’. Here we see the principle of delegating agency at work. Who delegates what to whom? That depends on the context. By definition, if one participates in a context, one underwrites the role system that defines that context. This may be as simple as an evening out with friends, where one of them took it on her to make a restaurant reservation. If she did not, her friends may rightly reproach her for that; but on the other hand, they cannot complain to her if the restaurant is not to their liking.
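Perspectives has its own modelling language, whose syntax I will not reproduce here. As a rough illustration only, here is a minimal Python sketch (all names are mine, not Perspectives constructs) of the core idea in this definition: a context is defined by its role system, and a role's agency is delimited by the actions the context grants it.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """A role in a context; its delimited agency is a set of actions."""
    name: str
    actions: set[str] = field(default_factory=set)

@dataclass
class Context:
    """A context is defined by its role system."""
    name: str
    roles: dict[str, Role] = field(default_factory=dict)

    def add_role(self, role: Role) -> None:
        self.roles[role.name] = role

    def may(self, role_name: str, action: str) -> bool:
        """An action falls within a role's agency only if this context grants it."""
        role = self.roles.get(role_name)
        return role is not None and action in role.actions

# The evening-out example: one friend's role includes making the reservation;
# the other friends' roles do not.
evening = Context("evening out")
evening.add_role(Role("organiser", {"make reservation"}))
evening.add_role(Role("guest", set()))

print(evening.may("organiser", "make reservation"))  # True
print(evening.may("guest", "make reservation"))      # False
```

The reproach in the example above falls out of the model: the organiser can be held to the one action her role grants, and to nothing more.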

But the context may be a large corporation with hundreds of roles and dozens of sub-contexts. Here, in the end, the legal person delegates to its employees, while at the same time being represented by one of its employees, who in turn will have delegated much of his agency to sub-employees, etcetera.

The point is that with Perspectives we can model both the simple and the intricate structure. Structuring delegation is implicit in building a Perspectives model; actual delegation takes place as natural persons accept roles.

Let's return to Artificial Intelligence. Consider a thrombosis service. Patients have their blood sampled and analysed by a doctor. The doctor decides just how many tablets of coumarin each patient should ingest daily in order to maximally and safely lower the risk of blood clots. One easily discerns the various roles: the patient, the nurses taking blood samples, the doctor, the pharmacist in a wider context, etc. With a Perspectives model of these contexts, software supporting these professionals in their jobs can be generated automatically.

Deciding on the right dosage is manual work, requiring great experience and knowledge. Until recently, that is. Dr. M. Beinema discovered a simple and elegant algorithm that computes a dosage based on just a few measurements in most cases and is capable of rejecting cases it cannot safely cover (5). By incorporating this algorithm in the Perspectives program as an action of a bot, we can easily improve the software to the point where the doctor just has to handle the cases rejected by the algorithm. Doing so would mean that some of the agency of the doctor's role has been delegated to the bot. The bot carries out the analysis, but the intention and responsibility remain with the doctor. Does the doctor agree? He will signal this by continuing to play his role.
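The delegation pattern described here can be sketched in a few lines of Python. Note that the dosing rule below is a made-up placeholder, not Beinema's actual algorithm; the point is the shape of the delegation: the bot acts within its delimited agency and hands the rejected cases back to the doctor, with whom the responsibility remains.

```python
from typing import Optional

def bot_dosage(measurements: list[float]) -> Optional[float]:
    """Stand-in for the dosing algorithm (hypothetical thresholds):
    returns a dosage, or None when the case falls outside what the
    bot can safely cover."""
    if len(measurements) < 3 or max(measurements) > 4.0:
        return None  # reject: refer the case back to the doctor
    return round(sum(measurements) / len(measurements), 1)

def doctor_dosage(measurements: list[float]) -> float:
    """The doctor's own judgement; a placeholder here."""
    return 2.0

def decide_dosage(measurements: list[float]) -> tuple[float, str]:
    """Delegated agency: the bot acts first; the doctor keeps the
    rejected cases, and the responsibility for all of them."""
    dosage = bot_dosage(measurements)
    if dosage is not None:
        return dosage, "bot"
    return doctor_dosage(measurements), "doctor"

print(decide_dosage([1.8, 2.2, 2.0]))  # within the bot's agency
print(decide_dosage([5.1, 2.2]))       # rejected, handled by the doctor
```

In a real Perspectives model the second branch would not be a function call but the doctor's role itself, which is precisely what makes the delegation traceable.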

We believe this is the way we should incorporate AI into information systems. It does not matter how complex or how clever the software program is that performs a task. Is Beinema’s algorithm AI? Maybe. It is artificial and it took a lot of intelligence. But the more important question is not about the nature of the program, but about its role in society. We believe that decisions and other actions affecting patients, clients, citizens, etcetera should be traceable in the web of roles that is the fabric of society. Perspectives offers a very precise and effective method to construct that traceability.

Finally, UML. The Unified Modelling Language can be used to design the structure of a computer program. Its models can provide a high-level overview and insight where actual program code is too detailed. But as such, UML models are about the nature of the program: how it is structured, how it functions. They have nothing to say about agency or delegation. UML is concerned primarily with the thing itself, a little with its function, but not with making clear how this thing relates to agents, human or otherwise. As a consequence, agency is delegated without thinking.

Yet UML continues to be the de facto standard for the software industry. This means that the engineers and modellers who gradually change our society, by bringing ever more sophisticated software into its nooks and crannies, lack the conceptual tools to think clearly about traceability and responsibility. We transfer more and more agency to machines with no method to its possible madness. Already we see many signs of the ill effects this has; we hear of distressing cases of people deprived of access, services or even goods that to all intents and purposes should be theirs, while there seems to be no escape, no one who can step in and correct the wrongdoings. Not because that would be impossible, but because it was not designed to be possible.

(1) I will explain in the last paragraphs what ‘being cooked’ means, in relation to AI.

(2) As a definition, this is entirely circular.

(3) Although I’ve never hit my poor mum on her head, like I hit that clock!

(4) I paint a rather spineless picture of my younger self here, dear reader, all for the benefit of the discourse! You will have surmised that in time, I set the alarm clock myself, becoming the agent behind it. I did take control, in the end.

(5) Beinema, MJ, Meer, FJM, Brouwers, JRBJ, Rosendaal, FR. Optimization of vitamin K antagonist drug dose finding by replacement of the international normalized ratio by a bidirectional factor: validation of a new algorithm. J Thromb Haemost 2016; 14: 479– 84

This is the eighth column in a series. The previous one was: To A Bridge In Amsterdam. Here is the series introduction.
