Control, trust and accountability in a world of machines…
Part 2 — Who is accountable?
This post is the second in a series of four. It finds its origin in #AutoProcure, the exciting initiative that Kelly Barner, from Buyers Meeting Point, and Rosslyn Analytics, recently launched to create “an active discussion around procurement’s relationship with automation: both what it is and what it ought to be.”
In the first post in this series, I addressed the question of “who is in control” by looking at how computers transformed the job of airplane pilots:
I mentioned how Boeing and Airbus differ in their design philosophies regarding the chain of command and the priority of authority:
- Boeing places more trust in the pilots
- Airbus places more trust in the computer
I also highlighted that both approaches have prevented accidents and, at the same time, contributed to some. Neither option has a clear and definitive advantage. Pilots themselves do not have a clear preference:
“Boeing or Airbus? Among pilots, this has become almost a question of faith. “That’s just as hard to decide as the question of whether Mercedes or BMW is better,” says pilot representative Braun.” The Computer vs. the Captain, Will Increasing Automation Make Jets Less Safe? — Part 3
To explore the question of accountability further, let’s look at another means of transportation where technology is slowly but steadily taking over from humans: self-driving cars.
Compared with airplanes, self-driving cars will face complicated questions much more frequently. Moral ones. Due to the nature of traffic conditions, situations requiring a decision about “who to kill” will happen more often on the road than in the air.
The logical and rational approach to answering this moral question seems obvious. The truth is more complex. From a scenario point of view, some situations require the analysis of many parameters. And sometimes there isn’t a good decision. Give it a try; MIT has a website for that!
Things get even more complicated when you look at that same moral question when buying a self-driving car:
“[M]ost people agree that from a moral standpoint, cars should save the maximum number of people even if they must kill their passengers to do so.[…] When given the option of hypothetically buying a self-driving car that’s utilitarian (it saves the greatest number of people) or one that’s selfish (programmed to save its passenger at all costs) people are quick to buy the selfish option.” Source: The Self-Driving Dilemma: Should Your Car Kill You To Save Others?
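The gap between the two attitudes in that quote can be made concrete with a toy sketch. The code below is purely illustrative (the policy names, the “swerve vs. stay” framing, and the casualty counts are all invented for this example, not taken from any real system): a “utilitarian” policy minimizes total casualties, while a “selfish” policy protects the passengers no matter what.

```python
# Toy sketch of the two hypothetical policies discussed above.
# Framing (assumed for illustration): "stay" on course hits the
# pedestrians; "swerve" into an obstacle endangers the passengers.

def utilitarian(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Minimize total casualties, even at the passengers' expense."""
    return "swerve" if pedestrians_at_risk > passengers_at_risk else "stay"

def selfish(passengers_at_risk: int, pedestrians_at_risk: int) -> str:
    """Protect the passengers at all costs, whatever the count outside."""
    return "stay"

# One pedestrian-heavy scenario: the two policies disagree.
print(utilitarian(1, 5))  # the utilitarian car sacrifices its passenger
print(selfish(1, 5))      # the selfish car protects its passenger
```

The survey result quoted above is exactly this disagreement: people endorse the first function in the abstract and buy the second one for themselves.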
Dilemmas and moral questions… also apply to Procurement decisions.
Not in the same manner as for self-driving cars, but still. Sustainability is an important component of the Procurement value proposition (I will cover these aspects in greater detail in part 4). Terms like CSR, ESG, and circular economy all illustrate that. And there are many other trade-offs that impact decisions.
The more machines take over decision-making processes, the more they will find themselves in situations where a decision is not just a simple choice between A and B, just like self-driving cars. And, like us, they will have to answer for their acts. So, when a decision leads to problems, issues, or accidents, who will be held responsible?
With regard to Procurement solutions, here are some of the possibilities:
- The user (who can be from Procurement, from any other function, or even external to the company),
- The Procurement organization that selected the solution and, in some cases, that the solution represents,
- The solution provider that designed the solution.
Once (and if) responsibilities are defined, the next important question is: how do we ensure it will not happen again?
To answer that question, and the question of responsibility and accountability, an important consideration is the nature of the machine. Was it programmed? Did it learn from us?
This is what part 3 of this series will cover…
If you enjoyed this, please scroll down and click the “recommend” or “share” button.
If you have your own “perspectives”, just use the “response” feature.