Following up on a previous post, I’ve been thinking and reading about automation, and especially RPA (robotic process automation), ever since I read this tweet (thread). For more context, I’d also encourage you to read what Jamie has to say about it.
As more businesses and organizations adopt robotic process automation routines, replacing humans with programs and algorithms, they introduce a degree of inequality that is difficult to comprehend unless you perceive it sharply enough to be absolutely appalled by it. Algorithm-driven systems now touch a large part of our day-to-day lives: micro and small-ticket loans; ticket pricing for travel and accommodation; insurance claims handling; offers and discounts made available at businesses; access to services; access to credit; access to medical benefits; and access to transport and cabs are just a few among other things.
The Seductive Business Logic of Algorithms | Data Driven Investor
The marketing term for this efficiency that we are told to welcome into our lives is “AI-powered” or “AI-driven”. In effect, what we are being asked to feel comfortable with is a black-box system that ingests certain specific facets of data and then produces a “solution”, or a response on which certain subsequent events will happen. Regardless of a person’s level of education or ability to understand the transaction flow, this opaque system of information processing is capable of causing significant disorientation and helplessness. Automated business systems also pursue efficiency by removing all possible interactions with a human. In contrast with previous-generation systems, where one could reach out to a human to seek an explanation or understand the procedural internals, AI systems (and chatbots) offer a seemingly precise set of baked-in questions and responses that may not encompass the full, diverse set of problems.
There’s another part we tend to overlook, and that is the labelling of everything as “AI” or “artificial intelligence”. Seemingly, this helps a business appear more modern and efficient, and likely convinces investors of its ability to grow and scale without being bounded by the need to hire more personnel. Stripping away the jargon, what we really have are “learning systems”. And like all such systems, to produce solutions aligned with goals pre-determined by the system’s designers, they need structured and labelled data. This is the other part of the story: work farmed out to countless humans as atomic tasks, yielding a well-prepared dataset on which systems design can begin to work.
The fundamental problem with these new systems is the absence of a redressal mechanism, hand-waved away with “Oh! It is the system.” Anupam Guha makes a valid point against this habit of assigning moral responsibility to inanimate systems and, in a way, pushing the consumers of such services up against an uncaring wall. The opaque design of these systems also causes first-level support to be less focused on actually understanding the problem and more invested in repeating templated messages generated when a transaction goes awry.
We fail to grasp the serious degree of inequity that will be created as more consumers, unable to navigate a process automation system (e.g. IVRs or chatbots attempting to respond to process automation failures), are left stuck in limbo. The movement for more structured and explainable AI is well placed to link the moral responsibility of a business (and corporation) to the technological responsibility encapsulated in the design of the system. As more nations adopt the general thrust of the GDPR, elements such as explainable systems also need to be mandated through regulatory processes. Otherwise, the inadvertent cruelty of algorithms will become both normalized and a tool for oppression on a scale that we are yet to comprehend.