Assisted, not Automated: Striking a Balance in Decision-Making with AI

Sandro Mikautadze · Published in Predict · May 28, 2023

[Image: a scale. Credits: Author via DALL-E]

From Tesla’s self-driving cars to Boston Dynamics’ agile robots, from face detection to image generators, from social media recommendations to map navigation systems, the impact of artificial intelligence (AI) on our lives has steadily grown in the last few years. Like any other disruptive innovation, sooner or later AI will touch every imaginable aspect of our lives. One such aspect has been emerging lately under the name of automated decision systems (ADS). ADS refers to the use of algorithms or other automated processes to make decisions without human intervention. In other words, ADS involves using technology to analyze data and arrive at a decision, without a person having to manually review the information to make it.

The Growing Influence of Algorithms in Decision-Making

If this sounds futuristic and fictional, it is: one of the earliest examples of automated decision-making can be found in Isaac Asimov’s “I, Robot”, published back in 1950, in which robots are entrusted with decision-making on behalf of mankind. Today, there are countless examples of this fiction becoming reality. One basic and easy-to-understand example is credit scoring. Lenders need to assess a borrower’s creditworthiness and decide whether to approve or deny the loan application; the lender is the decision maker and must choose whether to hand over the money or not. Automation comes into play because an algorithm can weigh a variety of factors, including the borrower’s credit history, income, and debt-to-income ratio, to reach a final decision. In a fast-paced environment such as banking, automation allows lenders to decide quickly, leading to more efficient and consistent decisions. Another example is job recruitment, where, in the first screening phases, algorithms consider education, work experience, skills, and other proxies to rank candidates by their suitability for the role, which saves time and resources.
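To make this concrete, here is a minimal sketch of what such a scoring rule might look like. Everything in it, including the feature names, weights, and thresholds, is hypothetical and chosen only to illustrate the idea of an algorithm turning borrower data into an approve/deny decision, not taken from any real lender.

```python
# A toy, rule-based credit-scoring sketch. All thresholds and weights
# are made up for illustration.

def credit_score(credit_history_years: float, income: float, debt: float) -> float:
    """Combine a few borrower features into a single score in [0, 100]."""
    dti = debt / income if income > 0 else 1.0   # debt-to-income ratio
    score = 50.0
    score += min(credit_history_years, 10) * 2   # reward a longer credit history
    score -= dti * 40                            # penalize high leverage
    return max(0.0, min(100.0, score))

def decide(score: float, threshold: float = 60.0) -> str:
    """The 'automated' part: a fixed cutoff replaces human review."""
    return "approve" if score >= threshold else "deny"

s = credit_score(credit_history_years=7, income=50_000, debt=12_000)
print(s, decide(s))  # 54.4 deny
```

The point of the sketch is how little room it leaves for context: any factor not encoded as a number simply does not exist for the decision.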

Other examples exist as well, yet what I find more worth mentioning is our common reaction to these facts: it feels strange to have decisions made by an algorithm without human intervention. We are used to making decisions based on personal (and human) intuition and judgment, which can weigh factors that are rarely mathematically quantifiable. In addition, these algorithms are never perfectly accurate and are bound to make errors by construction. Thus, the following questions arise: is it fair and ethical to entrust automated decision-making with determining our fate? Are these decisions correct? What does “correct” even mean for a machine: risk minimization, or social-acceptance maximization? What makes machines reliable enough to base a decision entirely on AI? These and other critical questions need to be addressed to fully comprehend the impact that ADS will have on society in this era of unprecedented advancements.

In particular, the central vision I believe in, at this level of technological progress, is clear: the A in ADS should stand not for automated but for assisted decision-making. While AI can be a powerful tool to support and enhance human judgment, we need to recognize that it cannot replace it entirely, and relying solely on algorithmic decisions can be extremely limiting for several reasons.

Unveiling Automated Decision Making

First of all, AI algorithms are only as good as the data they are fed. If the data is biased, the algorithm will be too, so using ADS without a clear picture of the information the system is given may lead to bad consequences. A clear example is redlining, a practice adopted by American banks starting in the 1930s. Bear in mind that even though it was not AI, the techniques used were statistical and automated in spirit. The task was similar to the credit example quoted initially: if you are a bank deciding whether to grant a home loan or home insurance, how likely is it that the person will repay rather than default? Banks analyzed creditworthiness area by area, assigning lower scores to districts whose residents tended not to repay their debts. As one might easily infer, inner-city neighborhoods, where many African Americans lived, received lower scores than rich neighborhoods; hence, in the long run, districts with good scores became richer, more educated, and wealthier because they were given credit, while poor neighborhoods were impoverished even further. This shows how an initial bias in the data propagated significantly over the years, because decisions were based only on the data, without human oversight intervening where needed. Similar reasoning applies to other examples, like crime prevention, in which data on crime patterns, demographics, and high-risk areas are gathered to allocate resources (such as police patrols) more effectively. In fact, an ADS algorithm used to predict recidivism rates was recently found to be inaccurate and biased, leading to unjustified sentencing decisions. Hence, in a world where data is extremely messy and unclean, relying solely on ADS may lead to unwanted biases and unaccounted-for outcomes.
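To see the mechanism in isolation, here is a minimal, self-contained sketch with entirely made-up data: a “model” that simply memorizes historical approval rates reproduces whatever bias those historical decisions contained.

```python
# Bias propagation in miniature: a model fit on biased historical
# decisions reproduces the bias. All data below is fabricated.

from collections import defaultdict

# Historical decisions: (neighborhood, approved). The past lender
# systematically denied applicants from neighborhood "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def fit(records):
    """'Train' by memorizing per-neighborhood approval rates."""
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
    for area, approved in records:
        counts[area][0] += int(approved)
        counts[area][1] += 1
    return {area: ok / total for area, (ok, total) in counts.items()}

def predict(rates, area):
    """Approve whenever the historical approval rate exceeds 50%."""
    return rates[area] > 0.5

rates = fit(history)
print(rates)                # {'A': 0.75, 'B': 0.25}
print(predict(rates, "A"))  # True: area A keeps getting credit
print(predict(rates, "B"))  # False: area B keeps being denied
```

Real systems are far more sophisticated than this, but the feedback loop is the same: yesterday’s skewed decisions become today’s training data, and the skew compounds.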

Secondly, the lack of accountability increases the risk of trusting AI. Many decision-making tasks are so complex that black-box algorithms are used for them: the system produces a choice, but there is no transparency about how the final decision was reached, which makes the system harder to trust. In addition, the variables the algorithm considers when producing a decision may be unknown to humans. For example, a trading algorithm may act on market signals that humans are not aware of, leading to unexpected market behavior. As if this were not enough, when an algorithm’s mistake is so big that the consequences become serious, current AI regulation provides little guidance on responsibility, making it harder to hold individuals or organizations accountable for their actions.
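One partial mitigation, sketched below under the assumption that you control the code calling the model, is an audit trail: log every input, output, and model version so that decisions can at least be reconstructed after the fact. The function names here, including `black_box_model`, are hypothetical placeholders.

```python
# A sketch of an audit trail for automated decisions: every call is
# logged with its inputs, output, and model version so the decision
# can be reconstructed later. `black_box_model` is a hypothetical
# stand-in for whatever opaque model is actually in use.

import json, time, uuid

MODEL_VERSION = "v1.2.0"  # hypothetical version tag

def black_box_model(features: dict) -> str:
    # Placeholder for an opaque model; here, a trivial rule.
    return "approve" if features.get("score", 0) >= 60 else "deny"

def decide_with_audit(features: dict, log_path: str = "decisions.log") -> str:
    decision = black_box_model(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "inputs": features,
        "decision": decision,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON lines
    return decision

print(decide_with_audit({"score": 54.4}))  # deny, and a log line is written
```

Logging does not open the black box, but it gives auditors and regulators something concrete to examine when a decision is challenged.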

However, none of this implies that automated decision-making should be considered unethical a priori. Whenever a task is repeatable, can be made more efficient, and has an objectively correct answer, ADS can be the right way to go. For example, consider passport scanning. Using face recognition, airports can efficiently match travelers’ faces against their passport photos and spare passengers tedious procedures. Here the task is clearly repeatable and has a ground-truth answer (the face either matches or it does not), so ADS is well suited. Another example is agriculture, where ADS can be used to optimize crop yields and reduce waste. Here the decision is clear, repeatable, and unique: the farmer can simply follow what the algorithm says to get the best results, because the problem is well defined enough for the algorithm to interpret. But when a decision is not unique, that is when the A should stand for assisted, not automated. To give an extreme example, self-driving cars belong to this category. Yes, they would reduce accidents on the road, improve overall driving quality, and so on, yet, very simply put, the car’s decision-making should never be fully placed in the hands of an algorithm rather than those of the driver. There are ethical scenarios we cannot fully agree on among ourselves (MIT’s Moral Machine is an example of this), let alone a car that is supposed to be trained on human data and make the decision for us!
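Even in the “safe” cases, the assisted framing can be encoded directly. Below is a minimal sketch, with a made-up similarity model and thresholds, of a passport check that acts on its own only when it is confident and routes everything else to a human officer.

```python
# Assisted decision-making at a passport gate: the algorithm decides
# alone only when confident, and defers to a human otherwise. The
# similarity model and thresholds are fabricated for illustration.

def face_similarity(live_image, passport_image) -> float:
    """Hypothetical stand-in for a real face-matching model; returns
    a similarity score in [0, 1]."""
    # In practice: compute face embeddings and their cosine similarity.
    return 0.75  # dummy value so the sketch runs end to end

def passport_check(live_image, passport_image,
                   accept_above: float = 0.98,
                   reject_below: float = 0.50) -> str:
    sim = face_similarity(live_image, passport_image)
    if sim >= accept_above:
        return "open gate"             # confident match: automate
    if sim < reject_below:
        return "deny and alert staff"  # confident mismatch: automate
    return "route to human officer"    # uncertain: assist, don't decide

print(passport_check(live_image=None, passport_image=None))
# -> "route to human officer" for the dummy 0.75 similarity
```

The design choice is the middle band: rather than forcing every case into approve or deny, the system reserves the ambiguous ones for human judgment.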

Balancing Ethics and Efficiency

In general, current ADS lacks the ability to understand complex human emotions and social contexts. This can be particularly problematic where decisions are subjective and intuition and empathy are essential. Again, the Moral Machine serves as a great example, and so does the earlier criminal justice case. When a judge or jury has to sentence a person, factors like the offender’s personal history, mental health, and struggles may escape an algorithm’s understanding and should be weighed with human empathy and intuition. Still, it cannot be denied that when a low-risk, repeatable task can be automated, most people benefit from it. This is why ADS should mean assisted decision-making, not automated decision-making. The wave of AI progress we are experiencing is beneficial to decision-making, both mathematically and practically, as the previous examples illustrate, but over-relying on these instruments will erode the independent judgment that we have seen to be fundamental to the decision process. Automation should function as a supporting aid to decision-making, not as a replacement that fully automates it.

All in all, ADS has become an increasingly common reality in several fields, some of which were briefly mentioned above. While artificial intelligence can be an extremely powerful tool to support and enhance human judgment, it should not replace it entirely. For this reason, a more appropriate meaning for ADS is assisted decision-making, not automated: AI assists humans in their decision processes while leaving the final choice to them. This is crucial given the biases that algorithms can inherit from the data they are fed, as well as the problems with fairness, accuracy, and reliability that limit trust in these systems. Ensuring ethical and social values in the use of ADS is essential to avoid perpetuating social injustices. To conclude, as AI advances, it becomes increasingly critical to address these ethical and moral challenges to strike a balance in the responsible use of ADS, with humans at its core, not algorithms.
