AI is transforming society. Here’s what we can do to make sure it prioritizes human needs.

Ian Moura
Human-Machine Collaboration
18 min read · Mar 27, 2020

Claims that humans will become dependent on emerging technologies long predate the development of artificial intelligence (AI). However, AI reliance and dependence differ from human use of other types of technology. Unlike most other technology, AI is probabilistic, using the likelihood of different outcomes to make predictions and classifications. It’s also automated, creating distance between the outputs or results of artificial intelligence and the humans who act on that information, and making it easy for people with less technical knowledge to view artificial intelligence almost as a kind of magic. Furthermore, much of the AI in use is opaque, either by design (as when the technology used is not comprehensible to humans) or due to protection by intellectual property law, meaning that even people who have the requisite technical understanding may not be able to explain how a given algorithm reaches its conclusions.
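
To make the “probabilistic” part of that description concrete, here is a minimal sketch using scikit-learn and made-up data; the features, labels, and numbers are invented purely for illustration, not drawn from any real system.

```python
# A minimal sketch of probabilistic prediction, using scikit-learn and
# made-up data. The features and labels here are purely illustrative.
from sklearn.linear_model import LogisticRegression

# Hypothetical examples: two numeric features per case, with a binary label.
X = [[2, 1], [3, 5], [10, 2], [12, 8]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)

new_case = [[6, 4]]
# The model does not "know" the answer; it estimates a likelihood for each
# outcome and reports the more probable one as its prediction.
print(model.predict_proba(new_case))  # e.g., [[p_class_0, p_class_1]]
print(model.predict(new_case))        # the class with the higher estimated likelihood
```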

The ubiquity of artificial intelligence — coupled with the lack of clarity regarding where it is and is not in use, and the near impossibility of removing algorithms from one’s life — has ramifications for both individuals and for society as a whole. We discussed these issues at a meetup in February.

Putting AI in charge of tasks doesn’t necessarily make work easier for humans

When it is assumed that artificial intelligence will always provide accurate information and produce appropriate actions, the amount of attention humans dedicate to monitoring situations and context decreases. Unfortunately, this means that when things do go wrong (as they inevitably sometimes will), humans are more likely to be surprised and less equipped to respond effectively.

The widespread implementation of AI also creates new responsibilities, as humans not only have to complete tasks or parts of tasks which are not yet automated, but must also manage the artificial intelligence. In certain types of work, this change is coupled with pressure to complete tasks at the pace of automation, rather than at a human pace, which can create both physical and psychological stress. Instead of freeing humans to pursue aspects of work for which artificial intelligence is not well suited, such as tasks that involve creativity or contextualizing information, workers may instead face constraints resulting from the expectation that they meet productivity targets based on the speed at which machines can complete tasks.

Managing AI may also be less fulfilling than doing tasks that automation takes over, leading to negative psychological and emotional health consequences for human workers. Work, after all, provides more than just a paycheck. For many people, it is also a source of personal satisfaction and even a piece of their identity, particularly in individualist cultures.

Relying on AI decreases people’s ability to complete tasks themselves, especially when the stakes are high

Beyond changing the nature of work, artificial intelligence can change how humans work, resulting in detraining. Consider a complex task such as driving. Although driving is not an automatic process, most people who drive regularly do so often enough that they do not have to consciously think about every step involved in operating a vehicle (shifting gears, accelerating, braking, and so on). For new drivers, however, and for people who drive only rarely, driving requires considerably more mental effort.

Now imagine a situation in which people rely on autonomous vehicles rather than driving themselves. While autonomous vehicles might be safer than human drivers, there will inevitably be times when they fail — and in some of those situations, control of the vehicle will be handed back to a human operator. Because humans are expected to step in only when the automation fails, they not only lose a degree of the facility they once had with the tasks now assigned to AI, but are also expected to perform those tasks only in much higher-stakes situations.

The effects of detraining are not limited to dramatic examples, like having to suddenly take over operation of a vehicle on an icy roadway when the AI running it fails. Simply automating more of the process by which humans obtain information may lead to a reduction in the skills required to evaluate and analyze novel information. Making it easy to share information, in the absence of any incentive to verify the accuracy of that information, does not necessarily create a system where critical thinking skills flourish. Arguably, when algorithms that prioritize “engagement” are involved, critical thinking may be actively disincentivized. Strong emotions like anger and fear are extremely effective at keeping people engaged on a platform, whether that means posting, commenting, or otherwise interacting with content. However, those same emotions make it difficult for people to stop and carefully evaluate new information, or to adjust their worldview in light of changing evidence.

AI reliance doesn’t just impact human performance on physical tasks — it has implications for cognitive tasks, too

Even in the absence of social media and other platforms often implicated in the spread of misinformation, there are ways in which automation and artificial intelligence may impact human decision-making abilities.

On an individual level, widespread (and often unjustified) faith in technology can lead to automation bias and its effects. Automation bias, or the human tendency to disregard or fail to seek out contradictory information when a computer-generated solution is accepted as correct, can be particularly hazardous in time-critical decision-making contexts, especially when there are many changing external constraints.

Since the Industrial Revolution, automation has led more to human displacement than replacement; new technology eliminates the need for some jobs, but requires more labor in other areas. Because of the discrepancy between the kinds of tasks in which humans excel, and the things which can currently be automated, increasing the degree to which technology performs mundane, repetitive tasks may also increase the proportion of critical decisions humans are expected to make.

Good decision-making requires cognitive effort — more so when those decisions have significant consequences. Increasing the number of high-stakes decisions humans have to make can lead to impaired decision-making as a result of “decision fatigue.” Decision fatigue is not unique to technological environments; there is already a body of research that documents the impact of poverty on decision-making, showing that making a greater number of decisions that involve complicated economic trade-offs leads to a depletion of cognitive resources (and, by extension, poorer decisions).

Growing reliance on artificial intelligence may extend this type of situation to more contexts, as mundane tasks are automated and a greater proportion of human decisions are of significant consequence. Ironically, designing automation with the expectation that humans will be involved only in cases of significant system failure can create situations where people are particularly ill-equipped to make good decisions under pressure — and thus, may not be able to adequately prevent the consequences of AI’s failures.

Smaller individual concerns can add up to big problems for society

Though the various effects of artificial intelligence are too often discussed as though they are discrete problems, they act in concert and have significant implications for society. The race to automate tasks seen as “routine” can lead to a devaluation of human work, and is in many ways an extension of the ways in which certain types of human labor are already dismissed as “unskilled.” When more complex tasks are automated, the very necessary role humans play in facilitating and enabling the automation’s success is seen as something that exists only as a stop-gap measure. Humans, the assumption goes, will only be doing this work until the technology is perfected.

Interestingly, society has been anticipating the point at which humans will be automated out of work since the Industrial Revolution, and it hasn’t happened yet. But viewing human obsolescence as an eventuality has enabled the progressive undoing of labor rights and worker protections — after all, if the people doing the work that can’t be automated are only doing it short-term, why protect their rights? The implicit argument seems to be that the real threat is automation, and that there is no need to improve working conditions for workers whose jobs are doomed to disappear.

Discussions about “the future of work” — like many discussions about technology and its role in society — often stick to the passive voice, obfuscating the role of human actors. Technological advancement is not, in fact, an evolutionary process; it is the consequence of human choices. But, clearly, some people have more choice (and more say) in how technology develops than others.

Inequitable power over technological advancement plays out in multiple ways. On a personal level, as ethical products — and human-made, “artisan” products — become luxury goods, they also become less accessible to the majority of people. When purchasing power is construed as the primary way in which citizens have a say in corporate actions, those whose shopping choices are more significantly constrained or dictated by their budget don’t have the same ability to support businesses they feel are doing the right thing.

In other words, when “the market” is treated as the highest regulatory agent, people who cannot afford ethical options miss out. The emphasis on individual choice and collective consumer behavior, as opposed to external regulation of technology companies, leaves most people with little (if any) choice. Realistically, it is no longer possible for most people to opt out of the algorithms that determine so much of the content they see, and increasingly, contribute to decisions that may profoundly impact their lives.

Societal problems impact some people more than others

The people who are least likely to have a say in the creation of algorithms, or to be able to opt out of them, are also the most likely to suffer far-reaching harms as a result of them. Algorithms replicate patterns of inequality that exist in society more broadly. They also tend to be focused downward; that is, there are plenty of companies working to create algorithms that can more accurately predict which individuals might default on their loans, but few (if any) writing code that will enable more accurate prediction of white-collar crime among financial executives. Under the current system, corporations take little or no responsibility for using AI to track people and then manipulate their behavior or sell the information gathered to advertisers. The people with the most to lose, who already face discrimination and adversity in their daily lives, are also the least able to remove themselves from this process.

As large technology corporations continue to buy up competitors, limiting the availability of more ethical options, framing avoidance of algorithmic decision-making as a matter of “choice” or “opting out” allows these same companies to continue to abdicate responsibility. When faced with scrutiny of their policies and practices, some companies have introduced a wider range of user controls, particularly related to privacy and data protection — but these options are almost always hidden behind multiple interfaces, forcing users to click through several screens. In many cases, programs ask for more data and more access to people’s lives than they really need — for instance, consider the number of mobile apps that do not need a user’s camera or location to function, yet require access to it anyway — and changing settings to enable a higher degree of privacy and individual control often comes at the cost of the technology’s functionality. Furthermore, the confidence and ability to adjust such settings require a baseline level of tech savviness that many people do not possess.

Technological developments and changes in design exacerbate these issues

The creation of smartphones has led to dramatic changes not just in the kinds of technology people use, but also in the way that use manifests. Until comparatively recently, computers could perform a relatively small number of operations at a time. Opening one program often necessitated closing another, and limitations in processing power precluded the type of switching back and forth between multiple tasks in numerous software applications that is now standard practice for many people. Additionally, within living memory, computers have shrunk from room-sized to pocket-sized, and gone from being high-end research tools to possessions many people carry with them at all times.

Perhaps more important than changes in the size or power of computers, however, is the adoption of user-interface designs intended to be habit-forming. This approach is widely credited to B.J. Fogg, founder of the Behavior Design Lab at Stanford University, but it has roots in theories related to operant conditioning advanced by B.F. Skinner and other behavioral psychologists. Although there has been significantly more discussion and awareness regarding the drawbacks of “persuasive” design in the past several years, particularly when AI is involved, this does not yet appear to have led to a decrease in its adoption or use.

While society’s purported addiction to technology has received a great deal of attention in the last several years, until very recently there has been significantly less focus on the reasons that companies have for designing products that are “sticky,” particularly when those products are offered free of charge. However, in the wake of repeated corporate data collection scandals, the general public has, rightly, become increasingly concerned about both what data companies are gathering, and for what purpose. Unfortunately, even for people who are committed to avoiding products that collect or rely on data about their identity, preferences, and behavior, actually opting out of algorithms can be difficult, and can often require opting out of products entirely.

Regulation could mitigate these issues, but it needs to be created first

Artificial intelligence is, at present, subject only to self-regulation, particularly in countries without strong consumer protection and data privacy laws. Without national and international guidelines from external, independent consortiums or agencies, there is not yet a process for vetting corporate claims that a given product relies on or uses AI. There also remains considerable variation in how different “AI-based” technologies actually work, given that techniques including linear regression, decision trees, random forests, and unsupervised machine learning (among others) are all described, at least in some instances, as “artificial intelligence.”
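
As a hedged illustration of how loosely the label is applied, the sketch below fits three of the techniques named above to the same synthetic data; in the absence of a shared standard, any of them could plausibly be marketed as “AI-based.” The dataset and model settings are invented for the example.

```python
# Three techniques that have all, at some point, been described as "AI",
# fit to the same synthetic regression problem. Data and settings are
# invented purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1, size=200)

models = {
    "linear regression": LinearRegression(),
    "decision tree": DecisionTreeRegressor(max_depth=4, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=50, random_state=0),
}

for name, model in models.items():
    model.fit(X, y)
    # Without external standards, nothing determines which of these
    # may legitimately be labeled "artificial intelligence."
    print(f"{name}: R^2 on training data = {model.score(X, y):.2f}")
```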

Even if there were robust consensus regarding whether or not a given technique legitimately qualifies as AI, companies are currently largely in the business of regulating themselves, meaning that they have free rein to apply AI solutions to whatever problems they choose, without external limits on (or even discussion of) whether AI is really the appropriate tool to use. Without independent standards or oversight of the data used to train and test models, and no clear regulations regarding data use, there is little to prevent companies from replicating (and even amplifying) existing societal inequality with regard to factors such as race, gender, and disability.

The probabilistic, automated, opaque, and unregulated nature of artificial intelligence means it is harder to understand what it does and why it does it. It also makes it harder to predict failure, and difficult to examine and understand failures when they do occur. As AI is applied to more and more “high-stakes” scenarios — such as the use of algorithmic risk-assessment tools in healthcare settings, the reliance on recidivism algorithms in sentencing decisions, and the ongoing development of autonomous vehicles — documented cases of unanticipated, gravely consequential errors have also proliferated. However, these kinds of showy, catastrophic failures are really the tip of the iceberg in terms of the negative impacts that growing human reliance on artificial intelligence can have on society.

“Opting out” isn’t enough, because avoiding these technologies isn’t a realistic option

In a 2019 series of articles, journalist Kashmir Hill documented what happened when she removed products made by the “Big Five” tech companies — Amazon, Apple, Facebook, Google, and Microsoft — from her life. Her articles demonstrate not just that eliminating all five companies made many of her day-to-day tasks impossible, but that in many cases, removing even one company’s products caused significant disruption. While Hill’s experiment was particularly stringent — for example, she used a custom-built VPN to block not just Amazon’s consumer website, but also other websites that rely on Amazon Web Services (AWS) — her experience raises important points about just how entwined with society, and with individuals’ lives, these companies and their algorithms have become.

As Hill’s project demonstrates, even for people who make a conscious decision to limit the role of algorithms in their lives, it can be hard to avoid them. When alternatives to “unethical” tech products exist, they are often expensive, luxury items, which leads to a positioning of ethics themselves as a luxury, inhibiting structural change.

Even if people could avoid them, it can be hard to tell what is automated

There is, however, a bigger barrier to avoiding algorithms than their ubiquity and the lack of affordable, readily available alternatives: all too often, things are automated without widespread awareness that the change has even occurred. For example, when Netflix serves up recommendations, few people may consider that the process by which they are presented with viewing options relies on AI. In fact, it is often only when things go wrong that the broader public becomes aware of the ways in which algorithms are shaping, and sometimes limiting, their choices. It was only after repeated reports that YouTube’s algorithms were serving up disturbing and inflammatory content, for instance, that many people became aware of the role AI (and data about their past behavior on the site) plays in determining which videos are recommended to them.

In general, people are not consistently aware of what is and is not automated, particularly as automation encompasses more advanced types of artificial intelligence. This can play out in two ways. In some cases, people assume artificial intelligence is being used when it is not. This situation is exacerbated by the general level of hype around AI and the application of AI to scenarios that do not require it, and compounded by cases where humans are required to make an automated process run smoothly (or work at all).

On the flip side, people are not necessarily aware of artificial intelligence when it is in use. Collective assumptions about what it means for something to be “artificial intelligence” contribute to this, as there is a tendency to dismiss certain advancements in computing (including optical character recognition, rule-based decision algorithms, and “brute force” methods) as something other than “real” AI. As early as 1970, Larry Tesler summarized this phenomenon, now sometimes referred to as Tesler’s Theorem or “The AI Effect”, with the adage “Artificial intelligence is whatever machines haven’t done yet.”

When AI is everywhere, it can feel impossible to do anything but accept it

The suggestions and decisions offered up by artificial intelligence are readily available and accessible, meaning that people may accept them not because they represent the best option, but simply because they are more convenient. Similarly, in the absence of meaningful alternatives, people may accept the status quo of flawed AI and automation, even if they might prefer a different paradigm, or more control over how and when AI is part of their lives.

The challenges and risks of widespread reliance on AI can feel overwhelming, and it can seem as though there is little anyone can do to slow the inexorable progression of technology (or automation’s continued encroachment into daily life). Fortunately, there are actions each person can, individually, take in response to the issues highlighted in this article. Perhaps more importantly, there are steps that people can collectively take, as a society, to ensure that artificial intelligence (and technology in general) serves human interests first and foremost.

Personal choices that everyone can make

On a personal level, everyone can be mindful of retaining skills for those times when automation fails. Stay vigilant; don’t assume that just because something is facilitated by artificial intelligence, it will operate flawlessly without human intervention.

Additionally, rather than accepting an algorithm’s prediction or classification as objective truth, consider the method by which it was obtained. What data were likely used to draw this conclusion? What data might be missing? What contextually important information might not have been considered at all? Computers excel at computational accuracy, but when algorithms are used to make decisions that impact humans — such as who should receive a loan, who should get hired, and who should be granted parole — context matters. Remember that any computational model is just that — a model — and that it does not necessarily reflect the nuance and complexity that exist in a given scenario.
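
As a hypothetical sketch of that point (the “loan” framing, feature names, and numbers below are all invented), the same applicant can look quite different to two models depending on whether a contextually important feature was available during training.

```python
# A hypothetical sketch: two models for the same decision, one trained without
# a contextually important feature. All names and numbers are invented.
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [income, existing_debt], plus, in the second dataset,
# an extra context feature (months of stable income) the first model never sees.
X_narrow = [[40, 10], [45, 30], [80, 5], [85, 40]]
X_wide = [[40, 10, 36], [45, 30, 2], [80, 5, 48], [85, 40, 1]]
approved = [1, 0, 1, 0]

narrow_model = LogisticRegression().fit(X_narrow, approved)
wide_model = LogisticRegression().fit(X_wide, approved)

applicant = [60, 20]
# The two models can reach different conclusions about the same person,
# depending entirely on what data were available to each of them.
print(narrow_model.predict_proba([applicant]))
print(wide_model.predict_proba([applicant + [3]]))
```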

Everyone can also demand the ability to opt out — both individually and collectively. Laws like GDPR and CCPA are flawed, but they represent progress toward better regulation of data use and privacy. Support existing efforts to regulate privacy and data use, and encourage lawmakers to consider additional contexts that need regulatory legislation.

Furthermore, demand transparency. It’s important to know when data are being collected and how that data will be used, but that alone is not enough. Push for interpretable — not just explainable — models.

Support the work of researchers and organizations that democratize AI. Share visualizations, activities, and courses that explain algorithms in ways that people who do not code can understand, as well as programs designed to make careers in AI accessible to a more diverse group of people.

People who play a role in designing and implementing artificial intelligence can also take more direct actions

Conduct research to determine what problems need solving and how communities are already approaching them, before assuming that AI is the answer. Often, problems exist not because no one has ever tried to solve them before, but because they are complex and involve multiple interconnected issues.

Collaborate with the intended users of a product — and consider calling them something other than “users.” Remember that each person who uses your product is just that — a person, with a life and interests beyond their use of the technology in question.

Aim to design products that work with people, rather than intending for technology to be a substitute for humans. Collaboration extends beyond just research and testing; it should also be part of how technology operates.

Make the goals and tasks of user interfaces explicit, and consider designing in a way that forces humans and machines to work together to accomplish tasks — thus decreasing the risk of over-reliance, and of unfortunate surprises when the automation fails or runs into problems.

Create interfaces that support people in developing an appropriate level of trust in AI. Be explicit about when artificial intelligence should not, necessarily, be trusted. For example, show when the outputs from a model contain uncertainty, and how that uncertainty might manifest in a real-world context. Consider using visual features like opacity, size, and color to represent accuracy and reliability. Provide clear explanations of how algorithms were developed, how they arrive at conclusions, and what they might fail to take into account.
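
One minimal sketch of this idea is to translate a model’s confidence for a single output directly into display properties. The thresholds, wording, and helper name below are assumptions made for illustration, not an established pattern.

```python
# A sketch of surfacing model uncertainty in an interface. The thresholds,
# wording, and the helper name are assumptions made for illustration.

def confidence_to_display(probability: float) -> dict:
    """Translate a class probability into hypothetical UI hints."""
    if probability >= 0.9:
        badge = "high confidence"
    elif probability >= 0.7:
        badge = "moderate confidence; review recommended"
    else:
        badge = "low confidence; treat as a suggestion only"
    return {
        "opacity": round(probability, 2),  # render shakier outputs more faintly
        "badge": badge,
    }

print(confidence_to_display(0.95))
print(confidence_to_display(0.62))
```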

Additionally, both developers and companies should consider whether AI is really necessary before deciding to rely on it. Identify when the benefits of AI outweigh its downsides, and incorporate it only in those situations.

Use interpretable models. While certain types of opaque architecture, like deep neural networks, are touted for their accuracy, the same level of performance is frequently obtainable using transparent methods — particularly when appropriate care is taken in selecting, cleaning, and structuring data, and, if necessary, iterating models.
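
As a hedged sketch of that claim, an interpretable model and a more opaque one can be compared directly under cross-validation. This uses a small dataset bundled with scikit-learn; results on any real problem will vary.

```python
# A hedged comparison on a small bundled dataset: an interpretable model
# versus a more opaque ensemble. Results on real problems will vary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
opaque = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("logistic regression (interpretable)", interpretable),
                    ("random forest (less transparent)", opaque)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```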

As a society, we can also take action collectively

Finally, it is critical to establish governance of artificial intelligence on a national and international level. This cannot just take the form of self-regulation or public-relations-focused AI committees. Rather, it is past time for the creation of independent oversight for the development and use of AI. Such an oversight body should establish and enforce norms on the creation and deployment of AI, and work to ensure that corporate incentives do not discourage responsible AI development and use.

Better oversight of the ways in which artificial intelligence and technology in general are present in daily life is long overdue. Although AI and automation serve important functions in numerous situations, it is unrealistic to expect each individual to regulate AI for themselves. Similarly, emerging governmental laws and policies to better establish standards for consumer data protection are a good start, but insufficient. While artificial intelligence has the potential to support humans and expand on their strengths, it can only do so if steps are taken to ensure that individual humans — and society as a whole — are not treated as subservient to technological advancement.

About the Human-Machine Collaboration Publication and the Berkeley AI Meetup

Preparing and equipping humans to work and live with machines is far easier when creating those machines involves thoughtful consideration of human abilities and human needs. Given our interest in these issues, Bob Stark and Ian Moura decided to create a discussion group for the purpose of research and problem-solving through the Berkeley AI meetup group. This Medium publication summarizes the background information that we cover in our meetings.

References and Recommended Reading

Amershi et al. (2019). Guidelines for Human-AI Interaction. https://www.microsoft.com/en-us/research/uploads/prod/2019/01/Guidelines-for-Human-AI-Interaction-camera-ready.pdf

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Aysolmaz, B., Dau, N., Iren, D. (2020). Preventing Algorithmic Bias in the Development of Algorithmic Decision-Making Systems: A Delphi Study. https://scholarspace.manoa.hawaii.edu/bitstream/10125/64390/1/0521.pdf

Bainbridge, L. (1983). Ironies of Automation. https://www.ise.ncsu.edu/wp-content/uploads/2017/02/Bainbridge_1983_Automatica.pdf

Barabas et al. (2020). Studying up: reorienting the study of algorithmic fairness around issues of power. https://dl.acm.org/doi/abs/10.1145/3351095.3372859

Benjamin, R. (2019). Race After Technology. https://www.amazon.com/Race-After-Technology-Abolitionist-Tools/dp/1509526404

Carabantes, M. (2019) Black-Box Artificial Intelligence: An Epistemological and Critical Analysis. https://sci-hub.tw/10.1007/s00146-019-00888-w

Cummings, M.L. (2004). Automation Bias in Intelligent Time Critical Decision Support Systems. https://web.archive.org/web/20141101113133/http://web.mit.edu/aeroastro/labs/halab/papers/CummingsAIAAbias.pdf

Eubanks, V. (2018). Automating Inequality. https://www.amazon.com/Automating-Inequality-High-Tech-Profile-Police/dp/1250074312

Gray, M.L. & Suri, S. (2019). Ghost Work. https://www.amazon.com/Ghost-Work-Silicon-Building-Underclass/dp/1328566242

Hollnagel, E., Woods, D.D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. https://www.amazon.com/Joint-Cognitive-Systems-Foundations-Engineering/dp/0849328217

Lee, J.D., See, K.A. (2004). Trust in Automation: Designing for Appropriate Reliance. https://pdfs.semanticscholar.org/8525/ef5506ece5b7763e97bfba8d8338043ed81c.pdf

Lyell, D. & Coiera, E. (2016). Automation Bias and Verification Complexity: A Systematic Review. https://academic.oup.com/jamia/article/24/2/423/2631492

Mani, A., Mullainathan, S., Shafir, E., Zhao, J. (2013). Poverty Impedes Cognitive Function. https://pdfs.semanticscholar.org/a4af/e0a27f860ffac573c0f2ae9f9a3b8c9ad456.pdf

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. https://www.nature.com/articles/s42256-019-0114-4.pdf

Nushi et al. (2020). How to build effective human-AI interaction: Considerations for machine learning and software engineering. https://www.microsoft.com/en-us/research/project/guidelines-for-human-ai-interaction/articles/how-to-build-effective-human-ai-interaction-considerations-for-machine-learning-and-software-engineering/

Rich, A.S., Gureckis, T.M. (2019). Lessons for artificial intelligence from the study of natural stupidity. https://sci-hub.tw/10.1038/s42256-019-0038-z

Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models For High Stakes Decisions and Use Interpretable Models Instead. https://www.nature.com/articles/s42256-019-0048-x.pdf

Rudin, C., Radin, J. (2019). Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition. https://hdsr.mitpress.mit.edu/pub/f9kuryi8

Sarter, N.B., Woods, D. D., Billings, C.E. (1997). Automation Surprises. https://pdfs.semanticscholar.org/f4c7/caebecd0f1b42d1eb8da1061e464fcccae11.pdf

Spears, D. (2010). Economic Decision-making in Poverty Depletes Behavioral Control. https://www.princeton.edu/ceps/workingpapers/213spears.pdf

Thomas, R., Uminsky, D. (2020). Reliance on Metrics is a Fundamental Challenge for AI. https://arxiv.org/ftp/arxiv/papers/2002/2002.08512.pdf

Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability. https://dl.acm.org/doi/pdf/10.1145/3351095.3372833?download=true
