The future of AI: human judgement, model transparency, and dedicated governance for better decisions, job creation, and reduced bias

Maximiliano Santinelli
GAMMA — Part of BCG X
7 min read · Aug 26, 2019


Perspectives from Academy of Management 2019

Interest in artificial intelligence is gaining momentum among management scholars, reflecting the growing number of AI business applications entering the market. The 2019 edition of the Academy of Management (AOM) annual meeting presented attendees with the most current research performed by management scholars on the business impact of Artificial Intelligence.

Particularly large and engaged audiences attended the many AOM 2019 sessions focused specifically on AI. These sessions analyzed AI’s impact on three main areas of interest:

  1. Organizations: The opportunities and threats of AI applications in organizational decision-making.
  2. Research: The use of AI to identify patterns that help define and validate theories, and the use of neural networks as representations of agents and their interactions within an organization.
  3. Teaching: How to integrate AI in business and management academic curricula to make sure future managers have the appropriate skills (technical, ethical, etc.) to drive AI adoption and use.

In this article we focus on the current status and findings of management research on the impact of AI on both organizations and the people they employ.

AI’s impact on organizations, jobs, and employment

AI solutions are evolving from sophisticated applications able to play Jeopardy!, chess, and other games to autonomous agents that support and drive decisions. Thanks to AI, organizations are able to generate accurate, large-scale predictions through the analysis of an unprecedented amount of data processed at the transactional level. This leads to better decisions than those possible when using traditional methods based on descriptive indicators and statistical analyses of aggregate data. Academic research has begun in earnest to analyze the opportunities and threats of this ongoing evolution.

Consensus is emerging that Artificial Intelligence can be considered a “general purpose technology” (GPT). This is an important recognition of the potential impact of AI. GPTs are technologies that can result in the creation of multiple products and services, with long-lasting impact on various areas of society and the economy. (Electricity and information technology are just two examples of GPTs that have emerged in the past.) Many scholars argue that for the foreseeable future AI will reinforce a trend already observed with IT-driven automation and digitization.

Consider, as a comparable case, the impact on bank tellers and bank branches caused by the introduction of ATMs in the 1970s and 1980s. Rather than completely replacing bank tellers, ATMs reduced the costs of operating a branch and enabled banks to open more branches with reduced footprints. Although each branch had a smaller headcount, the net effect was the opening of more branches and, hence, a growing demand for bank tellers. At the same time, bank tellers’ core skills evolved. With some of the more mundane tasks performed by ATMs, tellers were able to develop skills complementary to the functionality offered by the machines. Rather than simply handling cash deposits and withdrawals, they acquired the soft skills required to manage customer relations and the technical skills needed to handle more complex transactions.

Nor will AI entirely replace humans in the execution of many other jobs. As was observed with bank tellers, there is a growing number of studies showing that AI has a positive impact on worker productivity and compensation through the “augmentation” of human capabilities and judgement. (The actual impact on occupational levels will depend on the individual worker’s ability to develop complementary skills not provided by the new technology. [1])

Another observed AI trend is “job hybridization,” in which traditional jobs are enhanced by new skills or entirely new jobs are created as a combination of emerging skills. Examples of both types of hybridization can be analyzed through job data provided by Burning Glass.

The “job enhancement” type of hybridization is exemplified by the evolving role of “marketing manager.” The traditional profile for this position carries an average annual compensation of $76K and takes 28 days to fill with the right candidate. A marketing manager with SQL skills increases the value of the position, raising the average annual compensation to $101K and extending the placement period to 45–50 days. [2]

Job hybridization itself has led to the creation of new positions such as “machine learning engineer.” This professional profile draws skills from data science (Machine Learning, Deep Learning, TensorFlow, etc.) and computer science (Software Development, Java, C++), with the goal of driving the implementation of AI solutions at scale. Demand for this job increased by 487% between the first half of 2016 and the first half of 2019. Demand for Data Scientists, though still high, increased by a comparatively modest 123% over the same period. [3]

Ethical implications of AI-driven decisions: Bias and discrimination

On the other hand, the intrinsic complexity and non-linearity of most AI models make them opaque with respect to the drivers and rationale of individual predictions and recommendations. This opacity raises concerns that, without appropriate controls and critical thinking, AI solutions could have a downside, leading to biased and discriminatory decisions.
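One family of remedies is model-agnostic explanation techniques, which probe a trained model from the outside rather than reading its internals. Below is a minimal, illustrative sketch of one such technique, permutation importance, using scikit-learn; the dataset is synthetic and the feature names are hypothetical, not taken from any study discussed here.

```python
# Minimal sketch: permutation importance as one way to peek inside an
# otherwise opaque model. Data and feature names are synthetic/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # stand-in features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by columns 0 and 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# large drops flag the inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "utilization", "age"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Global importance scores like these do not explain individual predictions, but they give managers a first check that a model relies on legitimate drivers rather than on proxies for protected attributes.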

But even though these concerns are pertinent, and supported by several incidents widely reported in the media, the overall trend appears to be positive. Ongoing research shows that the adoption of AI-informed decision making actually reduces bias compared to traditional settings in which decisions are made solely by humans.

Bartlett et al. (2019) found that for the purchase and refinancing of mortgages, online lenders still charge higher rates to minority borrowers, but the premium is 40% smaller than what minority borrowers typically face in face-to-face lending. [4] Cowgill (2018) showed that, compared with a traditional resume-screening process, a process driven by machine learning selected candidates who were 14% more likely to pass interviews and receive an offer, 18% more likely to accept an offer, and more productive on the job.

Even so, the improvements observed in AI versus traditional settings do not eliminate the risk of biased decisions. In recent years, media and public attention has been drawn to cases in which the use of machine learning models has resulted in:

  • Predictions of recidivism that discriminated against specific ethnic and age groups [5]
  • Gender-biased hiring decisions that undervalue the role of women in the workplace [6]

Despite AI’s otherwise good track record, examples like these clearly demonstrate the need for in-depth assessments that will enable managers to understand the root causes of these biases. Only by getting at the fundamental causes will the biases be effectively monitored, mitigated and, hopefully, eliminated.

Multiple sources of algorithmic bias exist [7], each requiring specific mitigating actions. The most common sources, and their potential remediations, can be summarized as follows:

  • Biased data and algorithms: If past decisions made by humans were biased, the historical data used for model training and validation will be biased as well. Similarly, if discriminatory data or statistically biased algorithms are used, the outcomes may be discriminatory too. In these cases, one action item is to make sure that data scientists have received appropriate technical training on how to deal with biased data and algorithms, and with potentially discriminatory predictors.
  • Biased developers and decision makers: Data scientists, software developers, executives, and end users may introduce biases in the process of developing and using AI solutions. Appropriate training on AI techniques and ethics should be considered. Companies should also consider introducing additional governance and policies specifically for the use of AI.
  • Changes in model response and performance: Before a model is deployed and used to make decisions, controls should be put in place to continuously monitor the model’s performance. These controls should also track the risk of making biased predictions and decisions; a minimal monitoring sketch follows this list.
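As a concrete illustration of the monitoring control in the last bullet, the sketch below computes per-group selection rates and a disparate-impact ratio on a model’s outputs. The predictions and the protected attribute are synthetic, and the “four-fifths” threshold is a common rule of thumb from US employment practice rather than a universal standard.

```python
# Minimal monitoring sketch (synthetic data): compare a model's positive-
# prediction rate across groups and flag a potential disparate impact.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Stand-in scoring batch: binary predictions plus a protected attribute.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=500)
y_pred = rng.binomial(1, np.where(group == "A", 0.45, 0.30))

ratio = disparate_impact(y_pred, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Warning: selection rates differ enough to warrant review.")
```

Run on every scoring batch, a check like this turns the abstract requirement to track the risk of biased predictions into a measurable alert that can feed a company’s AI governance process.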

The implementation of AI into the business decision-making process is still in its relative infancy. As evidenced at AOM 2019, ongoing research is continuing to improve understanding of the actual opportunities and threats accompanying the increased use of AI in business.

Research shows that AI-based decision models have the potential to increase the number of jobs available in certain professions, to improve the skills of the professionals who fill those positions and, in many cases, to reduce bias and discrimination [8]. But these algorithms are far from perfect, and their inner workings remain opaque to the average user. Companies that are expanding their use of AI need to put in place dedicated AI governance and policies and to adopt techniques that increase the transparency of AI algorithms. Only then will the economy of the future be fair and open to all.

References

[1] See the following research papers:

  • Erik Brynjolfsson and Tom Mitchell, “What can machine learning do?”, 2017
  • Erik Brynjolfsson, Tom Mitchell and Daniel Rock, “What Can Machines Learn, and What Does It Mean for Occupations and the Economy?”, 2018
  • Bo Cowgill, “Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening”, 2018
  • Edward Felten, Manav Raj and Robert Seamans, “The Variable Impact of Artificial Intelligence on Labor: The Role of Complementary Skills and Technologies”, 2019

[2] Burning Glass data reported during a session at AOM 2019

[3] Burning Glass data, BCG analysis

[4] Robert Bartlett, Adair Morse, Richard Stanton and Nancy Wallace, “Consumer-Lending Discrimination in the FinTech Era”, 2019

[5] https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[6] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[7] David Danks and Alex John London, “Algorithmic Bias in Autonomous Systems”, 2017

[8] Bo Cowgill, “Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening”, 2018
