Artificial Intelligence — To fear or not to fear..?
While Artificial Intelligence as an academic field was founded as long ago as 1956, the term Artificial Intelligence, more commonly referred to as AI, is a relatively new term in the lexicon of most people. For many, the images conjured up when hearing the term are those of robots, self-driving cars and maybe their smartphones. These are indeed good examples of AI implementations, but they are only a small fraction of what AI does today and of what experts in the field say AI will be capable of in the future. Today’s AI, however, is not your father’s AI; it is not even the AI of five years ago. Today there are eight rapidly developing, scientifically recognized AI application domains where the latest advances in AI algorithms and computing power are being brought to bear. These are Transportation, Home/Service Robots, Healthcare, Education, Low-resource Communities, Public Safety and Security, Employment and Workplace, and Entertainment.
Check back in 12 months and this list will almost certainly have expanded…
The latest and most impactful generation of advanced AI algorithms in these domains includes Machine Learning (ML), Natural Language Processing (NLP) and Image Recognition (IR). It is this current generation of Artificial Intelligence technology that is beginning to permeate many aspects of business and our personal lives. Common examples of these increasingly pervasive technologies include the AI home agents Siri and Alexa, by Apple and Amazon respectively. The voice recognition algorithms of both make use of the latest natural language algorithms, advanced computing power and massive amounts of data to interact with other internet-connected devices in the home and across the internet to serve up information. This same voice recognition and translation technology, along with image recognition, is now being deployed in the current generation of smartphones and will likely soon find its way into many of the other devices we interact with every day — cars, appliances, tools, personal computers and so on. Indeed, early voice and image recognition capabilities are already available in the latest versions of Microsoft’s Windows and Apple’s iOS operating systems.
Perhaps more important, though, are the most recent advances in AI’s highly reliable image interpretation, which are leading to earlier detection and diagnosis of human and crop diseases on both micro and macro scales.
It’s only natural for people to try to relate this new technology to how it may impact them personally. Others will go further and contemplate the implications beyond themselves.
How will these technologies affect my family, my friends, my community, our country, and the world as I understand and perceive it today?
With little more than a novice understanding of robots and self-driving cars, along with these latest voice and image recognition advances, it is easy to see how the uninformed, the imaginative and those with only a basic grasp of AI might worryingly contemplate their future fate in a world of artificially intelligent machines.
The most common perceptions people have of AI today include visions of a future where AI-driven machines steal jobs from humans and, by extension, pose an existential threat to mankind. These perceptions are too often fueled by sensational journalism.
The concerns and fears resulting from these perceptions are misplaced and have been formed by a lack of understanding of the technology. Artificial Intelligence does not pose a threat to jobs or to the existence of mankind. Instead, AI represents one of the greatest all-time technological leaps forward for mankind.
With the introduction of nearly every significant technological advancement during this Modern Era came doomsday cries from the public and sometimes from within science. The introductions of electricity, motorized transportation, flight, mechanized agriculture, computers, space travel and robotics all came with their fair share of detractors and doomsayers. In every case there were those who failed to grasp the significance of the technology and its potential for improving the human condition. Instead, these people often seized upon their fears and the hysteria of others and viewed these technologies as a threat to their jobs and, in some cases, to civilization itself. Unfortunately, the introduction today of the latest advancements in Artificial Intelligence is being received with similar sentiments. A study conducted by the AAAI (Fast and Horvitz 2017) shows that public and scientific optimism toward AI has improved since 2009; at the same time, however, pessimism about large-scale job loss due to AI has increased.
When it comes to AI and the public’s concern over job loss related to its pervasive encroachment into work and daily life, pundits have in many regards failed to deliver a compelling case for greater public optimism.
Most experts agree that automation, driven by its artificial intelligence component, will result in significant change and disruption in the job market. Where opinion divides, however, is on the question of AI’s relative impact on actual job loss. Whereas some are convinced that millions of jobs will be lost, never to return, others maintain that jobs will mostly shift to areas where automation and AI cannot offer efficiencies in time, cost or quality. Given the velocity of AI’s current advancements, and the public’s growing awareness of them in ever more aspects of everyday life, it’s easy to understand why this divide in opinion exists.
Author James Surowiecki writes in his September 2017 Wired magazine article, Robopocalypse Not, “It’s a dramatic story, this epoch-defining tale about automation and permanent unemployment. But it has one major catch: There isn’t actually much evidence that it’s happening.” He points out that if automation were already transforming the economy as some proclaim, two indicators would hold true:
1.) Aggregate productivity would be rising sharply.
2.) Jobs would be harder to come by than in the past.
Regarding aggregate productivity, Surowiecki explains that with the exponential efficiency gains automation provides, we should already be seeing trends toward higher productivity. In the article he cites national economic statistics that show no such rise; in fact, he says the data show that over the last 10 years U.S. productivity has been “dismally low” compared with prior decades.
Concerning the second indicator, current and trending unemployment levels, Surowiecki highlights the fact that national unemployment is low, below 5% and back at pre-2008 Great Recession rates, and that many states are reporting labor shortages, not surpluses. He also notes that wages, while paltry, are actually rising faster than both inflation and productivity.
As a bonus data point, Surowiecki offers a third indicator well worthy of consideration: something economists call “job churn”, the movement of people from job to job and company to company as jobs are lost — read, in the current context, job displacement related to automation. Citing an article in the Information Technology and Innovation Foundation journal, he provides evidence that job churn is at historic lows, far below the period immediately preceding the Great Recession. Median job tenure today, he says, is actually similar to what it was in the 1950s, an era economists regard as the pinnacle of job stability.
Surowiecki points to two historical trends that have likely influenced the public’s false perception that jobs have already been lost to automation. He notes that between 2000 and 2009 over 6 million U.S. manufacturing jobs were lost and wage growth across the economy stagnated.
During the same period, industrial robotics was becoming more widespread, the internet was transformational and AI became much more useful.
He states that as a result, “It seemed logical to connect these phenomena: Robots had killed the good-paying manufacturing jobs, and they were coming for the rest of us next.” Not so fast though. He points out that at this very same time China entered the World Trade Organization and massively ramped up manufacturing production capacity. He states that it is this, not automation, that really devastated American manufacturing. He cites respected economist Dean Baker on this point:
“If you want to know what happened to manufacturing jobs after 2000, the answer is very clearly not automation, it’s China.”
The author agrees that automation may indeed affect the kind of work people do but advises that at the moment, it’s hard to see that it’s leading to a world without work.
Respected AI expert Andrew McAfee of MIT is quoted by Surowiecki on this point. Revising his prior position and public statements on the subject of automation and jobs, McAfee said:
“I would put more emphasis on the way technology leads to structural changes in the economy, and less on jobs, jobs, jobs. It’s the shift in the kinds of jobs that are available.”
Surowiecki sums up the paradox: we’re afraid of two contradictory futures at once. On the one hand, we’re told that robots are coming for our jobs, that their superior productivity will transform industry after industry, and that if this happens, economic growth will soar and society as a whole will be vastly richer. At the same time, he points out, we’re told that we are in an era of stagnation, stuck with an economy doomed to slow growth and stagnant wages, and that in this world we will need to worry about how to support an aging population and pay for various health and social programs because we won’t be much richer in the future than we are today. Both of these futures are possible, he says, but they can’t both come true.
A common opinion on this topic is echoed by many experts in the field of Artificial Intelligence: AI will indeed impact the kinds of work people do today, but those people will become more valued for judgment- and creativity-related work and less for predictive, task-based work. AI can largely take over predictive tasks, but it cannot replace human judgment or creativity. This difference in job skills is a very important distinction, and it speaks to what it will take to correct the public’s perception that jobs are being lost to Artificial Intelligence.
In their paper titled “What to Expect from Artificial Intelligence,” authors Agrawal, Gans and Goldfarb draw the distinction between what AI does well and what humans do well, and explain how the two are complementary in nature.
Tasks where the desired outcome can be easily described, and where there is limited need for human judgment, are generally easier to automate. For other tasks, describing a precise outcome is more difficult, particularly when the desired outcome resides in the minds of humans and cannot be translated into something a machine can understand. The authors advise that in cases where whole decisions can be clearly defined with an algorithm, we can expect to see computers replace humans. They go on to state that the future’s most valuable human job skills will be those complementary to prediction-focused AI agents — in other words, skills related to judgment and creativity. To adopt AI successfully, training and education for humans will need to shift from a focus on prediction-related skills to one of judgment- and creativity-related skills.
Jeanne Ross, principal research scientist at MIT’s Center for Information Systems Research, states in her July 14, 2017 MIT blog article, The Fatal Flaw of AI Implementation, that researchers and practitioners are finding that AI applications augment, rather than replace, human efforts, and that in doing so they demand changes in what people are doing. She finds that AI eliminates many non-specialized tasks while creating skilled tasks that require good judgment and domain expertise — a human skill. Reinforcing the position of Agrawal, Gans and Goldfarb that the future’s most valued human labor skill will center on judgment, Ross stresses that in a world of AI, companies will need people who can use probabilistic output to guide actions that make the company more effective.
In Stanford University’s One Hundred Year Study titled, Artificial Intelligence and Life in 2030, a highly distinguished panel of authors predict that AI will likely replace certain tasks rather than whole jobs in the near term, and will also create new kinds of jobs. They highlight that new jobs are harder for us all to imagine in advance than the existing jobs that will likely be lost.
The study stresses that the measure of success for AI applications is the value they create for human lives and that the ease with which people use and adapt to AI applications will likewise largely determine their success.
As Microsoft CEO Satya Nadella points out in his new book, Hit Refresh, when speaking of AI, “the trajectory of AI and its influence on society is only beginning and while there is no clear road map for what lies ahead, in previous industrial revolutions we’ve seen society transition, not always smoothly, through a series of phases.” Nadella describes how these phases follow very predictable patterns rooted in historical precedent. First, we invent and design the technologies of transformation, which is where we are today with AI. Second, we retrofit for the future, a phase we’ll be entering shortly — imagine, if you will, retrofitting the tens of millions of automobiles on the road today to be driverless. Third, we navigate distortion, dissonance, and dislocation, as jobs evolve and reinvent themselves to meet the expected and unexpected roles created. It’s this third and final phase that speaks most directly to the question of jobs. To be sure, this phase will raise many new and unanswered questions.
Fortunately for us all, a historical framework exists for proactively and collaboratively addressing many of the concerns already being raised and for establishing a path forward that, while not free of trepidation, can help more effectively ease the world’s population into what I will term the “New Age of Artificial Intelligence”.
This new age of Artificial Intelligence, most experts believe, is perhaps the greatest technological advancement of our time. The foreseen and unforeseen applications for AI seem limited only by our individual and collective imaginations. Its potential for good for all of mankind is extraordinary. Yet as with any new advancement, especially one in this digital age, the opportunities for nefarious and corrupt uses are equally real. As much as we appreciate the value of technology, we are reminded almost daily, it seems, of its ills. We should be mindful of this and ensure that, while on the cusp of these monumental changes, we seize the opportunity to build bulwarks against the more sinister uses of AI. As good software designers and security engineers are often quoted as saying of secure design: better to bake it in now than to tack it on later.
Because of its anticipated far-reaching impact on society, the Stanford One Hundred Year Study on Artificial Intelligence was commissioned. The study, as described by the distinguished panel, is a long-term investigation of the field of Artificial Intelligence and will study its influences on people, their communities, and society. It was chartered to provide the world with broad insight into the field and to develop expert-informed guidance for political, business, scientific, and civic leaders on AI’s advancements, societal challenges and opportunities. It will also make policy recommendations for AI’s safe and effective use, with the purpose of benefiting the greater good of all mankind. A noble and worthy endeavor to be sure, yet nothing short of an absolute imperative if we are to be successful in our embrace of Artificial Intelligence.
We’ve seen how current perceptions surrounding the public’s fear of Artificial Intelligence have been shaped by historical events. We have also seen that this is not an uncommon occurrence in the public’s thinking when revolutionary, society-changing technologies arrive on the scene. Protecting how we work and interact with the world is a human trait important to survival. As such, with AI’s disruptive introduction to society we should expect people to question its long-term impact on our jobs and on human existence.
Over the last 20 years, Artificial Intelligence has had neither a negative impact on jobs nor any appreciable overall efficiency effect on the U.S. economy — both key markers in which we would expect to see changes if it were having an impact.
Furthermore, today’s experts in the field mostly agree that AI is largely evolving as complementary to the work humans do, not a replacement for it. Admittedly, they also point out that we can’t yet predict the longer-term impacts AI will have on the types of jobs people will hold. They do seem to agree, however, that the workforce needs to begin focusing training on judgment- and creativity-based skills as opposed to predictive, task-based skills.
We’ve learned that these same AI experts are working with a cross-section of industry, government, academic, civic, business and technology leaders to steer the field of Artificial Intelligence and its applications in a direction that serves humankind in positive and predictable ways, to the benefit of society as a whole.
While we can all agree that Artificial Intelligence will be disruptive and impactful to jobs, jobs largely will not be lost to AI. Instead, AI will evolve to be complementary to the work we humans do. Writing of the history made at Kitty Hawk, Nadella said:
“It was man with machine — not man against machine.”
Today we don’t think of aviation as artificial flight; it’s simply flight. In the same way, we shouldn’t think of technological intelligence as artificial, but rather as intelligence that complements human capabilities and capacities. Nor should we fear that it will take our jobs or, absent adaptation, threaten our very existence.