By Irving Wladawsky-Berger
Artificial intelligence is rapidly becoming one of the most important technologies of our era. Every day we can read about the latest AI advances from startups and large companies. Over the past few years, the necessary ingredients have come together to take AI across the threshold: powerful, inexpensive computer technologies; huge amounts of data; and advanced algorithms, especially machine learning. Machine learning has enabled AI to get around one of its biggest obstacles — the so-called Polanyi’s paradox.
Explicit knowledge is formal, codified, and can be readily explained to people and captured in a computer program. Tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi, is by contrast the kind of knowledge we're often not aware we have, and is therefore difficult to transfer to another person, let alone capture in a computer program.
“We can know more than we can tell,” said Polanyi in what’s become known as Polanyi’s paradox. This common sense phrase succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge.
Tacit knowledge is best transmitted through personal interactions and practical experiences. Everyday examples include speaking a language, riding a bike, and easily recognizing many different people, animals and objects.
Machine learning, and related advances like deep learning, have enabled computers to acquire tacit knowledge by being trained with lots and lots of sample inputs, thus learning by analyzing large amounts of data instead of being explicitly programmed. Machine learning methods are now being applied to vision, speech recognition, language translation, and other capabilities that not long ago seemed impossible but are now approaching or surpassing human levels of performance in a number of domains.
As its range of applications continues to expand, machine learning (ML) is raising serious concerns about its impact on automation and the future of work. In "What can machine learning do? Workforce implications," an article recently published in Science, MIT professor Erik Brynjolfsson and CMU professor Tom Mitchell explore this question by analyzing which tasks are particularly suitable for ML, as well as its expected impacts on the workforce and the economy. [Read the full research paper here, and watch Brynjolfsson explain the concept in this video].
Here is a recap of the Science article:
Which tasks are most suitable for machine learning?
Machine learning systems are not equally suitable for all tasks. ML has been most successful when applied using supervised learning and deep learning algorithms, which require very large amounts of carefully labeled training data — e.g., cat vs. not-cat. While very effective in such domains, the authors remind us that ML systems are significantly narrower and more specialized than humans. There are many tasks for which they're completely ineffective given the current state of the art.
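The core idea of supervised learning — learning a mapping from labeled input-output pairs rather than being explicitly programmed with rules — can be sketched in a few lines. This is a minimal illustration, not anything from the Science article: the nearest-centroid model and the two "whisker/bark" feature scores are hypothetical stand-ins for the far richer features and models real systems use.

```python
# A minimal sketch of supervised learning: a nearest-centroid classifier
# trained on labeled (features, label) pairs. The features and data are
# hypothetical; the point is that the mapping is learned from examples,
# not hand-coded as explicit rules.

def train(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy labeled data: (whisker_score, bark_score) -> "cat" / "not-cat"
training_data = [
    ([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"), ([0.95, 0.05], "cat"),
    ([0.2, 0.9], "not-cat"), ([0.1, 0.8], "not-cat"), ([0.3, 0.7], "not-cat"),
]
model = train(training_data)
print(predict(model, [0.85, 0.15]))  # a cat-like input -> "cat"
```

With more labeled examples the centroids become better estimates, which is the simplest version of the paper's observation that bigger training sets yield more accurate learning.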
Brynjolfsson and Mitchell identify eight key criteria that help distinguish tasks that are suitable for ML, from those where ML is less likely to be successful.
- Learning a function that maps well-defined inputs to well-defined outputs. Such functions include classification — e.g., labeling images of specific animals or the probability of cancer in medical records — and prediction — e.g., the likelihood of default on a loan application. These amount to statistical correlations that don't necessarily capture causal effects.
- Large (digital) data sets exist or can be created containing input-output pairs. The bigger the training data sets, the more accurate the learning. One of the key features of deep learning algorithms is that, unlike classic analytic methods, there’s no asymptotic data size limit beyond which they stop improving.
- The task provides clear feedback with clearly definable goals and metrics. “ML works well when we can clearly describe the goals, even if we cannot necessarily define the best process for achieving those goals.” ML is particularly powerful when there are specific, system-wide performance metrics — e.g. get the most points in a video game, optimize the overall traffic flow of a city — and such metrics can be incorporated in the training data.
- No long chains of logic or reasoning that depend on diverse background knowledge or common sense. “ML systems are very strong at learning empirical associations in data but are less effective when the task requires long chains of reasoning or complex planning that rely on common sense or background knowledge unknown to the computer.” ML does well in situations that require quick reaction and provide quick feedback like a video game. It does less well in events that depend on the context established by multiple previous events.
- No need for detailed explanation of how the decision was made. Explaining to a human the reasoning behind a particular decision or recommendation made by a machine learning algorithm is quite difficult, because its methods — subtle adjustments to the numerical weights that interconnect its huge number of artificial neurons — are so different from those used by humans.
- A tolerance for error and no need for provably correct or optimal solutions. ML algorithms derive their solutions statistically, assigning probabilities to the different options they evaluate. It's rarely possible to train them to 100% accuracy. Even the best ML systems make errors — as do the best humans — so it's important to be aware that they're not perfect.
- The phenomenon or function being learned should not change rapidly over time. “In general, ML algorithms work well only when the distribution of future test examples is similar to the distribution of training examples.” If the function changes rapidly over time, retraining is typically required, which in turn means acquiring new training data.
- No specialized dexterity, physical skills, or mobility required. ML systems have already surpassed human levels of performance in a number of tasks. However, while the digital AI brains of robots are doing quite well, their physical capabilities are still quite clumsy compared to humans, especially when dealing with unstructured tasks and environments.
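The distribution-shift caveat in the list above is easy to demonstrate. The sketch below is a synthetic illustration (the one-dimensional data, class means, and midpoint-threshold "model" are all made up for this example): a decision threshold learned from training data performs well on test data drawn from the same distribution, and noticeably worse once the inputs drift.

```python
# A sketch of the distribution-shift caveat: a threshold learned from
# training data degrades when the test distribution drifts. All data
# here is synthetic; the numbers are illustrative, not from the article.
import random

random.seed(0)

def make_data(n, neg_mean, pos_mean):
    """Generate n negative and n positive 1-D examples as (value, label) pairs."""
    data = [(random.gauss(neg_mean, 0.5), 0) for _ in range(n)]
    data += [(random.gauss(pos_mean, 0.5), 1) for _ in range(n)]
    return data

def learn_threshold(samples):
    """Learn a 1-D decision rule: the midpoint between the two class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, samples):
    return sum((x > threshold) == (y == 1) for x, y in samples) / len(samples)

train_set = make_data(500, neg_mean=0.0, pos_mean=2.0)
t = learn_threshold(train_set)

same_dist = make_data(500, neg_mean=0.0, pos_mean=2.0)  # like the training data
shifted = make_data(500, neg_mean=1.0, pos_mean=3.0)    # both classes drifted upward

print(f"same distribution: {accuracy(t, same_dist):.2f}")
print(f"shifted inputs:    {accuracy(t, shifted):.2f}")  # notably worse
```

Re-running `learn_threshold` on freshly collected data restores accuracy, which is the point the authors make: when the function drifts, retraining on new data is required.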
The Science article includes fairly elaborate supplementary materials to help evaluate what the current generation of ML systems can and cannot do.
Since the 1980s, U.S. job opportunities have sharply polarized. Mid-skill occupations involving routine manual (blue-collar) and cognitive (white-collar) tasks have been declining because they're prone to automation and to outsourcing to lower-wage countries. At the same time, we've seen the steady growth of jobs involving non-routine, low-skill manual tasks — e.g. food and cleaning services, personal care and health care aides — and non-routine, high-skill cognitive tasks — e.g. managerial, professional and technical occupations.
We’re now entering a new era of automation. Our increasingly smart machines will be automating a broader set of tasks over the coming years. Looking at routine versus non-routine to predict which tasks are suitable candidates for automation is no longer enough.
A much broader set of tasks are now becoming candidates for ML automation. “Thus, simply extrapolating past trends will be misleading, and a new framework is needed.”
Most occupations involve a number of activities or tasks. Some of these activities are more susceptible to automation, while others require judgment, social skills and other hard-to-automate human capabilities. But the fact that some of the activities in a job have been automated does not imply that the whole job has disappeared. To the contrary, automating parts of a job will often increase workers' productivity and quality by complementing their skills with machines and computers, as well as enabling them to focus on those aspects of the job that most need their attention.
According to a recent McKinsey study, while almost half of all activities could be feasibly automated by 2030 by adapting currently available technologies, few occupations are likely to disappear entirely. Instead, a growing percentage of occupations will experience significant changes.
“Although economic effects of ML are relatively limited today, and we are not facing the imminent end of work as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound…” write Brynjolfsson and Mitchell. “The recent wave of supervised learning systems have already had considerable economic impact. The ultimate scope and scale of further advances in ML may rival or exceed that of earlier general-purpose technologies like the internal combustion engine or electricity. These advances not only increased productivity directly but, more important, triggered waves of complementary innovations in machines, business organization, and even the broader economy.”
“Individuals, businesses, and societies that made the right complementary investments — for instance in skills, resources, and infrastructure — thrived as a result, whereas others not only failed to participate in the full benefits but in some cases were made worse off. Thus, a better understanding of the precise applicability of each type of ML and its implications for specific tasks is critical for understanding its likely economic impact.”
Originally published at blog.irvingwb.com on July 23, 2018.