by Allen Chen, Andrew Mendoza, Gael Varoquaux, Steven Mills and Vladimir Lukic
When AI was first introduced into business processes, it was transformative, enabling companies to leverage their vast stores of accumulated data to improve planning and decision making. It soon became apparent, however, that integrating AI into business processes at scale required significant resources. First, companies had to recruit highly sought-after (and highly paid) data scientists to create the data models behind AI. Second, building and training the machine learning models that accelerated data analysis required a significant expenditure of time and energy. This, in turn, led to the development of automated machine learning (AutoML): a set of techniques that automate core aspects of the machine learning process, including model selection, training, and evaluation.
In effect, AutoML seeks to trade machine (processing) time for human time. This automation brings many benefits. First and foremost, it decreases labor costs. It also reduces human error, automates repetitive tasks, and enables the development of more effective models. By reducing the technical expertise required to create an ML model, AutoML also lowers the barriers to entry, enabling business analysts to leverage advanced modeling techniques — without assistance from data scientists. And by relieving data scientists of the machine learning process's repetitive tasks, AutoML frees these costly resources to pursue higher-value projects.
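To make this concrete, here is a minimal sketch of the kind of search an AutoML tool automates, using plain scikit-learn as a stand-in. The candidate models, hyperparameter grids, and dataset are illustrative choices of ours; dedicated AutoML tools explore far larger spaces of models, features, and hyperparameters than this.

```python
# A minimal sketch of what AutoML automates: searching over candidate
# model families and hyperparameters with cross-validation, rather than
# a data scientist tuning each combination by hand.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate model families and hyperparameter grids to search over.
candidates = [
    (LogisticRegression(max_iter=10000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # 5-fold cross-validation
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected: {type(best_model).__name__}, "
      f"held-out accuracy: {best_model.score(X_test, y_test):.3f}")
```

The loop is exactly the repetitive iterate-and-tune work the article describes: each grid point is a model a human would otherwise have had to configure, fit, and evaluate by hand.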
New solutions invariably raise new questions
As data scientists ourselves, we initially thought little of AutoML. Yes, these techniques and tools could produce reasonably effective models. But that was essentially all they could do — and not without drawbacks. In the early stages, AutoML tools were far less advanced, typically no more sophisticated than what a data scientist could implement with existing tools. These barriers to acceptance were compounded by AutoML's black-box nature, which makes trained models harder to interpret, and by the difficulty of finding immediate uses for it outside academic settings. Moreover, AutoML tool suites were far narrower in scope, solving only a portion of the problem — and adding little value.
AutoML has come a long way since then. In fact, it is now ubiquitous in the prevailing machine learning libraries, open-source tools and major cloud-compute platforms. Commercially available AutoML tools are making feature engineering and the development of complex machine learning models as easy as a few clicks of a button, enabling business users to deploy these models themselves in a production-ready state. As these more powerful AutoML tools proliferate, new questions arise, such as:
· Should we be using AutoML?
· If so, when should we use it, and when shouldn’t we?
· Can we expect the results to be better than hand-crafted models?
· Can these tools take the next step and replace data scientists altogether?
Blindly optimizing a metric risks enhancing biases
As we assess AutoML, we must recognize that performance is not the complete story, and that bias can play an important role in AI. Taking human data scientists out of the process does not necessarily result in bias-free results. A computer does not, for example, know that there is anything wrong with training facial-recognition algorithms using the faces of white people only — or that the result of doing so is that a phone may fail to unlock when presented with the face of a non-white user. It is therefore the responsibility of data scientists themselves to mitigate these biases by checking and correcting models that advantage one race, gender or protected class over another.
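The kind of check described above can be sketched in a few lines: compute a model's error rate separately within each subgroup of a protected attribute and flag large gaps. The labels, predictions, group names, and the idea of treating the gap as a red flag are all hypothetical illustrations of ours, not taken from any particular fairness toolkit.

```python
# A minimal sketch of a subgroup bias check: compare error rates across
# groups of a (hypothetical) protected attribute before deployment.
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Return the prediction error rate within each subgroup."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Hypothetical labels, predictions, and group memberships.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = subgroup_error_rates(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap warrants investigation, not deployment
```

Here group "b" has twice the error rate of group "a" (0.5 vs. 0.25); in a real setting, a gap like that across a race, gender, or other protected class is exactly the signal the data scientist must investigate and correct.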
Allowing biases to skew results can have negative consequences for businesses in virtually any industry. An example of bias in health care was recently published in Science magazine. The algorithm in question was designed to identify which patients would benefit from high-risk care-management programs. It was, according to the report, the kind of algorithm routinely used to determine care levels for more than 200 million people in the U.S. The study's authors found that the algorithm incorrectly determined that fewer Black people than white people needed such care programs — even though the Black patients in the data set had 26.3% more chronic illnesses than their white counterparts. The error occurred for two reasons. First, the algorithm used an individual's total health care costs for the previous year as a proxy for need. Since Black patients tend to be poorer than white patients, they spent less on health care regardless of how much care they actually needed. Second, the data set used to train the algorithm included seven times more data on white patients than on Black patients.
Similarly, Reuters reported in 2018 that the algorithm Amazon used for years to drive its hiring process unfairly excluded female candidates. The hiring algorithm had been trained on patterns in resumes submitted to Amazon over the previous ten years. Since the vast majority of applicants were men, the algorithm learned that male candidates were more likely to be selected. The algorithm also gave lower scores to resumes that included the word “women’s,” as in “women’s chess club captain,” and downgraded graduates of two all-women’s colleges.
These are just two examples of the potential ways that bias can insinuate itself into business decision making. Given how broadly AI-based processes are used to inform such decisions — some of which affect hundreds of millions of people — companies must be aware of biases and take all possible steps to remove or mitigate them.
The best data science model: Humans + AI
Nonetheless, in spite of the risk posed by undetected biases, we believe that the ease and potential time savings of developing models with AutoML make it a tool that every data scientist and data-science department should have on hand. It is a low-cost, high-potential tool that, at minimum, provides a solid performance baseline for hand-crafted approaches. In the best scenarios, AutoML will do this much faster than a human — and produce better models as well — as we will discuss in Part 2 of this series. Even so, data scientists need to be particularly careful that neither the assumptions they use to design their models nor the data they use to train them result in unintended consequences.
A final possible reason for the lack of AutoML uptake is that some data scientists fear the technology will soon make them redundant. This is similar to the concern accountants had in the 1980s when Microsoft introduced Excel. Rather than putting accountants out of work as they had feared, Excel made their jobs easier, automating many of the mundane tasks involved in managing financial documents.
Similarly, we believe that AutoML will make data scientists more efficient. Rather than spending time iterating over and tuning models, data scientists with access to AutoML tools can spend less time on these tasks and more time on higher-value efforts such as applying domain and industry knowledge. Given the paucity and expense of data scientists, this ability to shift resources should be a welcome development for business leaders.
Data scientists can rest assured knowing that not only can they continue to play a central role in AI development — they must continue to play such a role. If companies are to avoid the unforeseen consequences of bias in automation, humans must remain at the center of data modeling.
In the second of this two-part series, we will look at the strengths and limitations of AutoML, and underscore the critical role humans play in AI projects.