Automatic Machine Learning: Learning How to Learn
AlphaD3M, a new AutoML model, reduces computation time from hours to minutes
Iddo Drori (CDS Adjunct Associate Professor), Kyunghyun Cho (Assistant Professor of Computer Science and Data Science), Claudio Silva and Juliana Freire (Professors of Data Science, Computer Science, and Engineering), Remi Rampin (CDS Research Engineer), and Yamuna Krishnamurthy, Raoni de Paula Lourenco, and Jorge Piazentin of the NYU Tandon School of Engineering contributed to the recent paper, “AlphaD3M: Machine Learning Pipeline Synthesis.”
The paper introduces AlphaD3M, an automatic machine learning (AutoML) system whose objective is to learn how to learn via self-play. The “D3M” in AlphaD3M’s name comes from DARPA’s Data-Driven Discovery of Models (D3M) program, which has pushed machine learning toward solving any user-specified task, given any dataset. This goes beyond the traditional vision, in which an AutoML system solves a single, well-defined task given a dataset and performance criteria.
AlphaZero, the system from Alphabet’s DeepMind that taught itself chess, inspired the design of AlphaD3M. Where AlphaZero plays a two-player game, AlphaD3M casts pipeline synthesis as a single-player game. And instead of AlphaZero’s single kind of operation, the “move,” AlphaD3M gives the player three edit actions (insertion, deletion, and replacement) with which to synthesize a working pipeline. The model uses a recurrent neural network (RNN) with long short-term memory (LSTM), which takes training examples as input and outputs action probabilities along with an estimate of pipeline performance.
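The edit-action framing can be illustrated with a minimal sketch. The primitive names and helper functions below are hypothetical stand-ins, not the authors’ implementation; the point is that a pipeline is just a sequence of primitives, and each edit produces a new candidate “state.”

```python
# Illustrative sketch of AlphaD3M's three edit actions on a pipeline,
# represented here as a simple list of primitive names. These helpers
# and primitive names are invented for illustration only.

def insert(pipeline, pos, primitive):
    """Insert a primitive at a given position."""
    return pipeline[:pos] + [primitive] + pipeline[pos:]

def delete(pipeline, pos):
    """Remove the primitive at a given position."""
    return pipeline[:pos] + pipeline[pos + 1:]

def replace(pipeline, pos, primitive):
    """Swap the primitive at a given position for another one."""
    return pipeline[:pos] + [primitive] + pipeline[pos + 1:]

# Starting from an empty pipeline, a short sequence of edits
# synthesizes a candidate; each intermediate list is a state
# the single player moves between.
state = []
state = insert(state, 0, "random_forest")
state = insert(state, 0, "imputer")
state = replace(state, 1, "svm")
print(state)  # ['imputer', 'svm']
```

Because every pipeline is reachable through such small, local edits, the sequence of actions that produced it doubles as a human-readable construction trace.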
The researchers note that one benefit of this approach is that the resulting pipelines are easy to explain, because each one is reached through a short sequence of simple edit operations. Alongside the RNN, the model uses deep reinforcement learning through self-play and a Monte Carlo Tree Search (MCTS) algorithm. Together, these components let the system look ahead toward promising solutions and learn recurring patterns.
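To give a flavor of how the network guides the tree search, here is a hedged sketch of the AlphaZero-style selection rule (often called PUCT) that balances the network’s prior probabilities against value estimates accumulated during search. The constants, action names, and data layout are illustrative assumptions, not details from the paper.

```python
import math

def puct_score(prior, value_sum, visits, parent_visits, c_puct=1.0):
    """Mean observed value plus an exploration bonus scaled by the prior."""
    q = value_sum / visits if visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

def select_action(children):
    """Pick the child edit action with the highest PUCT score.

    `children` maps an action (e.g. an insert/delete/replace edit) to a
    dict holding the network's prior and the search statistics so far.
    """
    parent_visits = sum(c["visits"] for c in children.values())
    return max(
        children,
        key=lambda a: puct_score(
            children[a]["prior"],
            children[a]["value_sum"],
            children[a]["visits"],
            parent_visits,
        ),
    )

# A rarely visited action with a high prior can outrank a
# better-explored one, which is how the network steers the search.
stats = {
    "insert_pca":  {"prior": 0.6, "value_sum": 0.0, "visits": 0},
    "replace_svm": {"prior": 0.2, "value_sum": 2.0, "visits": 4},
}
print(select_action(stats))  # insert_pca
```

The self-play loop then uses the search results to improve the network’s priors and value estimates, which in turn sharpens the next round of search.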
Because the researchers represent the metadata and the pipeline chain together as a “state,” the model can treat each state like an entire board-game configuration. AlphaD3M performed competitively against state-of-the-art AutoML methods including Auto-sklearn, TPOT, and Autostacker. The researchers concluded that AlphaD3M, whose neural network takes advantage of GPUs, is a full order of magnitude faster than existing AutoML methods, decreasing computation time from hours to minutes. AlphaD3M also outperforms SGD, the baseline pipeline, on 75% of datasets, and performs worse than SGD in only 7% of cases.
By Sabrina de Silva