Adam Blum
May 25 · 2 min read

Shortly after Microsoft released its AutoML product a few months ago, we published a comparison of Auger.AI's accuracy versus Azure's on Microsoft's own chosen datasets. Specifically, in the paper behind its AutoML approach, Microsoft chose 89 OpenML datasets. As Azure did, we also compared accuracy against H2O and TPOT. The results showed Auger with 4.5% better accuracy than Azure, with each product running experiments (evaluating many algorithm/hyperparameter combinations) for one hour on identical hardware. H2O and TPOT actually slightly outperformed Azure as well: Auger was 3.5% better than H2O's accuracy and 4.0% better than TPOT's results. Interestingly, Auger even outperformed the hand-built models from the OpenML competition winners by 4.4%.

Since the average accuracy is 80%, a roughly 4% increase in accuracy actually reduces errors by 20%. Since machine learning practitioners usually struggle to gain fractions of a percent on such contests (witness how close all the other AutoML solutions besides Auger are in accuracy), such accuracy increases are truly astounding. We describe the techniques that enable this better performance here.
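
To make that error-reduction arithmetic concrete, here is a minimal sketch in Python (illustrative only; the 80% baseline and roughly 4-point gain are simply the averages quoted above):

```python
# Minimal sketch of the error-reduction arithmetic (illustrative only).

def relative_error_reduction(baseline_accuracy: float, improved_accuracy: float) -> float:
    """Fraction by which the error rate shrinks when accuracy improves."""
    baseline_error = 1.0 - baseline_accuracy
    improved_error = 1.0 - improved_accuracy
    return (baseline_error - improved_error) / baseline_error

# 80% average accuracy improved by roughly 4 points: errors drop from 20% to 16%,
# i.e. a 20% relative reduction in errors.
print(round(relative_error_reduction(0.80, 0.84), 2))  # -> 0.2 (20% fewer errors)
```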

Recently, Google finally released its own true AutoML product, Google AutoML Tables (last year's AutoML release was really Neural Architecture Search). We attempted to run Google AutoML Tables against the “Microsoft 89”. Unfortunately, due to Google AutoML Tables' many limitations, we were only able to run it against a portion of those datasets. Based on the results for those datasets, Google was unquestionably the laggard among AutoML products: Auger outperformed Google by 6.8%.

But these were, after all, Microsoft's cherry-picked datasets (even though Microsoft's performance was second worst). Should we let Google cherry-pick its own datasets for comparison as well? We had to wait a bit for Google to do so, but on May 9th Google finally published some results for AutoML Tables against a handful of Kaggle competitions (choosing just six competitions presumably makes the selection even more filtered than Azure's subset to highlight Google's accuracy). Auger again outperforms Google, by an average of 3%.

We are excited to see yet more entrants into the AutoML space. For dedicated Google Cloud and Azure customers, these two products may have some appeal. Nevertheless, the purpose of AutoML (versus just generic machine learning usage) is to increase accuracy. So a product such as Auger.AI that yields 3 to 5 percent better accuracy (and hence generally 15 to 50 percent reductions in errors) should be appealing to the majority of machine learning practitioners.
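
The width of that 15 to 50 percent range follows from the baseline: the same accuracy gain removes a larger share of the remaining errors on datasets that already score higher, which is presumably where the upper end of the range comes from. A quick sketch (the baseline/gain pairs below are illustrative assumptions, not figures from either benchmark):

```python
# Illustrative only: the same gain in accuracy points yields different
# relative error reductions depending on the baseline accuracy.
for baseline, gain in [(0.80, 0.03), (0.80, 0.05), (0.90, 0.05)]:
    reduction = gain / (1.0 - baseline)  # share of remaining errors eliminated
    print(f"baseline {baseline:.0%}, +{gain * 100:.0f} pts -> {reduction:.0%} fewer errors")
# baseline 80%, +3 pts -> 15% fewer errors
# baseline 80%, +5 pts -> 25% fewer errors
# baseline 90%, +5 pts -> 50% fewer errors
```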

Written by Adam Blum
CEO of Auger.AI — Technical co-founder of startups, father of kids, writer of books, racer of ultramarathons. Building ML tools in four different decades…
