Deep Learning

The Unreasonable Ineffectiveness of Deep Learning on Tabular Data

And the recent work to address this shortcoming

Paul Tune
Towards Data Science
14 min read · Apr 26, 2020

Source: Photo by Alina Grubnyak on Unsplash

Deep neural networks have led to breakthroughs in domains long considered challenging. Two notable examples are computer vision and natural language processing (NLP).

Computer vision has developed rapidly, beginning with AlexNet's breakthrough win in the 2012 ImageNet challenge. In 2015, ResNet¹ achieved superhuman accuracy on the ImageNet benchmark for the first time. We also witnessed the birth of generative adversarial networks (GANs)² in 2014, followed by rapid improvements that ultimately led to the lifelike portraits of fake people produced by StyleGAN today.

In NLP, deep neural network models are now state-of-the-art, outperforming conventional machine learning algorithms on benchmark datasets. Models such as GPT-2³ and BERT⁴ are the new gold standard. Google deployed BERT in its search engine in 2019, the single largest update to its search engine in years. GPT-2 is used in chatbot applications, and in some more novel ways, such as the popular text adventure game AI Dungeon. It won't be long before these solutions become…
