Big Data is Dead. All Aboard the AI Hype Train!

It’s 2016, and businesses big and small, far and wide have finally stopped using the term Big Data.

The consensus seems to be converging on the idea that data alone doesn’t solve problems. It’s true. You still need to understand, analyze, and test, test, test data against hypotheses to prove intuitions and make solid decisions. These are things that should be happening regardless of the size of your data.

This is not how AI works

But instead of developing creative uses for the data we have, we’re all now looking to ‘cognitive computing’ and ‘artificial intelligence’ to save us. Companies like Google, Facebook, Microsoft, and IBM are locked in an arms race, trying to outsmart and out-engineer each other for better accuracy. Meanwhile, marketing teams with lots of money have entranced us all with the possibility of having computers think for us, tell us what our problems are, and auto-magically fix them to improve our business processes.

I mean it’s twenty-goddamn-sixteen up in here. No one wants a flying car anymore unless it’s driving itself.

Yes, Kalev H. Leetaru actually said that

In response to the death of Big Data, companies who need to sell more stuff are now telling us: now that you have this data, what your business really needs is analysis done by super-fast, omniscient computer brains. Which is a nice idea, but ‘artificial intelligence’ isn’t anywhere close to what most people consider it to be.

Raise your hand if you’ve seen PowerPoint slides like this

Near the end of last year, analysts were proclaiming that 2016 would be the year the algorithms make data useful. Gartner made headlines by proclaiming “Data is Dumb. Algorithm is Where the Value Lies.” IBM’s TV ads allude to the notion that Watson can help Bob Dylan improve his songwriting. And nearly everyone’s afraid these AI algorithms will eventually destroy the world.

Sanity check: If these algorithms are so smart and therefore valuable, why are Facebook and Google (and scikit-learn) giving away their state-of-the-art algorithms for free?

Consider how Google operates. The MapReduce paradigm was so crucial to Google’s core business that its very existence was kept close to the vest. It was a key business driver and led to enormous growth within the company. When Google decided to reveal and give away MapReduce, they were so far ahead of the data parallelization game that they didn’t need it anymore.
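For anyone who never met it, the MapReduce idea fits in a few lines: express your job as a map function that emits key-value pairs and a reduce function that combines them, and the framework spreads those two pure functions across thousands of machines. Here is a single-machine toy sketch of the classic word count, not Google’s implementation:

```python
# The MapReduce pattern in miniature: word count as a map step
# (emit (word, 1) pairs) and a reduce step (sum counts per word).
# The real value was running these steps in parallel across a cluster.
from collections import defaultdict

def map_step(document):
    # Map: emit one (word, 1) pair per word in the document.
    return [(word, 1) for word in document.split()]

def reduce_step(pairs):
    # Reduce: sum the counts for each distinct word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data is dead", "data is dumb"]
pairs = [p for doc in docs for p in map_step(doc)]
print(reduce_step(pairs))
# -> {'big': 1, 'data': 2, 'is': 2, 'dead': 1, 'dumb': 1}
```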

Following that logic, Google giving away their “AI engine” TensorFlow should mean that Google already has something so mind-blowing it can tell you what you’ll have for dinner tonight.

Or perhaps the more likely explanation is that Google has no idea how to extract value from it. I mean, other than recognizing pictures of cats.*

I know it’s a very bold statement to make. But in practice, neither Google nor Facebook has found a way to use their “artificial intelligence” superpowers to improve their core business: getting me to view or click their ads.

IBM’s “cognitive computing platform” Watson is in a similar situation. Sure, it did a great job of retrieving facts and winning Jeopardy in 2011, but it quickly faded into relative obscurity. In 2014, IBM put together a $100MM fund to spur app development for Watson, and all they seem to have to show for it is 8 featured apps on their home page, none of which I completely understand. Not even a giant pile of money could bring a high-visibility app to Watson. Curious.

In the Bob Dylan ad, Watson claims it can read millions of documents quickly, with the only conclusion being that Dylan’s major themes are that time passes and love fades. Dylan then suggests they write a song together, to which Watson, in its sole stroke of brilliance, evades the suggestion by saying, “I can sing.”

https://en.wikipedia.org/wiki/List_of_burn_centers_in_the_United_States

While it is an exciting research field, AI in its current state is nothing more than algorithms—math instructions. Algorithms are fast. Algorithms are often elegant. But algorithms are still dumb. Even when they’re “self-correcting,” they still need an immense amount of human intelligence and input to do something simple. AI is currently nowhere near the levels promised in Wired, TechCrunch, or Gartner reports.
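To make “algorithms are still dumb” concrete, here is one of the simplest “machine learning” algorithms, nearest-neighbor classification, stripped down to its math. The spam-filter features and labels are made up for illustration; the point is that every ounce of intelligence lives in the human-chosen features and hand-applied labels, not in the algorithm, which is just a distance calculation and an argmin:

```python
def nearest_neighbor(examples, labels, query):
    """Return the label of the training example closest to `query`."""
    def distance(a, b):
        # Squared Euclidean distance: pure arithmetic, no "thinking".
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(examples)), key=lambda i: distance(examples[i], query))
    return labels[best]

# Human intelligence did all the work: someone decided which features
# matter and hand-labeled every example.
emails = [[0, 1], [1, 1], [1, 0], [0, 0]]  # [contains_link, all_caps]
labels = ["ham", "spam", "spam", "ham"]    # labeled by a person

print(nearest_neighbor(emails, labels, [1, 1]))  # -> spam
```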

“By 2030, 90% of jobs as we know them today will be replaced by smart machines,” again, Gartner. LOL.

¯\_(ツ)_/¯
— Silicon Valley

There is an inherent belief ‘round these parts that all problems (e.g. world hunger) can be engineered away if you code enough lines, and that intelligence is just a matter of sufficiently-programmed algorithms. Pump enough Big Data™ into this Artificial Intelligence Engine™ and all your problems will be solved by intelligent computers. But I believe intelligence requires much more than just engineering and processing power. Intelligence has overtones of curiosity, problem-solving (and problem-creation), and a touch of insanity, none of which have been replicated in any AI lab.

“At the risk of overgeneralizing, the CS majors have convinced each other that the best way to save the world is to do computer science research.” —Dylan Matthews, Vox

Edit: Added March 16, 2016

PS. Feel free to ping me on Twitter or LinkedIn with your thoughts.

* Update: Go. They taught it to play Go.

I write about logic and reason, data and people.
