Yet Another AI Hype Cycle?
In my recent book, Silicon Collar, I profiled a number of examples of machine learning, cognitive computing, and other evolving artificial intelligence. With growing and diverse data sets and massive computing power, we have never had so much opportunity to train machines. In many ways it is an exciting time for AI.
However, I also pointed out that we have had many, many false starts:
“Since the 1950s! That is when Alan Turing defined his famous test to measure a machine’s ability to exhibit intelligent behavior equivalent to that of a human. In 1959, we got excited when Allen Newell and his colleagues coded the General Problem Solver. In 1968, Stanley Kubrick sent our minds into overdrive with HAL in his movie, 2001: A Space Odyssey. We applauded when IBM’s Deep Blue supercomputer beat Grandmaster Garry Kasparov at chess in 1997. We were impressed in 2011 when IBM’s Watson beat human champions at Jeopardy! and again in 2016 when Google’s AlphaGo showed it had mastered Go, the ancient board game. Currently, we are so excited about Amazon’s Echo digital assistant/home automation hub and its ability to recognize the human voice, that we are saying a machine has finally passed the Turing Test. Almost.”
And many smart folks say we have still barely scratched the surface in terms of understanding the human mind. Yale computer science professor David Gelernter writes in his book The Tides of Mind about “the spectrum of consciousness.” As we go down that spectrum, we “prefer narrative to logic, and cross eventually into the difficult-to-remember realms of dreams.” Today’s AI is focused only on the higher areas of the spectrum he describes.
Yann LeCun, director of AI research at Facebook, has commented, “Despite these astonishing advances, we are a long way from machines that are as intelligent as humans — or even rats. So far, we’ve seen only 5% of what AI can do.”
And yet, we see a huge increase in hype about AI
IBM started marketing Watson five years before it should have. It is finally gaining some momentum, and the company claims that by the end of 2017 “we’ll have a billion people touched by Watson.” So why confuse matters by announcing a collaboration with Einstein, the AI portion of the Salesforce platform? I read the Fortune interview with the two CEOs, Ginni Rometty and Marc Benioff, and I could not figure out how or why they would work together. Indeed, Rometty jokes that the two AI brand names make for good comedy. More ominously, the partnership press release announced that “IBM will deploy Salesforce Service Cloud across the company to transform its global product support services and gain a single, unified view of every IBM customer.” Is it really, truly about AI?
In researching my book, I came across many technologists who claim today’s AI can do much of what CPAs, architects, even surgeons can do. But when I asked if they had looked at the daily tasks, skills, and attributes those jobs require, few had bothered. If they had, they would have learned that today’s CPA does very little bookkeeping, and is instead making all kinds of judgment calls about internal controls and ever-changing accounting standards, counting inventories and cash balances, and confirming data with third-party sources, among other tasks. For a machine to do all those tasks you would need a “frankensoft” with some cognitive computing skills, some robotic skills, some camera/scanning skills, and drone-like visualization, among other attributes. Could someone put such a machine together? Sure, but at what cost? Not cheaply or reliably enough; otherwise accounting firms would not be hiring accounting graduates at record levels. And it is not just accounting: few jobs anymore involve doing the same tasks over and over all day long.
Cambridge University has a “Centre for the Study of Existential Risk.” It is “dedicated to the study and mitigation of human extinction-level risks that may emerge from technological advances and human activity.” It is spending a fair amount of time thinking about risks from AI, such as “automated hacking, the use of AI for targeted propaganda, the role of autonomous and semi-autonomous weapons systems, and the political challenges posed by the ownership and regulation of advanced AI systems.” Color me cynical that AI will be that advanced any time soon. To me, the risks from pandemics, asteroid direct hits, and crazy dictators require far more immediate attention.
But why blame just them? We are living in a time of AI hype. Bloomberg says mentions of AI in corporate earnings call transcripts have spiked dramatically in the last couple of years.
Amid this groupthink about AI’s prowess, here is what we need to watch for. Time wrote, “(Yale’s) Gelernter is vastly outnumbered — so much so that he worries that his ideas might simply be ignored. ‘There has never been more arrogance and smugness’ than in today’s self-congratulatory scientific culture, he asserts.”
Personally, I would rather see modest use cases of the kind Watson is starting to show. Gradual progress is better than the grandiose promises we have been making for seven decades now.