The real problems with Artificial Intelligence

Rob Leclerc
3 min read · Dec 14, 2015


Over the past year Elon Musk has been very vocal about the dangers of Artificial Intelligence. As an encore, Elon Musk, Sam Altman (Y Combinator) and others joined forces to create a new venture called OpenAI to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

There are a number of problems and misconceptions with Artificial Intelligence. Human-level AI is not one of them.

First, we’ve been here before, and recent advances in AI are overhyped. Go back to the 1960s and you’ll hear Marvin Minsky at MIT proclaiming that “Within a generation … the problem of creating ‘artificial intelligence’ will be substantially solved.” Fifty years later, AI still can’t tell the difference between a leopard and a couch.

The central problem is that we keep trying to engineer AI when we should be trying to reverse-engineer it, to identify the salient principles underlying neural computation and architectural design. So, not surprisingly, with every major advance in AI over the last 67 years, neuroscientists invariably come out and say, “Why didn’t you come talk to us 15 years ago? We could have told you that.”

And I don’t expect that these new advances in AI will be any different, because all we’ve done is apply a new trick or technique that has enabled an advance. This current generation, like the others, will advance asymptotically until we’ve exhausted any meaningful gains. Then the world will proclaim that AI didn’t live up to its promise, and a few years later someone will (re)discover a new principle of neural computation and the cycle will begin again.

To emphasize how little we actually understand about neural networks: most engineers are still working with fully connected network representations, where all neurons in one layer of the network are connected to all neurons in the subsequent layer. Unfortunately, these fully connected networks cloud our understanding. They don’t reveal the logic circuitry of the neural network, since the mathematical representation of the network carries many spurious (non-zero) interactions.

However, it’s trivial to run a perturbation analysis to remove spurious interactions that don’t contribute to the information processing of the circuit. The same is true for Gene Regulatory Networks (GRNs), which share the same mathematical representation as artificial neural networks. In a 2008 paper I published in Molecular Systems Biology (130+ citations), I showed that when interaction networks were allowed to evolve their connectivity, they would shed spurious interactions while maintaining their function. When you do this you reveal the logical circuitry of the network. We have the tools today to better understand the principles of neural computation that would get us closer to human-like AI, but very little work is going into this area.
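To make the idea concrete, here is a minimal sketch (my illustration, not the method from the paper) of a perturbation analysis in Python/NumPy: take a toy fully connected layer, zero out each weight in turn, and prune any weight whose removal leaves the layer’s input-output behavior unchanged within a tolerance. The toy weights and the tolerance of 1e-3 are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer: 4 inputs -> 3 outputs.
# Real weights have magnitude >= 0.5; we inject two tiny,
# "spurious" (non-zero but functionally irrelevant) interactions.
W = rng.uniform(0.5, 1.5, size=(3, 4)) * rng.choice([-1.0, 1.0], size=(3, 4))
W[0, 2] = 1e-4
W[2, 1] = -1e-4

def layer(W, x):
    """One fully connected layer with a tanh nonlinearity."""
    return np.tanh(W @ x)

# Probe the layer's function with a batch of random inputs.
X = rng.normal(size=(4, 100))
baseline = layer(W, X)

# Perturbation analysis: zero each weight, measure how much the
# output changes, and keep the zero if the change is negligible.
pruned = W.copy()
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        trial = pruned.copy()
        trial[i, j] = 0.0
        effect = np.max(np.abs(layer(trial, X) - layer(pruned, X)))
        if effect < 1e-3:   # weight doesn't contribute to the circuit
            pruned = trial

print("non-zero weights before:", np.count_nonzero(W))
print("non-zero weights after: ", np.count_nonzero(pruned))
```

The surviving non-zero weights are the layer’s actual logic circuitry: the two injected spurious interactions are stripped away while the layer’s function is preserved.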

The final problem with AI, and perhaps the biggest existential danger, is not that human-like AI will violently take over. Rather, it’s that we’ll willingly hand over our decision making to highly specialized recommendation algorithms, algorithms that promise to optimize our lives so that we can be our best selves.

These algorithms will tell us what to eat, when to eat, what colors to wear, what sleep schedule we should abide by, what job we should do, what school we should go to, who we should date and marry, who we should vote for, what we should invest in, what path to take to work. Strip away all of these decisions and what do we have left? After a couple of generations, will we even have the cognitive ability to make our own decisions, or to question the ultimate decisions of our self-imposed algorithmic overlords?

And perhaps what’s most scary is that we’re already there. How often do we refuse to see a movie because it doesn’t have at least a 60% rating on Rotten Tomatoes, or skip a restaurant because it doesn’t have 500+ reviews on Yelp? How soon before we can codify these recommendations into an algorithm that’s correct 95% of the time? And how many of us have disregarded our own judgement when using Google Maps, only to go down a wrong-way street or, in the extreme case, drive off a bridge? If we surrender our free will, what’s left of humanity?

There are problems with AI but human-level intelligence isn’t one of them.

Basically, this new generation of AI is largely built on neural computing principles first explored back in the ’50s, newly integrated to generate some very good pattern recognition algorithms.


Rob Leclerc

Rob is the cofounder of AgFunder, an online investment marketplace for the $6.4 trillion global agriculture industry.