The Real Threat of Artificial Intelligence

Todd Cullum
3 min read · Sep 20, 2016


I know what you’re thinking… This is going to be another one of those “Skynet is going to become self-aware and kill us all” posts… Believe me, I’ve read my fair share (read them at your own risk). I’ve heard it all… Elon Musk, Bill Gates, Stephen Hawking, Nick Bostrom, and more… This huge fear that somehow we’re going to build a computer that is not just an AI, but one that can recursively make itself smarter, jumping so far ahead in intelligence that we’d look the way ants look to a human being… A computer that can solve all problems.

Ooooooooooooooooooooooooooooooooooooooohhhhhhhhhhhhh.

Seriously though, this is apparently the newest cool thing on the block to worry about if you’re a billionaire, philosopher, or theoretical physicist. I’ve been on a quest to get to the bottom of how realistic these horror stories are, especially having lived through Y2K unscathed, and having read the story of the Tower of Babel, where our ancestors thought they would reach heaven by building a tower tall enough to get there.

In any event, I have made a few interesting discoveries thus far: the people creating all of the fear and hype are not AI experts. In fact, the people who actually work with AI on a regular basis not only don’t believe there is going to be a Skynet any time soon (or ever); their projections for when we may reach this so-called “artificial general intelligence” tend to be farther out than the journalists and paranoid non-experts suggest. I mean, let’s face it… Elon Musk and Stephen Hawking are extremely smart, successful people, but they are not artificial intelligence researchers by any means.

Plus, every explanation I’ve read of how this artificial superintelligence threat would arise is broken. I’ve read articles claiming that a computer could recursively improve itself overnight, so that the next day it is 12,000x more intelligent than the smartest human. However, they offer no explanation for how the computer could act on this, and no explanation for how it would go through the physical trial-and-error process of learning and improving — because we all know there is a trial-and-error process involved.
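
To make that hand-wave concrete, here’s a toy sketch in Python (my own made-up numbers and growth model, not anything from an actual paper) of how the “overnight explosion” arithmetic usually goes, and the step it quietly skips:

```python
# Toy model of the naive "overnight intelligence explosion" story.
# Purely illustrative: the starting level, growth rate, and cycle
# count are invented for this sketch, not taken from any source.

def naive_explosion(intelligence: float, growth_rate: float, cycles: int) -> float:
    """Compound self-improvement, assuming zero physical bottlenecks."""
    for _ in range(cycles):
        # Each cycle, the AI supposedly "improves itself" by a fixed fraction,
        # instantly and for free.
        intelligence *= (1 + growth_rate)
    return intelligence

# If each cycle adds 10% and takes, say, 10 minutes, then one night
# (~100 cycles) compounds to roughly 13,780x the starting level --
# on the order of the 12,000x figure those articles throw around:
print(naive_explosion(1.0, 0.10, 100))  # ~13780.6

# The step this model skips: in reality, each improvement cycle would
# need experiments, new hardware, and data collection -- trial and error
# in the physical world that can't simply be compounded overnight.
```

The multiplication works out just fine on paper; it’s that assumption of free, instant improvement cycles that the articles never account for.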

All around the internet, the information is scattered and broken. But one thing is for certain: nobody really seems to know exactly how it will pan out. So the most logical thing is to look at history: we humans have a tendency to see an unknown coming and jump to ridiculous, fearful conclusions. Remember Y2K? And let us not forget that there are enough nuclear weapons stockpiled in the US and Russia to wipe us all off the face of the planet right now, no AI needed.

However, I do see a much more likely threat coming from AI. A simple Google search shows the steady rise of the global population, right? We’re healthier than ever and having more and more children… And now we’re trying to create artificial intelligence to handle work for us.

I’m no rocket scientist, but does anyone see a problem here? Not to mention, a decent number of us struggle to take care of our own human concerns, let alone send a bunch of non-humans out into the workforce. That is definitely a more likely concern than the robot apocalypse… at least for this century. I mean, should we really be spending billions of dollars on getting machines to do our work for us when so many of us are struggling to keep work as it is?

Of course, there are a zillion and one arguments about how AI will save us all and how we will have effectively created God (I’m not holding my breath), mostly from people who have no AI experience (go figure). But I leave you with that food for thought…
