AI: The next leap in tech, or the end of mankind?

Peter Caruana · Published in LassondeSchool · Jan 4, 2018

Most AI alarmists are not computer scientists, and that’s a problem

Recently, it seems that more and more prominent figures in science and technology are chiming in on one of the hottest topics today: Artificial Intelligence. Everyone from billionaire tech mogul Elon Musk, who called it a “bigger threat than North Korea,” to world-renowned physicist Stephen Hawking, who warned that AI could end mankind entirely, has weighed in.

As an aspiring computer scientist, I can only let out another sigh as I wait for the next celebrity to add to the AI apocalypse hysteria.

One might be justified in feeling uneasy at the idea that, right now, hundreds of corporations and governments are devoting huge amounts of resources to researching and developing the antithesis of humanity. Fortunately, I can quite confidently say that this is not the case.

I will admit, I have a bit of a personal gripe with the AI alarmists. My issue largely stems from the fact that few, if any, have a background in the relevant subject matter, or even in computer science in general. I really like Neil deGrasse Tyson, but there is a reason you don’t see me trying to talk to you about astrophysics. This is especially frustrating since the vast majority of AI researchers and experts disavow the notion of super-intelligent AI bringing about the techno-rapture. Trust me folks, Skynet isn’t rising anytime soon.

The Robot Uprising?

So what is Artificial Intelligence? Are researchers trying to recreate HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey?

I think the greatest barrier to understanding the topic is the question of what exactly is meant by “intelligence”.

In computer science, we call a system intelligent if it is capable of perceiving its environment and acting in order to achieve a defined goal, or otherwise simulating intelligent behaviour. You can probably tell this is very different from what we typically think of as intelligence.
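To make that definition concrete, here is a minimal sketch of an “intelligent” system in that narrow sense: a thermostat-style agent that perceives a temperature and acts to reach a target. The example, and every name and number in it, is my own illustration, not anything from a research system.

```python
# A toy "intelligent agent" in the computer-science sense:
# it perceives its environment and acts to achieve a defined goal.
# Nothing here resembles human-style understanding.

TARGET_TEMP = 21.0  # the defined goal: hold the room at 21 °C


def perceive(environment):
    """Read the only thing this agent can sense: the current temperature."""
    return environment["temperature"]


def act(temperature):
    """Choose an action that moves the environment toward the goal."""
    if temperature < TARGET_TEMP - 0.5:
        return "heat"
    if temperature > TARGET_TEMP + 0.5:
        return "cool"
    return "idle"


# One step of the perceive-act loop
room = {"temperature": 18.2}
print(act(perceive(room)))  # -> "heat"
```

That is the whole bar being cleared: sense something, act toward a goal.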

Machine learning, which is what we currently use to implement AI, simply means giving a computer a set of data and having it learn autonomously how to solve a defined problem. A commonly used and powerful example is the neural network, which I will get into in a little bit.
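As a rough illustration of what “giving a computer data and having it learn a defined problem” looks like in practice, here is a short sketch using the scikit-learn library (my choice for the example, and deliberately not a neural network): a classifier that learns to recognize handwritten digits purely from labelled examples.

```python
# A minimal machine-learning example: the program is never told what a
# "3" looks like; it infers that from labelled example images.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 images of handwritten digits, with labels

X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # the defined problem: map pixels -> digit
model.fit(X_train, y_train)                # learn the mapping from the data

print("accuracy:", model.score(X_test, y_test))
```

The programmer supplies data and a goal; the rules for telling a 3 from a 7 are found by the algorithm itself.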

What many of the alarmists seem not to understand is the sheer magnitude of what it would take to replicate human intelligence. You can think of anything that an AI system solves as a problem. Now, imagine all the things a human can perceive and all the ways we can react to our environments, each as a distinct problem. (We’re going to have to put on our mathematician’s hats for a moment, so bear with me here.)

The set of all possible variations of a problem is (at least in real-world cases) infinite, so for an AI to understand and react to its environment it has to be able to abstract. This could mean something like knowing the difference between a random set of coloured pixels and the number three within a picture. To replicate human intelligence, it would then have to abstract over the entire infinite set of problems, each of which itself contains an infinite set of variations, an even bigger infinity.

The question becomes whether this larger infinity even falls within the set of problems that are computationally possible at all. As Steve Wozniak said, when asked how he got over his fear of AI:

“We don’t really even know what intelligence is.”

Which leads me to my next point.

We Don’t Even Understand Our Own Intelligence

It would be a stretch for anyone to say they truly understand how the human brain works. Our knowledge is still very limited, and we can hardly expect to replicate intelligence without first understanding it.

But Peter, I hear you say, you mentioned something about AI using a neural network. Isn’t that a simulation of how the brain works? This is true, in a very loose sense. A neural network in the context of AI borrows the idea of interconnected neurons which, when given data, can formulate and infer rules on their own. On a technical level, however, a neural network is very much just a clever use of calculus.

I’ll spare you the details, but suffice it to say that a neural network is more or less a multi-variable math function whose parameters have been optimized, using a vector gradient and back-propagation, to spit out certain values when certain kinds of input are given.

The amazing thing about neural networks is that the programmer does not have to specify any rules to the system, only how to read the input and whether an output matches the expected output. This is what “training” a network means. The way I see it, you should fear a neural network becoming a malicious intelligence about as much as you fear the Pythagorean theorem coming to life and haunting you (a recurring nightmare of mine).
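For the curious, here is roughly what that means in code. This is a toy sketch of my own, not anything out of a real AI lab: a tiny one-hidden-layer network, written in plain NumPy, that learns the XOR function from nothing but input/output pairs, with back-propagation supplying the gradients.

```python
import numpy as np

# Training data: the only thing the programmer supplies is inputs and
# the outputs they should map to (here, the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # hidden-layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # output-layer parameters

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: the network is just a multi-variable function of W1, b1, W2, b2.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    loss = np.mean((out - y) ** 2)

    # Back-propagation: the chain rule, applied layer by layer, gives the
    # gradient of the loss with respect to every parameter.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0, keepdims=True)

    # Gradient descent: nudge every parameter downhill. No rules anywhere.
    lr = 1.0
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(out, 2))  # typically very close to [[0], [1], [1], [0]]
```

Strip away the jargon and you are left with arithmetic and derivatives, which is exactly the point.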

We are nowhere near this kind of AI.

What Actually Keeps Me Up at Night

Now that we’ve slain the proverbial beast of the AI apocalypse, I think it’s important for people to understand what actual problems AI poses. And believe me, there are many.

Corporations like Google and Facebook are using AI and machine learning algorithms to gather and analyze every bit of information about you. It may surprise you to know that those messenger apps you have installed on your phone can actually access:

- your contacts

- your microphone

- your location

- your message contents

- your social networks

- your browsing history

- other metadata

If it seems a bit scary, it’s because it is. While I don’t think AI is going to kill us all, that doesn’t mean it cannot be abused. AI can be used to infer a person’s spending habits or interests and tailor marketing and advertisements to them.

It seems benign on the surface; however, concerns are starting to be raised about the ethics of using AI in social networks and how it impacts privacy.

What worries me is that not enough people know about the current uses of AI that affect their everyday lives in predatory ways.

We need more awareness of the uses and benefits of AI in society, as well as the possible avenues for abuse. Instead, we have prominent figures irresponsibly using their public platforms to spread claims about topics they do not understand. AI is one of the most exciting developments in the world of computation. People should be informed about the future, not afraid of it. To quote Google’s AI chief, John Giannandrea:

“Technology should augment the human intellect, not replace it. It should be a powerful tool to help us think better, and I think that is really the journey we are on.”

Peter Caruana is an undergraduate student at the Lassonde School of Engineering, York University, pursuing a double major in Computer Science and Physics.