In our own image
Since the birth of consciousness and complex thought, mankind has been observing and trying to model the universe. Part of this stems from practicality and application: knowing our universe and being able to predict outcomes given certain inputs allows us to harness the power of nature and to avoid destruction. The development of tools helped early humans harness kinetic energy, and more recent developments help us harness far more than that. Part of it stems from pure curiosity: we want to know about stars exploding at the edge of the universe, unthinkable distances away, simply because we want to know what is out there.
Inversely, we project our own thoughts and consciousness back into the universe. This creates a feedback loop between us and our world that propels mankind forward. We observe and internalize, we intuit and innovate, we create, and then we do it all over again from a point farther ahead than where we started. This has been happening for millennia, with our focus shifting from realm to realm. These shifts in focus are how I would describe the “ages” that historians use to categorize the stretches of human history in which mankind made determined progress: the Stone Age, the Bronze Age, and more recently the Industrial Age and the Information Age. (The source below more or less outlines my thoughts here, explaining that mankind has long periods of stagnation punctuated by brief transitional periods of innovation.)
Our current state in the Information Age has us considering information — how it is represented, how it is used, how it is processed — and seeing what we can do with it. Machine learning is a huge part of this, and following our usual method of progression, we project our own thoughts and consciousness into machine learning algorithms. Like God, we create in our own image. As for the direction of the technology, this means to me that machine information processing will have the same shortcomings as human thought. And why wouldn’t it? Changing the physical tools used to manifest a methodology of information processing won’t change the shortcomings or strengths of the methodology itself. So machines will not learn to take over the world.
The only thing that makes machine learning different from human learning (assuming an ideal state in which we can perfectly mimic human learning within the field of machine learning) is, of course, that it is carried out by machines. If the argument, then, is that machines will take over the world because they can learn like us while being more powerful or stronger than us because they are made of metal, I would counter that nuclear weapons already took over the world in the Atomic Age. Just because we have created physical instruments that can destroy us does not mean that those instruments will be used. Terminator and the like are fun ideas and make for good narratives and horror stories, but I would pose the question back to the opposing camp: bring to light an argument that machine learning algorithms will be used to rule the world that isn’t overly extrapolated or sensationalized.