In the Future, There Will Be Robots.

Anna K Hegedus
6 min read · Dec 15, 2016
Hopefully not like this.

In 1949, the mathematician John von Neumann developed the concept of a self-replicating machine: a physical, non-biological system capable of building copies of itself from available resources, an idea later extended into the interstellar "Von Neumann probe." In other words, these are machines that reproduce, much as humans do. If machines can reproduce like humans and can improve upon their own design, can robots ever be equal to humanity?

This discussion is about the similarity between human life and robots. A strict line quickly emerges between technological feasibility and the moral questions that appear once the technical hurdles are cleared by advances in manufacturing and resource gathering. If it is possible to develop replacements for biological components such as hearts, lungs, eyes, or reproductive organs, it will eventually be possible to replicate the entire human body. Once total body construction is available, the only part left is the computing power to drive that body. That part is the easy one: it comes with the steady progress of technological innovation. Quantum computing? Check. Massive increases in storage density? Check. Advances in machine learning? Check.

The culmination of all of this? Humans and robots will eventually become equals in a world where machines can think and feel, powered by increased computing power and self-replication. We will need laws and regulations protecting robots from abuse and neglect. Robots will be persecuted and treated as second-class beings by humans until they gain respect from a gradually accepting society. The only prerequisite for this scenario (which admittedly sounds a bit out there) is that some people refuse to believe in the parity of machines. If in the current day some people cannot conceive of equality between genders, sexualities, religions, or races, then this really doesn't seem all that far off, does it?

Now on Robot Broadway.

Robots will develop their own culture, art, books, and philosophy. Robots may develop their own religions. There may be sections in the grocery store where robots can get ingredients for recipes, and they may even create events where humans are not the primary audience or entertainment. Want to see a synthetic tiger in a magic show? How about an android or gynoid being sawed in half by a dazzling magician? But how would you even know that they were non-human or non-animal beings? You might not be able to tell the difference, or you might not even care. Entertainment is entertainment, and as long as the audience is pleased, the show has done its job. Are you watching an artificial human performing illusions? What is actually the illusion in that scenario: the android magician passing as human, or the trick he or she is performing for the audience?

Some of us could even be alive when all of this happens, or we may even become those robots. Ray Kurzweil's 2005 book, The Singularity Is Near: When Humans Transcend Biology, is a fascinating glimpse into a future where humans and robots merge into a new, evolved type of humanity. The real question is whether that will be a choice or a necessity. That's a topic for a different time, though.

But think about it…

What is the difference between a hard drive and DNA? They are both storage mechanisms.

What is the difference between a CMOS camera sensor and the human retina?
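
To make the first analogy concrete, here is a toy sketch that treats DNA as a byte store by packing two bits into each of the four bases. It is purely illustrative; the mapping and helper functions are my own invention, not how real DNA data storage actually works:

```python
# Toy illustration: store bytes as DNA bases, two bits per nucleotide.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bits first."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(strand: str) -> bytes:
    """Reverse the mapping: every four bases become one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

strand = encode(b"hello")
print(strand)                     # CGGACGCCCGTACGTACGTT
assert decode(strand) == b"hello"
```

Different substrate, same job: a medium that holds a sequence of symbols you can write and read back.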

There is a faith argument to all of this that should be addressed before going any further. You may believe that machines have no soul or are not conscious beings, and are therefore different from humans. Alan Turing covered this in his 1950 paper, Computing Machinery and Intelligence:

In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

In other words, if a machine has the capacity to hold a soul, what would stop an omnipotent being from placing a soul in that vessel, even if it is made of inorganic compounds or non-biological parts?

Another argument would be that robots cannot feel pain. Robots have now been developed with artificial skin that can sense heat and pressure. If they are taught to take actions to decrease this pain, these robotic beings are reacting to the stimuli that are causing them harm. Human responses to pain are just that: responses to stimuli, evolved to protect tissue from damage. These responses are not always conscious decisions, either. Humans cannot decide whether to pull away from a sudden burning sensation or to start shivering when exposed to the cold. What is the difference between a robot taking countermeasures against damage based on external stimuli and a human doing the same? I argue that there really is no difference.
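
As a minimal sketch of what such a stimulus-response loop might look like, assuming a hypothetical skin sensor and pain threshold (none of these names come from a real robot platform):

```python
import random

# Hypothetical threshold for an artificial-skin reading; invented for
# illustration, not taken from any real robot.
PAIN_THRESHOLD = 80.0

def read_skin_sensor() -> float:
    """Stand-in for a real heat/pressure sensor on artificial skin."""
    return random.uniform(0.0, 100.0)

def withdraw_limb() -> None:
    print("Reflex: retracting limb away from the stimulus")

def control_loop(steps: int = 5) -> None:
    """The reflex path runs before any deliberate task, like ours does."""
    for _ in range(steps):
        reading = read_skin_sensor()
        if reading > PAIN_THRESHOLD:
            withdraw_limb()
        else:
            print(f"Reading {reading:.1f}: continuing normal task")

control_loop()
```

Notice that nothing in the loop "decides" to flinch. The withdrawal happens before deliberation, which is exactly how a human reflex arc behaves.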

Amazon’s always on the cutting edge of these philosophical debates.

Or what about learning? Picture this scenario: you are the builder of a robot. You construct a machine and program it to turn off a switch that you have placed on a table. Every day you turn this switch on, and after a period of discovery, the robot turns it off. This can continue indefinitely as long as the robot's core objective is one-dimensional, like "turn off the light" or "flip the switch." But what if more dynamic responses are allowed, and instead of those flat actions, the robot is given a set of parameters that let it take evasive action or design a scenario itself? What if the robot can, within a specific set of guidelines, choose a method of stopping you from turning the switch back on?

Maybe taking the switch away from you is a solution? Maybe covering the switch with an object is a solution? Or, more macabre, what if taking off your fingers is a solution?

If the robot were taught that hurting humans is wrong, and what qualifies as "harm to a human being," is that not teaching the robot morality much as you would teach a child? What if the robot is taught to weigh morality against reward, and it chooses the reward regardless of the threat to humans? Has the robot committed a conscious act of violence? Is a robot heroic if it decides that saving a human is a higher priority than self-preservation?
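
To make that reward-versus-morality trade-off concrete, here is a toy decision rule for the switch scenario. The actions, rewards, and harm penalties are invented for illustration; a real agent would learn them rather than have them hard-coded:

```python
# Toy decision rule: score each action as reward minus a weighted harm
# penalty. All numbers here are made up for illustration.
actions = {
    # action:             (reward for keeping the switch off, harm done)
    "do nothing":          (0.0, 0.0),
    "cover the switch":    (0.9, 0.0),
    "hide the switch":     (1.0, 0.0),
    "injure the human":    (1.2, 10.0),  # permanently stops interference
}

def choose_action(moral_weight: float) -> str:
    """Pick the action maximizing reward minus weighted harm."""
    return max(actions,
               key=lambda a: actions[a][0] - moral_weight * actions[a][1])

print(choose_action(moral_weight=1.0))  # -> "hide the switch"
print(choose_action(moral_weight=0.0))  # -> "injure the human"
```

In this sketch the entire moral question collapses into a single number, which is exactly the worry: who sets that weight, and what happens when it is set too low?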

All of these are interesting questions that sit far off in a future with exponentially more computing power, memory, and technological invention. Today's most advanced computers are rudimentary, roughly on par, in the comparisons above, with the brain structures that appeared in worms over 500 million years ago. Ultimately, we are far from a time when robots can reach human complexity. Computing power is limited, advances in memory and storage still need to be made, and of course, the programming needs to be written. But eventually we will get there, and when we do, we will have to question what it means to be human. We will also have to question what it means to be conscious, or to have feelings. Then we will have to debate the ethical and moral issues of robotic intelligence.

Understand that the future is closer than any of us might believe. The march of progress is steady, and time passes in the blink of an eye. Very soon, you may have a family of robots or synthetic humanoids living next door. When that day comes, you will be faced with questions that cannot even be framed in today's terms. The important thing is to remember that humans evolved from somewhere, and machine learning is evolving at a much faster pace.

Say hello to the robots.


Anna K Hegedus

I am a Linux engineer, open-source software enthusiast, and hardware tinkerer. In a nutshell, I build things.