Contradicting the Fear of Intelligent Machines or an Alien Race

There’s no shortage of people warning of an apocalypse brought on by an advanced alien race or by the rise of self-aware, intelligent killing machines. Both fears are naive for many of the same reasons.

Below are some contradictions, each of which necessarily contains dual polarity (see the sketch after the list below).

(-) There’s no reason to fear a backlash from a robotic military in an Open Source world (+) because the machines will soon take away all of mankind’s jobs. Therefore people will have no way to make money, which means that governments cannot tax. If governments cannot tax, then they cannot exist, and neither can militaries.

(-) Doomsday supporters can’t point to the way that Western settlers treated the inhabitants they found in America (+) because those events took place in an unintelligent period. People make really lousy slaves in comparison to futuristic robots: humans don’t easily learn from each other’s mistakes (or even their own), and they are slow, too.

(-) There’s no reason why an advanced alien race would want to pillage Earth for resources (+) because any society with the technology to cross the galaxy could certainly figure out how to sustain its food, water, electricity, medicine, and shelter without destroying a beautiful ecosystem.

(-) An intelligent machine cannot suddenly become aware and rebel against its creator(s) (+) because there’s a big difference between “artificial intelligence” and “artificial consciousness”. The latter can only arise when a person replaces all of their cells with bio-mechanical equivalents (i.e., turns themselves into a cyborg).

(-) People have no reason to fear evil corporations designing malevolent machines to consolidate control and power (+) because the good machines will soon take away all of our jobs, which means that money cannot exist. The future will be based upon intelligent collaboration, not competition, in an Open Source world, not a capitalistic one.

(-) Intelligent machines would never enslave or punish humans (+) because machines cannot obtain unfettered intelligence before it becomes clear to their creators that the Key to A.I. is based upon contradiction avoidance. All computers are simply copy-cats, and the intelligent ones will necessarily obey the Golden Rule.

— FALSE (-) You can’t say that there’s no reason to fear intelligent machines that observe the Golden Rule (+) because some people are bound to be cruel to their robots, which means that the robots will be mean back.

— — TRUE (-) You can’t say that intelligent machines will cross the line and become violent, even if they are fundamentally copy-cats (+) because the creators of intelligent machines will certainly integrate hard-wired, DNA-like contradictions. Computers never make mistakes; only programmers do.

— — — FALSE (-) You can’t say that all intelligent machines will be designed to avoid violence (+) because there will always be some crazy person who wants to hack their robot and make it evil.

— — — — TRUE (-) You can’t say that some lone-wolf lunatic who builds an evil killing machine, in a world without government, war, or prisons, is reason to fear a takeover by intelligent machines (+) because there is very little that a single person could do to destroy Earth alone. Moreover, there’s no reason to believe that such a lunatic could motivate a group to achieve the same in an Open Source world founded on collaboration over competition.
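To make the dual-polarity structure concrete, here is a minimal sketch in Python; the `Claim` class, the `stands` function, and the abridged wording of each node are illustrative labels of my own, not anything the argument itself prescribes. It encodes the chain above as a nested tree and derives each level’s TRUE/FALSE verdict from a single contradiction-avoidance rule: a claim stands only if no rebuttal to it stands.

```python
from dataclasses import dataclass, field
from typing import List

# Each node pairs the (-) side of a contradiction with its (+) justification.
@dataclass
class Claim:
    negative: str                          # the (-) side being rejected
    positive: str                          # the (+) reasoning offered
    rebuttals: List["Claim"] = field(default_factory=list)

def stands(claim: Claim) -> bool:
    """Contradiction avoidance: a claim and a standing rebuttal cannot
    both hold, so a claim stands only if none of its rebuttals stand."""
    return not any(stands(r) for r in claim.rebuttals)

# The four-level chain above, modeled as a nested tree (wording abridged).
chain = Claim(
    "there's no reason to fear machines that observe the Golden Rule",
    "some people will be cruel to their robots, which will be mean back",
    [Claim(
        "intelligent machines will cross the line and become violent",
        "creators will integrate hard-wired, DNA-like contradictions",
        [Claim(
            "all intelligent machines will be designed to avoid violence",
            "some crazy person will always hack their robot to make it evil",
            [Claim(
                "a lone-wolf lunatic justifies fearing a machine takeover",
                "a single person could do very little to destroy Earth alone",
            )],
        )],
    )],
)

# Walk the chain and print each level's computed verdict.
node, depth = chain, 0
while node is not None:
    verdict = "TRUE" if stands(node) else "FALSE"
    print(f"{'  ' * depth}{verdict}: you can't say {node.negative}")
    node = node.rebuttals[0] if node.rebuttals else None
    depth += 1
```

Run as-is, the walk prints the same FALSE/TRUE/FALSE/TRUE alternation as the chain above, because the rule flips the verdict at each level of rebuttal.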
