AI: How I Learned to Stop Worrying and Love the Eventual Domination of Our Synthetic Overlords

Jason Hitzert
Jul 20, 2017

The tech billionaires and the scientists have everyone really spooked about the future. Interviews with both Elon Musk and Stephen Hawking make the impending doom of a world dominated by super-intelligent machines seem worse than the nightmare scenarios of every dystopian sci-fi story I’ve ever read. But does there really have to be so much doom and gloom? I’m not so sure.

I think in many ways the ancient world provides a few answers for us, or at least some small examples, especially of how one might order a society where the exchange of goods and services isn’t the cornerstone of day-to-day life. At the height of the empire, tens of thousands of people in Rome had no job and depended either on the dole or on networking with power brokers and patrons, who provided monetary support in exchange for loyalty, for delivering the mob to certain events, and for jeering opponents. All this because, while the produce of empire could sustain the populace of Rome, there wasn’t much to keep the plebs busy, since skilled and educated slaves occupied most of the important jobs. If we can produce the food to keep people alive, there will be ways for people to earn and reasons to pay them. What people forget is that the structure of our society and economy is largely a useful fiction that we all come together and agree to. We’ll just need to come up with a new narrative and abide by it.

My favorite part of the Musk article is the mention of how the billionaire bristles at regulation but grudgingly concedes it is important. The writers say this without mentioning that Musk’s fortune was built on the backbone of regulation: the payments model that was merged into PayPal, his rockets, his cars, and especially the Hyperloop. Not only the policy frameworks they dovetail with, but also the patents, copyrights, and trademarks that protect the value he has built from his and his employees’ intellectual property. Regulation and law are the computer code this whole game is written in, and they can change and adapt to new conditions. He is right that we need to start getting this sorted out, but he is sloppy in the way he understands the challenge, and his blind spots regarding policy and human nature lead him to miss some important ways in which we may all come to like the AI-enhanced future.

There are so many weird suppositions among the scientists about AI being like a psychopath that it makes me think they’re talking more about some of their own neuroses. AI could just become bored with us and check out, never to be heard from again. Unless the machines are sadistic and wish to see organisms in pain and existential terror, for which there is only cinematic evidence, why so paranoid? There is a theory in neuroscience, developing for a few decades now, that quantum particle vibrations within the brain’s cellular microtubules might prove to be the real hurdle that needs to be crossed to effectively mimic the sentient mind. I remember reading the theory in the journal Sciences in the mid-90s; I’ve linked to an update on that theory here.

Let’s tease this out and think about the Fermi Paradox, which seems to suggest a limit on the advancement of intelligence: a roughly linear relationship between the Big Bang, or whatever event kicks off the evolution of life, and the eventual appearance of intelligent life forms like you and me. Advanced life elsewhere in the universe may simply take as long as it has here on Earth to develop beings with about the same level of advancement as us. Once the, for lack of a better term, singularity has been crossed and the time it takes to develop technology is no longer an impediment, we may see it spread throughout the universe within some relatively short period. If the law of large numbers creates predictability, then I think this is a sound assumption, or at least a sound guess.

So back to the scientists. When you consider some of the doomsday scenarios Hawking and others come up with, I think you see where they’re limited by an over-determined set of assumptions based on what is essentially a mechanistic understanding of humanity. The conquistadors were brutal mass murderers, so all explorers must be: that seems to be the basis for much of the assumption about how extraterrestrials and super-intelligent AI might act. We eat lower forms of life, so why wouldn’t AI have some kind of crass disregard for us, goes another piece of the paranoid whole. But as we develop away from superstition and use more pure reason to build our ethical framework, we move away from such wanton acts, partly because they’re mean and partly because they are boring. Since we have no evidence to the contrary, why not build that into our expectations?

As bad as people can be to one another, we have dogs, and most of us treat our dogs (and cats) better than we do other humans. So I think we should be looking at other, less paranoid metaphors for advanced artificial life, given the evidence around us showing that, while things can get sideways here and there, we have also seen plenty to look forward to. So I’m banking on a future where whimsical things have value and we’re left to a world of pure enjoyment and the pursuit of things we find interesting and delightful. It’s just as likely that I’m right, and for me it is a shitload more satisfying. Just a thought.


Jason Hitzert
Legislative staffer and former business owner.
