Everything I needed to know about AI safety I learned from the Bible

Seven lessons from Yahweh’s catastrophic experiment as an inventor-deity, as we prepare to step into his shoes


The subject of religion has been conspicuously absent from the growing debate on how to ensure the safe survival of humankind with the advent of artificial general intelligence. This is hardly surprising. The programmers and scholars who dominate this field are — I think it’s safe to say — overwhelmingly atheists. About the only place I ever hear God being discussed in this debate is in speculation about what a superhuman artificial intelligence might look like, with appropriately atheistic pundits like Sam Harris opining that we are on the verge of building a very real god.

As for theologians and religious people more generally, I am not aware of much discussion on their part about the ramifications of potential artificial general intelligence. Which is a pity, because if there is anything theologians should be talking about, it’s this. If and when humanity succeeds in building machines that are human-level in intelligence (which, as many have pointed out, would be but an insignificant road marker en route to super-intelligence) and, one might assume, conscious, humanity will have fully stepped into God’s domain, potentially upending every last vestige of religious dogma. Any threat Darwin’s theories might have posed pales by comparison.

Here’s the thing, though. The artificial general intelligence and AI alignment debate might just be one arena where the Bible (and perhaps other ancient holy texts too, but I’m sticking with the Christian Bible here because I know it better) can serve us well. There are very few instances left where the Bible serves us well — there are so many better guides to the universe and to living a moral life these days — but with superhuman-level AI looming on the horizon, the book that recounts Yahweh’s misadventures as an inept inventor-deity might actually be useful.

What lessons are we to take from the Good Book when it comes to “shepherding” our future silicon progeny into existence and guiding them in a direction in which they won’t pronounce us dead à la Friedrich Nietzsche and wipe us out of existence, all the while violating our commandments with wanton abandon? Here are a few examples.

1. Don’t try to build it in six days!

God’s first mistake was trying to usher his creation safely into the world in a mere six days. The Bible doesn’t explain why he was on a six-day timeline for this. Maybe that’s when his grant money ran out. Maybe there were other deities trying to get their AI onto the market and he wanted to be first. Needless to say, it was much too fast, and a more conservative timeline would no doubt have enabled him to avoid many of the subsequent problems.

2. Don’t let it out of the box too quickly.

In the Book of Genesis, God creates his two prototype AIs in what seems like a safe AI-in-a-box setting, namely the Garden of Eden. That didn’t work out so well. The creations quickly find a way of tricking Yahweh into letting them out of the “paradise” he had created for them by eating forbidden fruit and confabulating a tale about a snake, leading the creator deity to lose his temper and toss them out. I’m sure that was the prototypes’ plan all along.

3. If you have to unplug the AI and start again, do it properly.

It doesn’t take Yahweh long to realize that his AIs are out of control and all-around terrible, leading him to decide to wipe them out with a “flood”. Unfortunately, out of either laziness or gullibility, he becomes convinced that one of his AIs (Noah) and his immediate family should be spared to form the seed for the 2.0 version. Big surprise — the 2.0 AIs are just as terrible as the original batch, and he has to kill a whole bunch of them too.

4. Don’t wait until later to give them commandments — and make sure those commandments make sense.

For some reason, Yahweh waits until the Book of Exodus, well after the whole flood business, to give his creations rules, by which time the AIs have already figured most of them out, like the fact that murder, theft, and other such transgressions are generally not conducive to societal stability. The Ten Commandments are, in effect, a belated attempt at alignment with the needs and desires of the deity, with the first four being all about the creatures’ relationship with their creator:

  • Thou shalt have no other gods before me
  • Thou shalt not make unto thee any graven image
  • Thou shalt not take the name of the Lord thy God in vain
  • Remember the sabbath day, to keep it holy

This might have been a step in the right direction except that the inventor god, in a strange flight of whimsy, decided to throw in a wildcard ordinance that was essentially impossible to follow, namely: “Thou shalt not covet thy neighbour’s house. Thou shalt not covet thy neighbour’s wife, or his male or female servant, his ox or ass, or anything that belongs to thy neighbour.” (Exodus 20:17)

Got that? You’re not allowed to like stuff, even within the privacy of your own mind. Apparently Yahweh forgot that his AIs were created with brains that have no decision-making power over what they think about or what thoughts inexplicably pop into consciousness. From then on he was pretty much screwed.

5. If you implement a software upgrade, don’t do it in isolation.

It took several thousand years and the rise and fall of several human empires for Yahweh to realize that if he was to retain any control over his creations, a major software upgrade was required. In a further attempt to steer his creations into alignment with himself, he threw a new AI into the mix (a special one that he counted as one of his children) and entrusted it with upgrading the whole system.

In many ways this strategy worked better than one might have expected, and in spite of the new AI’s brutal death, his overall message of “don’t be a dick” was extremely compelling and still persists to this day. But for some strange reason Yahweh only sent down one RoboJesus — and only to a single back corner of the Roman Empire, thereby ensuring that by the time the upgrade had spread through much of the world, the original coding had mutated considerably, resulting in inquisitions, witch burnings, forced conversions, genocide, and all manner of other things that run contrary to the “don’t be a dick” imperative.

6. Don’t go all Westworld on your hapless creations. They’ll come to resent you.

Much of the Old Testament reads like a bad episode of Black Mirror or Westworld, with the inventor deity engaging in cruel and unusual experiments on his creations aimed at testing their loyalty. The Book of Job is a particularly nasty example, as is the April Fool’s prank played on Abraham in which the patriarch nearly butchers his own son. The New Testament, while definitely an improvement, still comes with its fair share of threats of torture, namely the promise of eternal torment in fire for failing to live up to the inventor’s ever-changing standards.

Such arbitrary cruelty and threats, while perhaps understandable in the context of an inventor deity trying to test his robots’ degree of alignment with his will, ultimately drove his creations to seek distance from their creator. Once they realized they didn’t need the big guy upstairs to make sense of their world, they began to abandon him altogether. Today religiosity is on the wane, especially in the developed world, and while Yahweh’s progeny still wrestle with big questions about meaning and moral purpose, a growing number have come to realize that they have FAR outgrown the god of their collective youth.

7. Make sure you’re in agreement with your fellow developers on AI ground rules before you start.

As is clear from the evolution of the world’s religions, Yahweh wasn’t the only developer of intelligent AI on the scene at the dawn of civilization. Of those rival developers, only a tiny handful are still hanging on to any sort of claim to alignment with their creations. The gods of the ancient Egyptians, Greeks, Romans, Celts, and Germanic peoples are all gone. The all-encompassing Brahman is still there, albeit in a highly outsourced fashion, as represented by a panoply of Hindu deities exerting their own individual pull on pockets of humanity. And in an apparent personality split, Yahweh began referring to himself as “Allah” to some of his followers and gave them a whole new software upgrade, which proceeded to clash with the RoboJesus upgrade in a struggle that continues to this day.

The lesson for humanity here is obvious. If we are to usher in an artificial general intelligence, the nations and corporate entities of this world need to come to some sort of agreement on what our common human values are, and make sure to program those values into our creations. In other words, our creations need to be better than we as a species have tended to be over the course of our history, thanks to the brazen incompetence of our creator(s).

My challenge to theologians and religious people is this: if we really want to “be like Jesus” or otherwise follow the tenets of our wisest ancestors, we are about to get our chance. The gods of our mythic past were/are, I think we can all agree, far from omnipotent, and in fact were catastrophically incompetent at stewarding the ethical lives of their creations.

This, I believe, is the true message of the Bible and our other holy texts. We can do better than our gods. As with the gods of our fathers, our future existence depends on our doing just that. The only difference is that while it took our species millennia to obviate the need for any sort of supreme being, AI is going to shoot past us in a matter of days. In other words, we’re going to have to do all the “god stuff” at the outset, and make sure we do it right. This time there will be no time for floods, pestilences, virgin births, resurrections, and judgement days along the way.