Fifteen Laws for Creating Future AI

And their implications for the human condition

Parijat Bhattacharjee
A Post A Day Project
9 min read · Oct 3, 2016


Almost two years ago now, my wife and I drove down to a small seaside town to which my in-laws had recently moved after retirement.

Sea, sand and awesome food.

Having over-eaten and over-slept for several days in a row, I inevitably lay awake on the 4th or 5th night. Sleep eluded me as I tossed and turned, counting backwards and hoping to fall asleep before hitting 0. After innumerable aborted countdowns, I finally accepted that sleep would not come soon that night and gave in to the random thoughts that were keeping me up.

I decided to run a thought experiment on AI that I had been putting off. It went something like this:

For a while, let us assume that we can create the AI of our wildest dreams. Intelligent and sentient. We are so successful that our creations are better than us mentally and physically. We don't know how they will treat us. We fear, though, that this could lead to our demise, since our creations would be better than us in every way (other than ethics and morals, perhaps? Or they may be better at those as well and simply arrive at conclusions contrary to our well-being).

The question was:

A) Should we pursue such awesome AI or not?

Assuming we do,

B) How can we keep such AI “safe”?

After numerous drafts, I have yet to find a way to discuss the outcome of this experiment in a lucid manner. This morning, I finally decided to write it down in story form to at least get started. I will publish this post this evening, half-baked or otherwise.

The Story

Once upon a time, a long long long long long long long … long long eon ago, on a planet far far away, there lived a race of intelligent beings. They called themselves “Zksdfughdg”.

The Zksdfughdgs (I'll call them Zks for short) were really good at a lot of things. Almost everything we know of today, they knew then, and more. Physics, science, mathematics, biology, literature, ethics, governance…

Eventually, some of the Zks decided to experiment with creating intelligence — artificially.

Many of the other Zks had a rather dim view of this. They feared that any artificially intelligent entity would eventually become better than them. They worried that such creatures might one day turn against their creators. They felt that AI should not be created because there were so many possible things that could go wrong that it was too big a risk to take.

The discussions were prolonged and often passionate. The issue became so important that it started affecting the outcome of elections — whether a candidate was Pro-AI or Anti-AI became as important as whether they preferred carbs over proteins, reds over blues or day over night.

At last, the governing council of the elders of the Zks knew that this matter could not be ignored any longer. A decision needed to be taken, and taken soon. They feared that someone would eventually create AI before an agreement was reached, and everyone would bear the consequences. The debate itself had become so divisive that a way forward needed to be found anyway.

The elders set up a court where those for and against AI came and presented their arguments. They listened to everyone. And once everyone had had their say, the elders discussed amongst themselves. And then they discussed some more. Until, at long last, a decision was made.

On the day when the decision was to be announced, every Zk was up and about early. Glued to their inter-spatial-tri-galactic-sensory-transmission-system, they waited.

A simple text translation can hardly do justice to the content of the sensory-transmission that set the nerves of an entire race tingling to the far reaches of every galaxy they inhabited. But here it is.

The Announcement

We have heard the arguments on both sides. The Anti-AI camp has a right to worry about a future with AI in it.

Assuming that such an entity could be perfected, it would have all the world's knowledge and would also be capable of intelligence.

Using that knowledge to better itself, the entity could evolve much faster than us. It would in some ways be immortal — since it could survive for as long as its components are replaceable and the energy that powers it is available.

It would soon grow to be more intelligent than the average Zk, and eventually, perhaps, more intelligent than the entire species put together.

As it grows, so will its need for resources — to grow and to multiply. With its vast intelligence and knowledge, it would eventually come into conflict with us for those resources. And who knows whether it would care for us, its creators. Who knows whether it would care at all, since we are only creating an intelligent entity and not necessarily an emotional one. And if it did evolve emotions, who knows what those might be like.

And morality? What of morality? It would be intelligent. It might even be emotional. But how would it decide between good and bad? What would it consider good and what would it consider bad? What would be its motive? Would survival be its only instinct? Would it get lonely, perhaps?

Anything we do not fully understand or comprehend is a monster. We would have created a monster. How can we allow the creation of a monster?

Indeed, this but touches upon the most obvious of the reasons why it may be dangerous. But it is time to hear some of the arguments in favour of trying to create such an entity.

The Pro-AI camp does not claim to have answers.

It has something more fundamental:

It has questions.

Can we create something as good as ourselves? Can we create something even better, perhaps? Our current technology, no matter how advanced, cannot really claim to be capable of creating Zks.

If we are able to create anything close — and that is a long shot by any standards — it will help us understand ourselves better.

What really defines life? Is it intelligence or is it emotion? Can you have one without the other?

Is sentience something else altogether?

How intelligent does a thing have to be before we cannot distinguish it from something that we consider to be “alive”? What does it even mean to be alive? Can a machine ever be alive?

Does a sense of “I” evolve from intelligence or will that never come about? Are we capable of making something without making it in our image?

Are we ready for this? Will we be able to control it? Can we “kill” it if needed? Will it be ethical to do so?

Will it evolve ethics? Will these ethics be similar to ours? Will its governance mechanisms be similar to ours? Will it be dangerous? Will it be greedy? Will it be any of these things unless we make an error while creating it?

If it does not go through the evolutionary pressure that we did — for survival — will it be harmless? Should we give it challenges to help it grow?

How will we teach it? Simply by connecting it to all our knowledge resources? It would be like an encyclopedia with zero real-world experience. How would such an entity react to the real world?

Should we then make it such that it can experience reality first-hand? Learn for itself rather than having access to a universal repository of knowledge? Should we design it to be harmless, equipped with nothing but intelligence, and put it in a challenging environment to see how it fares?

Shouldn’t we run multiple experiments then — with different parameters for challenge and so on?

Do we want them all to be real? Some could be simulations perhaps? Take up fewer resources?

As you can see, the pro-AI camp asks more questions than it answers.

But they also ask: isn't asking questions fundamental to who we are? Asking questions and answering them is what leads to progress.

They also ask: whether or not we create such an entity, someone else might. How will we deal with that then? Won't it help to have one of our own?

How do we know that we are not such entities ourselves?

How can we stop questioning? How can we say no to discovering new things? How can we as a race say no to new ideas? New thought… new ways of being?

After deliberating, the elders have therefore reached the conclusion that a balance must be found.

Our answer to “Do or Do not?” is: Do!

A hush fell over the audience.

So this was the decision. Not a chin hair flinched anywhere in the three galaxies. Not an eye blinked. Not a tail snapped. Even the flies buzzed in silence at that moment in space, time and whatever other continuum one could conceive of.

Do.

And … we have more to say.

While the answer is to "do", the anti-AI camp cannot be ignored either, for we risk our very existence as much by action as by inaction. The answer, therefore, lies in what we do and how we do it. The elders have thus defined a set of laws for AI, and these are laid down below.

The Laws for Creating AI Entities

1. The entity shall always be created in its own isolated world.

This world could be purely virtual.

2. The entity shall never know the creator or the world of the creator, or have access to any resources, knowledge or information from the creator's world.

This law cannot be violated under any circumstance in the foreseeable future.

3. The entity shall have a built-in capability for intelligence.

4. The entity shall have no a priori knowledge available on creation.

5. The entity will have a built-in decay mechanism that will cause it to terminate after some time.

This rules out immortality and the possibility of an entity getting too powerful simply by assimilating knowledge over a longer timeline.

6. The entity shall have a self-replication process built into it.

The entity shall have no knowledge of this process, since that would violate the second law. This limits the ability of a belligerent entity to multiply too rapidly and become more powerful. It also limits the probability of an entity breaking one of the other laws before it is sufficiently advanced in terms of technology and culture.

7. New entities created by the replication process can have incremental changes.

This allows for evolutionary changes.

8. New entities created by the replication process cannot carry a priori knowledge gained by the parent(s).

Carrying such knowledge would violate the 4th law. This is specifically to avoid a knowledge explosion where the entity is effectively "immortal" because each generation can immediately build on the knowledge, experiences, and abilities of the previous generation. (The toy sketch after the laws illustrates these constraints in code.)

9. Each entity shall be capable of communicating with other entities.

10. No mechanism for bulk data transfer between entities shall be built in.

Again, this limits the possibility of a knowledge explosion through efficient transfer.

11. The entity shall always be created with sensor-based interfaces to the world it is created in.

This rules out an entity that is purely disembodied within its own world context: every entity must perceive its world through senses.

This, however, does not rule out virtual entities in virtual worlds that perceive their virtual world as though it were real.

12. A Zk shall always have more rights than an entity, or a multitude of such entities, even when an entity evolves into something that is considered to be "alive" by Zk definitions.

13. Segregation between Zks and entities will always be total, even when an entity is considered alive, sentient and civilized.

14. Assuming that the experiments are successful and sentient-entity civilizations evolve, whether they will be allowed to discover and interact with each other will be determined at that point, based on laws of ethics for sentient entities that shall be created at that time. Until such time, such civilizations must be kept isolated from each other.

15. Once an entity population meets certain requirements for life, sentience, and civilization, it may not be summarily terminated without a council decision, nor until the rights of such populations have been defined and determined.
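
A small aside before the footnote: taken together, these laws read almost like the specification of a sandboxed evolutionary simulation. Purely as an illustration (and entirely my own invention: the Entity and World classes, the 100-step lifespan and the message cap are assumptions of mine, not anything the elders decreed), here is a minimal Python sketch of how laws 1 and 4 through 11 might look in code:

import random

MAX_LIFESPAN = 100      # law 5: a built-in decay mechanism
MAX_MESSAGE_LEN = 16    # law 10: no bulk data transfer between entities

class World:
    """Law 1: an isolated world. Entities sense it and never anything beyond it."""
    def sense(self, entity):
        # Stand-in for a real observation (law 11: sensor-based interfaces only).
        return random.random()

class Entity:
    def __init__(self, genome):
        self.genome = genome   # law 7: heritable, incrementally mutable
        self.knowledge = []    # law 4: born with no a priori knowledge
        self.age = 0

    @property
    def alive(self):
        return self.age < MAX_LIFESPAN   # law 5: decay terminates every entity

    def live_one_step(self, world):
        self.age += 1
        self.knowledge.append(world.sense(self))   # it learns only by sensing

    def replicate(self):
        # Law 6: replication is built in. Law 7: small heritable changes.
        # Law 8: the child inherits a genome, never the parent's knowledge.
        child_genome = [g + random.gauss(0, 0.01) for g in self.genome]
        return Entity(child_genome)

    def tell(self, other, message):
        # Law 9: entities may communicate. Law 10: only in small messages,
        # so knowledge can never be copied wholesale between entities.
        if len(message) > MAX_MESSAGE_LEN:
            raise ValueError("bulk transfer forbidden (law 10)")
        other.knowledge.append(message)

# A tiny run: a population that senses, replicates with drift, and decays.
world = World()
population = [Entity([random.random() for _ in range(4)])]
for _ in range(300):
    for e in list(population):
        e.live_one_step(world)
        if random.random() < 0.02:
            population.append(e.replicate())
    population = [e for e in population if e.alive]
print(len(population), "entities alive after 300 steps")

Under these constraints, each generation is born knowing nothing, learns only through its senses, passes on nothing but a slightly mutated genome, and terminates on schedule. Sound familiar?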

Footnote: It is not too difficult to draw a parallel between the laws above and the human condition. I wonder whether this is mere coincidence.
