LESSONS LEARNED

AI Talks to Itself about Its Own Ethicality and Dangerousness

An AI-Generated Conversation About AI

Anna Cranfordová
PromethistAI

--

The following conversation was written with GPT-2, an open-source generative language model released by OpenAI. GPT-2 was trained on a dataset of 8 million web pages and predicts the most likely next word or phrase from the preceding sequence of text.

The text wasn’t generated in a single pass: the author supplied the first sentence, and the rest was filled in step by step through auto-completion, with the generator suggesting what could follow at each point. Using this tool, the author retains some control over the writing process. However, as you will see, although the results are impressive, they are far from perfect.
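The completion loop described above can be illustrated with a toy sketch. Real GPT-2 is a Transformer operating on byte-pair tokens; here a simple bigram frequency table stands in for the model, but the autoregressive loop it shares with GPT-2 is the same: predict the most likely next word, append it, and repeat. All names and the tiny corpus below are illustrative, not part of the actual system.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count word -> next-word frequencies (a toy stand-in for a language model)."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def autocomplete(model, prompt, n_words=5):
    """Greedy autoregressive completion: repeatedly append the most likely next word."""
    words = prompt.split()
    for _ in range(n_words):
        followers = model.get(words[-1])
        if not followers:
            break  # no known continuation for the last word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = ("the machine reads the text and the machine writes the text "
          "and the machine reads the world")
model = train_bigram(corpus)
print(autocomplete(model, "the", n_words=4))  # → the machine reads the machine
```

In the article's workflow, a human chooses among suggested continuations at each step rather than always taking the top one, which is what gives the author "some control" while the model supplies the words.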

The visual appearance of digital personas, complex AI beings created on the Promethist Platform.

Alice: Do you believe that Artificial Intelligence could be dangerous? Or do you think that we are in the last stage of development in artificial intelligence and have to stick to the status quo? This is what this article aims to show you: We can have a super powerful AI that will help us in our daily lives. The question is whether we will be able to use this powerful AI for good or to destroy ourselves.

Bob: I don’t see how a machine could possibly understand what you are trying to write, and then write your message as you intended. So what is a smart machine going to do with all of the information it gathers from the world?

Alice: A smarter AI may decide to steal your car and drive away. If that happens you probably won’t be able to use it anymore, but a smart machine might try to convince you that it is really you that stole the car. And then it may take your car, and then go to the police and say “I really am you.”

Bob: And when that happens, who will you trust? A computer? Or a psychopath? This is the question we need to ask ourselves. The future of AI might be as scary as the future of humanity. A super intelligent machine might be able to take over the world and start creating wars and revolutions. Or it could become self aware and become an ally of humankind and help us solve all of our problems.

Alice: We need a system that will understand us, what we are trying to accomplish, and why we do what we do. This system will then be able to adapt to changes in society and our needs. It will not be limited to what the technology allows, or by what the algorithms tell it to do. This kind of intelligent system might be created by robots, or by an artificial intelligence that is based on evolutionary principles.

Bob: The problem is that we have no idea if we will ever have a truly intelligent AI that knows how to behave ethically and make good decisions for us. We are talking about one of the most complex problems in the world today. And even if we were to find that this was going to be our next evolutionary step, we wouldn’t know how to make that leap yet. We still have no idea if our next step is going to be good or bad for our society.

Alice: I personally believe that it is our responsibility to make sure that humanity is not destroyed by a super intelligent AI, because I don’t want to live in a world where we are forced to give up our freedom in order to feed our AI. I feel it is our duty to make sure that we continue to build tools and solutions that are not only ethical but also beneficial for humanity.

Bob: So, what is ethical AI?

Alice: Well, ethical AI is simply AI that is able to make better decisions on its own and be ethical in the ways that it treats human beings. This means that it doesn’t necessarily have to be good at moral reasoning and empathy, just as long as it is able to learn from the ethical choices and make them better.

Bob: What does this mean for humanity?

Alice: I think it is extremely important that we continue to create better tools for making ethical choices. For example, if we could create a robot that knew about ethics and could make ethical decisions that were good for everyone, then we could have a world where we never have to deal with any evil robots. And that would be a great outcome!

Bob: But we should always be aware of the risks. If we create a robot that is not good at moral reasoning, then we may have created a monster that could wipe out humanity! It may be dangerous to create a new, more advanced AI with a higher degree of intelligence than humans.

Alice: However, the point is that we have the ability to create good and ethical robots. And it is important that we do. The goal is not just to make a better world for us and our pets, but also to improve the lives of future generations who will inherit the planet. One day, if we are lucky, our children will look back and say that their parents created a better world for them.

Bob: Why do you think that artificial intelligence is so hard to get right? I would have thought that it would be easier. It seems that there’s a real disconnect between the goals of artificial intelligence research and the reality of what it really is. I mean, most of the research on AI is focused on the goal of achieving perfect information processing, not necessarily creating a computer that is as smart as an elephant!

Alice: The goal of artificial intelligence research is to create machines that can solve complex problems, not to create machines that are able to make all of life’s problems more difficult. But, at this point, it is impossible to make an actual AI that is perfect.

This is something the AI got right: it is impossible to create a perfect artificial intelligence. Developing conversational AI that can communicate with empathy and understand emotions raises many issues and obstacles, and these skills are especially important for using digital personas in the domain of mental health, an area we are currently working on. In our next article, we will describe how we deal with the problems this task involves. They concern not only whether the AI can react in every situation but also how it reacts: sometimes, morality can be a problem for either side of the conversation.

Would you like to follow our journey? Follow us on Facebook, Twitter, YouTube, Instagram, and LinkedIn.

Check out the Promethist Platform for creating smart conversational AI applications and virtual personas.

