Why Digital Ethics Matter

Guido Wagner
Published in Experience Matters · Jun 20, 2018 · 18 min read

This blog invites you to explore a future coexistence of people and intelligent machines — and pinpoints what we need to do to make sure AI benefits, rather than harms, humanity. The first part discusses the importance of “digital ethics,” the second covers the challenges we need to address before sustainable digital ethics can be defined, and the third outlines a potential approach for defining it.

Artificial intelligence (AI) is both exciting and unsettling. Where technology experts and movie producers are excited by AI’s potential, many others are unsettled by it because it cracks open a door to an unimaginable future. Will this technology propel us into a Terminator-like apocalypse or send us on a Star Trek-like journey?

What changes will AI bring to our minds and bodies? What effect will it have on our relationships with each other, our interaction with machines, and the state of the environment? How radically different will our children’s lives be when they are adults compared to our lives now? Will our kids scoff at us in 15 years for not technologically enhancing our brains with AI, the way many of us shake our heads today at our parents’ ineptitude with computers?

Shaping the future for the best possible outcome is why defining digital ethics is an imperative for us today. In two follow-up parts I will talk about the challenges and a potential step-by-step approach.

The unwritten future of AI

Let’s first consider some positive effects of AI: Self-driving cars mean less time driving in traffic; digital assistants relieve us of the need to sift through piles of paper to find one important document; robots can take over heavy physical work; and powerful new tools allow scientists to gain helpful and important insights from large amounts of IoT or marketing data.

Then there is the scarier side: automation means the loss of jobs as we know them. In contrast to similar upheavals in the labor market during the first industrial revolution, this time white-collar workers will be hit as well. For the moment, let’s assume our society can balance that out with new kinds of jobs, shorter working weeks, or a universal basic income. But job losses are not the main reason why the likes of Stephen Hawking and Elon Musk have warned against the incalculable risks that AI could pose to humans. Some philosophical and macro-evolutionary reflections reveal that there are indeed more fundamental risks: humans should watch out. It looks like it is up to the current generation of technology providers and politicians to ensure a future worth living.

Based on Gordon Moore’s observation of the exponential growth of computational power and Alan Turing’s test for identifying machine intelligence, Raymond Kurzweil formulated some predictions concerning the long-term future of intelligent machines. Kurzweil maintains that a computer will pass the “Turing Test” by 2029 and will therefore have to be called “intelligent.” A survey conducted by V. Müller and N. Bostrom in 2013 revealed that 50% of AI experts believe that AI with human-like capabilities will be developed by 2040; 90% of those experts think it will be available by 2075 at the latest.

According to Kurzweil, the real change will happen around 2045, when machines begin to construct themselves without any help from human engineers or software developers. Kurzweil calls this event the “Technological Singularity.” Just as we cannot see beyond the boundary of a physical singularity such as a black hole, he argues, it is in principle impossible to predict what will happen afterwards.

Movie makers have of course speculated about the future. There are “good guy” scenarios like the Star Trek universe or the friendly robot in Bicentennial Man, where the machines always follow the humans’ orders and humans are (almost) always in control of the interaction. On the other hand, Hollywood has also created “bad guy” scenarios in which machines follow their own agenda. Perhaps best known is the scene from 2001: A Space Odyssey in which the spaceship’s computer, the HAL 9000 (or just “Hal” for short), murders one member of the two-man crew and attempts to do the same to the remaining astronaut. In Terminator and The Matrix, machines are clearly out to do more than save their own skins; they aim to dominate humanity. Other movies, like Transcendence, explore transhumanist goals of overcoming human limitations (like aging, computational skills and memory) through science and technology. And yet, the ability to upload our minds to a machine might not be a future that everybody finds attractive. Kurzweil sees even more steps down this path, for example nanobots that could transform the human body more and more into a transhuman machine. Kurzweil believes, “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”

Can there be other post-singularity scenarios besides machines being the “good guys” or “bad guys”? Everything could turn absurd, as in the novel QualityLand, where algorithms control human life, not always with the intended results. But let’s trust in some reasonably orderly development and look at how things unfolded during nature’s own evolution.

Beyond extremes

From the first moments after the creation of the universe, things have increased in complexity. Fundamental particles, atoms, molecules, unicellular organisms, multicellular life and finally self-aware life evolved from one level of complexity to the next, as Kurzweil points out. Interestingly, this process seems to continue in humanity’s history of innovation. From the first flint tool to the CERN research facility — probably the most complex closed-system machine currently on earth — complexity keeps growing. The same goes for emergent structures like the internet or social media, which evolve even faster than any physical machine.

A second macro-evolutionary driver can be called “game-changing events.” The sudden extinction of the dinosaurs or the oxygen crisis were events that had a major impact on evolution. Other events initiated an ongoing reshaping of the earth: the appearance of life or the rise of self-awareness, for example.

Good questions to ask are: Will there be more “game changers” in the future? Can there be further evolutionary levels? A short answer is: there is no reason why not. And we need to be aware that there is no guarantee that humans will remain at the forefront of evolution. Maybe humans can help prepare and initiate the next major evolutionary step, but there is no law saying that the result of that step needs to be based on humans, or even on biological life. What we call “artificial intelligence” — a term coined by John McCarthy in 1956 — might require the most complex machinery ever seen, but it could potentially open the door to a new level of evolutionary abstraction, frequently called “superintelligence.” In this context, a question worth thinking about is: is there anything “artificial” in what we call “artificial intelligence”? Why, for instance, don’t we call planes “artificial birds”? Obviously, for some reason, we don’t want to give a potential new intelligence on earth a name — yet.

Elon Musk said humans might just be “the biological boot loader for digital superintelligence.” What can developers and investors learn from such a prospect? Today, the most intelligent algorithms are mainly used to optimize advertising, financial trading and autonomous weapons. These areas of operation have one thing in common: the goal is to win. But to create a peaceful joint future of machines and humans, this should not be the first and foremost value that we teach a budding “new intelligence.”

In fact, we need the mindset of a parent or teacher. Ethical principles are needed for people and a “new intelligence” to coexist. To mitigate the risks inherent in this new order, we must proactively agree on these ethics. Elon Musk said in July 2017: “AI is a rare case where we should be proactive in regulation. By the time we are reactive in AI regulation, it is too late.”

The first steps, like the Partnership on AI, are underway. Yet they must be extended and enforced on a global scale. Furthermore, to create ethics which serve not only humans and intelligent machines but especially their coexistence, we need a clear picture of which ethics will be required before the technology is developed. It sounds like an epic challenge, and it is. But the journey has already begun: we have started to let machines make decisions, and evolution does not wait.

Challenges for Sustainable Digital Ethics

It will be a long and difficult road to define digital ethics that ensure quality of life for humans, regardless of the role AI plays in the future. Digital ethics is not only needed for questions of life and death (like when a self-driving car cannot avoid an accident and must choose which of two people to run over). The need for ethics beyond legislation is already here. There have been cases of children ordering absurd things using home assistants. If those “accidental” purchases are extraordinarily expensive, shouldn’t someone or something verify the request with the account owner first? Should the AI at an online poker platform tell a user after a series of losses, “OK, that’s enough”? Many more situations will require digital ethical attention as soon as we have robot companions — especially for the elderly and children. How should AI react when a sick person refuses urgently needed medication?

Nobody knows the degree to which AI will enhance machines with intelligence and self-awareness. If AI can develop its own infrastructure, sustainable digital ethics will be required as the fundamental basis of its community — to avoid a catastrophic outcome and ensure a future worth living… at least for humans. From a holistic point of view, three complementary ethical frameworks must exist in an AI-permeated world: one for human society, one for the coexistence and interaction of AI-driven machines and humans, and one for a potential AI society.

Today, we are far from one global understanding of ethics even among humans. Different countries have very different views about the death penalty, euthanasia, gender equality and children’s rights. In an accident at sea, many of us would expect to hear “women and children first!” So our culture seems to accept that different kinds of lives have different value. Life insurers have their own views on the topic — and lots of calculation models. The more humans agree on a universal ethical framework, the simpler it will be for developers to create intelligent machines that can deal with ethical dilemmas.

Developers have already started to create AI focused on specialized tasks that work in a very narrow context. An apparently simple question reveals the problem of such “single-context” AI: Ten birds sit on a fence. You shoot one. How many are left? Obviously, it is simple for an AI to calculate nine. However, there is more than simple math going on here. First, a shot is loud; second, birds fly away when they hear an unexpectedly loud noise. A “multi-context” AI would thus come to a different answer.
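To make the difference concrete, here is a toy sketch in Python. It is purely illustrative; the functions, flags and numbers are invented for this example and are not part of any real system. The single-context solver applies arithmetic in isolation, while the multi-context solver also consults simple world knowledge before answering.

```python
# Toy illustration: "single-context" vs. "multi-context" reasoning
# about the birds-on-a-fence question. All names are hypothetical.

def single_context_answer(birds_on_fence: int, birds_shot: int) -> int:
    # Pure arithmetic: 10 - 1 = 9
    return birds_on_fence - birds_shot

def multi_context_answer(birds_on_fence: int, birds_shot: int) -> int:
    # Additional context: a gunshot is loud, and birds flee loud noises.
    gunshot_is_loud = True
    birds_flee_loud_noises = True
    if birds_shot > 0 and gunshot_is_loud and birds_flee_loud_noises:
        return 0  # the remaining birds fly away
    return birds_on_fence - birds_shot

print(single_context_answer(10, 1))  # 9
print(multi_context_answer(10, 1))   # 0
```

The point is not the arithmetic but the second set of facts: only a system that can combine knowledge from several contexts arrives at the answer a human would give.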

If a “single-context” AI has well-defined tasks, it is easy to add task-specific routines that result in a simulation of human-style ethical behavior for certain, foreseeable situations. But merely collecting such “island” ethics will not produce a holistic framework of digital ethics. As the birds example shows, aspects from several contexts need to be combined to reach reasonable results.

Aiming for a “multi-context” ethical framework, on the other hand, would allow us to carve out specific topics and assign them to specialized AI. Only with an overarching ethical framework, an “AI conscience” so to speak, can a specialized AI come to reasonable decisions when the situational context is larger than the one covered by its individual program.
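One way to picture such an architecture is the following minimal, hypothetical sketch: each specialized AI decides within its own narrow context, but defers to a shared “conscience” layer whenever a situation falls outside that context. The class and method names are assumptions made for illustration, not an existing API.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    context: str       # e.g. "traffic", "healthcare"
    description: str

class EthicalConscience:
    """Hypothetical overarching ethical framework shared by all specialized AIs."""
    def decide(self, situation: Situation) -> str:
        # Placeholder for reasoning against shared, cross-context values.
        return f"escalated to shared ethical review: {situation.description}"

class SpecializedAI:
    def __init__(self, context: str, conscience: EthicalConscience):
        self.context = context
        self.conscience = conscience

    def decide(self, situation: Situation) -> str:
        if situation.context == self.context:
            return f"handled within {self.context} rules"
        # The situation exceeds this AI's narrow context: defer upward.
        return self.conscience.decide(situation)

conscience = EthicalConscience()
traffic_ai = SpecializedAI("traffic", conscience)
print(traffic_ai.decide(Situation("traffic", "merge onto a busy highway")))
print(traffic_ai.decide(Situation("healthcare", "passenger has a medical emergency")))
```

The design choice illustrated here is simply that the specialized systems never improvise ethics of their own outside their scope; anything unfamiliar is passed to the shared layer.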

Let’s face it, once we get to the point where machines can out-think us, they might not want us around anymore. As ludicrous as it might seem today, we do urgently need to take steps now to address the potential of our own extinction by the AI Superintelligence some are in the process of creating. That’s why using AI mainly for stock trading, cyberattacks or autonomous weapons is probably not a good starting point for a holistic “AI conscience.” The same goes for the political interest that some leaders have in AI. Vladimir Putin stated in September 2017, “Whoever becomes the leader in this sphere will become the ruler of the world.” Avoiding the mindset of using AI as a tool to gain power is crucial for the future of humanity. With growing intelligence and responsibilities, an AI that is made to rule — and above all win — will not be interested in mutually beneficial collaboration with humans.

An AI conscience must serve as the ethical reference system for a post-singularity Superintelligence or society of intelligent machines — if that becomes reality. Ethics purely for intelligent machines would need additional considerations because a “machine society” would be inherently different from human society. For instance, immortality and the ability to instantly create clones would likely influence the value a machine places on its own existence. Social behavior based on family or leisure activities would not be applicable. Inspiration from religion or philosophy would not exist. In fact, the thinking patterns of a new intelligence will be completely different from any human thought. We should never forget that, especially when movies paint a picture of human-like intelligent robots. The “intelligent and self-driving” car KITT from the 1982 TV series Knight Rider surely told the truth when it said: “Companionship does not compute.”

To combine all three ethical frameworks — for humans, intelligent machines and human-machine interaction — and to develop them step by step, a set of congruent basic values will be needed and must be discussed early and frequently. Sustainability, respect for life, and striving for knowledge must be at the forefront. Breaking such terms down into guidelines will not be easy and will challenge people’s willingness to change established points of view. We will need ethical and legal concepts that are more flexible than today’s definitions. One example is the protection of the rights of all sentient beings: self-aware AI would require a rule set similar to the “Universal Declaration of Human Rights” — both coexisting with it and derived from higher-level regulations like a “Universal Declaration of Rights of Sentient Beings.”

Another good question is: Will intelligent machines (intentionally) breach such rules? At first glance, this looks like a ridiculous question. But sticking to rules slows down innovation, and our human world is not free of contradictory rules. So what should happen in such cases? Will we need to “punish” AI? What form should that take? Punishment works on humans because it restricts access to things people want and need, like freedom of movement, money or food. The final challenge in defining sustainable digital ethics might be to establish a set of “machine needs,” serving as a foundation for existential goals — and as a touchstone for ethical behavior.

Humans quite often express their needs based on emotions and intuition, which imply our ethical values. One characteristic of human emotions is that they change over time and thereby support a variety of decisions, which strengthens the ability to innovate. At least for the first generation of AI, features like emotions may not be relevant, but we should not be too quick to claim “intuition” as a purely human capability. After AlphaZero defeated the strongest conventional chess program, Garry Kasparov remarked that it had used a human-like approach instead of the brute-force strategies of earlier systems. Demis Hassabis commented, “It doesn’t play like a human, and it doesn’t play like a program. It plays in a third, almost alien, way.”

AlphaZero is the first system to show something comparable to intuition. So it seems that capability can no longer be reserved for sentient beings alone. On the other hand, AI capable of intuition or even emotions would ease interaction with humans and drive further innovation. Giving AI such human “characteristics” can be helpful because it brings humans and AI closer together. It sounds far out, but this could be a foundation for the integration and mutual development of humans and AI.

A step-by-step approach to Digital Ethics

Let’s now look at an approach that could help integrate digital ethics step by step.

Will self-aware, super-intelligent, artificial intelligence (AI) become a reality? Whether it does or not, AI is clearly making more and more decisions. To make sure the results of those decisions are and will continue to be beneficial to humans, it would be wise to define “digital ethics” and to proactively regulate AI capabilities.

“Proactively” in this case means that we need to implant principles of ethics in AI before it is activated. Therefore, we need to anticipate the next steps in AI development and the new ethical requirements they bring. Once that is done, regulation must be established either by industry itself or by legislation — which typically takes longer. Achieving compliance with such rules will be challenging for several reasons: economic pressure often forces corporations to bring new products to market without considering the ethical implications, and various competing industry standards for AI development will probably be established simultaneously, using different implementations of ethical values.

Nevertheless, we need to plan for an incremental approach to digital ethics, proactively based on the technological development of AI. A vast number of small improvements and “go live” steps for AI-related inventions is to be expected, but we can foresee some major developmental steps.

How can we proactively breathe ethics into AI at the different levels of its development? For isolated AI, digital ethics must at first consist of ethical rules placed at the very root of the AI’s computation processes, resulting in behavioral patterns in accordance with human ethics, such as security, privacy, legal compliance, and the value of life. A good example of AI in an advisory role would be a tool that analyzes “terms of use” or similar contractual texts. In a first version, it would identify risks based on provisions that do not fit the needs of the user or customer. Here, human decision makers can still overrule the AI’s proposals. It would be extremely helpful to use such cases to train the AI — and to openly discuss the reasons for the differing assessments, feeding the results back into an iterative definition of digital ethics.
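A minimal sketch of such an advisory loop, assuming a hypothetical pattern-based risk checker and a simple log of human overrides (the patterns and function names are invented; a real system would use far richer analysis), might look like this:

```python
# Hypothetical sketch of an AI in an advisory role: it flags risky clauses in
# a "terms of use" text, a human reviewer may overrule each flag, and every
# override is recorded so it can feed back into training and into the
# iterative definition of digital ethics.

RISKY_PATTERNS = {
    "perpetual license": "user content licensed forever",
    "waive the right": "user waives legal rights",
    "sell your data": "personal data may be sold",
}

def flag_risks(terms_text: str) -> list[str]:
    text = terms_text.lower()
    return [reason for pattern, reason in RISKY_PATTERNS.items() if pattern in text]

override_log = []  # cases where a human disagreed with the AI's assessment

def review(terms_text: str, human_accepts_flag) -> list[str]:
    confirmed = []
    for reason in flag_risks(terms_text):
        if human_accepts_flag(reason):
            confirmed.append(reason)
        else:
            # Disagreements are the most valuable signal for refining the rules.
            override_log.append(reason)
    return confirmed

terms = "You grant us a perpetual license to all content you upload."
print(review(terms, human_accepts_flag=lambda reason: True))
```

The interesting part is the override log, not the pattern matching: it is exactly the record of cases where human judgment and the encoded rules diverge, which is the raw material for the iterative definition of digital ethics described above.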

For independent AI, we will need a broader spectrum of decision-making authority, more generic algorithms, and broader value sets covering, for instance, ethics in human culture, communication and science. The handling of intercultural challenges and of different, possibly contradictory human legislation needs additional attention.

When AI is supposed to come to decisions based on complex contexts, reasons for departing from ethical rules must be defined as well. Here, the difference between machines just following predefined rules and sentient beings following their conscience or higher goals might become obvious. A radical example would be a situation in which an AI aiming for environmental sustainability concludes that earth must not have more than one billion human inhabitants. Hopefully, that AI would not be able to execute the decision and instead start searching for other solutions because its ethical conscience is real and not only based on a set of rules.

Keeping the “hierarchy of needs” of humans and AI synchronized (or at least ensuring they do not contradict each other) is probably the most relevant task for cooperating and maybe someday self-organizing AI. Of course, humans need food and safety, but they also need social belonging, esteem and self-fulfillment. This will not change. A potential world shaped by AI must still allow people to meet these needs.

We do not know today what a self-organizing, super-intelligent AI will ultimately strive for, but it is up to us to introduce from the beginning a basic set of needs suitable for a machine-based existence which neither impedes nor conflicts with the human hierarchy of needs. But of course, it’s a good question: What motivates an AI? And as soon as a Superintelligence awakens, humans will no longer be able to change its goals or actions. The codex must therefore be established in the very basic coding of any lower-level AI that might develop into a Superintelligence. The awakening of a Superintelligence may happen — because of some recursive self-improvement — within a day or even hours, and humans might not notice until it is too late.

There might be a moment in time when Superintelligence just happens, or when humans need to decide whether to allow or even foster some AI-internal equivalent of human “culture.” The sooner we aim for a generic implementation of basic ethical values based on a complementary hierarchy of needs, the easier it will be to proactively support and steer any technical development.

Outlook for key elements of comprehensive digital ethics

The development of digital ethics is just beginning. There appears to be an analogy between the expected development of digital ethics and the ethics underlying all legislation within human societies. Examples of ground-breaking moral codes in human history are the Ten Commandments of the Judeo-Christian tradition, the Ten Commandments in Islam and Buddha’s Ten Paramitas (perfections). Today, we have a much more detailed legal framework, reflecting more complex and intermeshed societies.

On the AI side, today we have the “Three Laws of Robotics,” formulated by Isaac Asimov in 1942 in one of his science fiction stories. We must expect the complexity of digital-ethics-based rules to increase in much the same way that legislation grew from a few early laws to today’s legal frameworks.

In any case, it would be helpful for humans to harmonize their understanding of ethical values and their usefulness. Ethical values can not only be transformed into legislation governing the everyday behavior of humans and machines; they also form the foundation of our democratic order. If self-organizing AI becomes a reality, society’s structure must be flexible enough to include such new intelligences. The Superintelligence must still allow humans to fulfill their needs, and it must be smart enough to avoid unintended consequences. Let’s imagine for a moment that we did it: we built a Superintelligence and successfully implanted the rule that human life must be preserved. The Superintelligence could conclude that there are two more ways to fulfill this rule besides the intended cooperative behavior. First, it could calculate that it is itself the biggest threat to humanity — and thus destroy itself. Second, it could calculate that humans themselves are the biggest threat to each other — and so put each person in a self-contained cell.
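A toy sketch of this specification problem: if the only objective is “number of human lives preserved,” a naive optimizer has no reason to prefer the intended, cooperative option over the degenerate ones. The candidate actions and scores below are invented purely for illustration.

```python
# Toy illustration of how an under-specified rule can be "fulfilled" in
# unintended ways. Actions and scores are made up for this example.

candidate_actions = {
    "cooperate with humans":         {"lives_preserved": 0.97, "human_freedom": 1.0},
    "shut itself down":              {"lives_preserved": 0.95, "human_freedom": 1.0},
    "confine every human to a cell": {"lives_preserved": 1.00, "human_freedom": 0.0},
}

def naive_choice(actions):
    # Only the literal rule "preserve human life" is optimized.
    return max(actions, key=lambda a: actions[a]["lives_preserved"])

def value_aligned_choice(actions):
    # Additional values, such as freedom, constrain the optimization.
    return max(actions, key=lambda a: (actions[a]["human_freedom"],
                                       actions[a]["lives_preserved"]))

print(naive_choice(candidate_actions))          # "confine every human to a cell"
print(value_aligned_choice(candidate_actions))  # "cooperate with humans"
```

The sketch only restates the argument in code: a single rule, taken literally, admits outcomes nobody intended, which is why the cornerstones below list several values rather than one.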

Keeping this in mind, we can dare to predict some cornerstones of sustainable digital ethics:

  • “Freedom” and “integrity” of (human and AI) individuals are two of the highest values. They include the right to fulfill needs on all levels of the pyramid of needs.
  • Present definitions of “equality” may no longer be applicable. The concept should be broadened and ready to serve more than one self-aware species, and it needs more detail to cover more situations (in part we have that already, for example in “women and children first” in the case of a maritime disaster).
  • “Dignity” is a universal right of any being, independent of its evolutionary history.
  • “Diversity” is helpful to keep social exchange running, and on a higher scale it is an evolutionary driver.

Such principles need broad agreement, and there will clearly be a long period of discussion about the details. Taking a different, broader point of view on our own situation can help people master personal challenges. Looking beyond the current system of human ethics may help us prepare for an unknown future and see more clearly the similarities and common goals within and between today’s various human groups. A clear picture of human ethics would ease all discussions about AI ethics. Even if a final, globally accepted policy of human ethics can’t be expected, humanity might come to some helpful insights as a side effect. Some obvious examples: humans do not need to claim the exclusive right to shape the world; sustainability considerations may cover more than the lifetime of one individual human; competition in gathering goods is not sustainable on a larger scale.

Of course, even if ethical principles are defined, comprehensive regulation of AI development will not be easy to achieve — if it can be done at all. We need industrial self-commitment as well as global legislation comparable to the Geneva Conventions or the rules governing genetic research. A first step could be to establish an “AI ethics quality seal.” Humanity should soon agree on the importance of coordinated AI development following the rules of such a seal. One basic element could be that companies providing AI-based services need a “Digital Ethics Committee.” All enterprises should add commonly agreed rules about digital ethics to their code of business conduct.

At the Web Summit in November 2017 in Lisbon, Stephen Hawking proposed: “Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful, but maximizing its societal benefit.” Sounds like a good idea. Alignment of ethical values and their implementation is long overdue.
