On ethics and humanizing intelligent machines

Jim Burrows
Personified Systems
May 9, 2016 · 13 min read

Back in early March, Jeff Hawkins of Numenta wrote an article for Re/code debunking the idea of “the Singularity” and the notion that AI is humanity’s greatest existential threat. Numenta and Hawkins are working on the bleeding edge of both the scientific understanding of the workings of the human neocortex and the engineering of its algorithms into practical machine intelligence.

He dismisses these worries as being based upon what he sees as three major popular misunderstandings regarding the nature of machine intelligence. Unsurprisingly, given that Hawkins has been thinking about and working on the science and engineering of intelligence for decades, I recommend both his article and the thinking behind it. There is nothing about the nature of machine intelligence and the work that is being done in that field that makes “the Singularity” or the “threat of AI” at all necessary. Yes, we could create systems that were dangerous to us if we tried, but on the one hand we don’t need to, and on the other, there’s no reason that such dangerous systems would need to be intelligent. Still, I think that one of his points deserves further examination.

In “The Terminator Is Not Coming. The Future Will Thank Us”, Hawkins lists three major misconceptions — that artificial intelligences:

  • Will be capable of self-replication, or might attain that ability.
  • Will be like humans and have human-like desires.
  • Once they are smarter than humans, will lead to an intelligence explosion.

It is the second of these that I want to examine in a bit more detail. He writes:

Intelligent machines will be based on models of the neocortex, not the rest of the brain. It is the flexibility to learn almost anything that we want in an intelligent machine, not the ability to survive and reproduce in the wild. Therefore intelligent machines will not be anything like a human, or any other animal.

Some people might try to build machines with human-like desires and emotions. Whether this is even possible and whether we should allow it are open questions. Today, nobody knows how to build such a machine, and to try would require a huge effort, one that is independent of building intelligent machines. Neocortical-based machine intelligence will come first.

His point, that there is no reason that machine intelligences would want to harm us, or want anything at all, is a valid one, but I think we must be very careful in assuming that it is desirable or safe for machine intelligences to be entirely unlike humans. Specifically, I think that the need for machine ethics is unavoidable.

Applying neocortical algorithms to a problem space may be sufficient to solve problems within that space. Hawkins’ visions of “intelligent machines that directly sense and think about proteins, or tirelessly explore the human genome to discover the foundations of disease” are quite appealing, and can no doubt be implemented safely using machines that are entirely inhuman in their capabilities. However, while such abstract learning machines may be able to understand certain problem domains and solve certain classes of problems, judgment, reliable ethical judgment, requires more.

“Embodied AI” image created with HeroMachine

Wherever we turn, AI is in the news. Search engines, voice and image recognition systems, driverless cars, thermostats and the Internet of Things, the game of Go, robots and drones have all felt the impact of AI and ML. The trend shows no sign of slowing — quite the contrary. Automated systems are being trusted with more and more of our lives and information. They are rapidly turning data about us into information and even knowledge and understanding (see my blog post, “A Matter of Semantics: Data, Information and Knowledge”, for the distinctions I’m drawing here).

Autonomous systems are beginning to drive trucks and cars on our highways. Drones are becoming more and more intelligent, even those carrying weapons. Robots are beginning to aid in the care of the elderly and infirm. Each of these activities involves decision making and matters of life and death. Decisions involving confidential information, like those directly affecting human life and well-being, must also be made in accordance with some form of ethical principles. Powerful AIs must, in the end, exhibit ethical judgment, and judgment requires understanding as well as reasoning.

“Reasoning requires Understanding” from Monica’s Mind

AI researcher Monica Anderson has a very good explanation of the concepts of “reason” and “understanding”, the relationship between them, and their roles in classical 20th century AI and the machine-learning-based AI of the 21st century. In “Monica’s Mind”, she writes:

Reasoning is a conscious, goal-directed and Logic-based step-by-step process that takes seconds to years. In contrast, Understanding is a subconscious, aggregating, Intuition-based and virtually instantaneous recognition of objects, agents, and concepts.

We call this process “Intuition” because that is the word traditionally used for insights that appear from our opaque subconscious without us being able to retrace any reasoning steps to reach that insight. But note that there is nothing mystical about Intuition. It is a straightforward process of recalling past experiences and matching them to the current situation. If you see a chair, your Intuition tells you it’s a chair by matching your sensory input patterns to those of past occasions when you have seen chairs. You don’t have to reason about it.

Our Intuition also allows us to understand relationships, complex patterns, trends, abstractions, contexts, meaning (semantics), and the big picture. With little effort we recognize the image of a chair, the flavor of an apple, the meaning of a sentence, or the face of a friend.
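To make Anderson’s recall-and-match account concrete, here is a toy sketch of Understanding as pattern matching: a nearest-neighbor lookup that labels a new sensory pattern by its closest remembered experience. This is my own illustration of the idea, assuming nothing about Anderson’s actual system or any Numenta code; the class and its methods are hypothetical.

```python
import numpy as np

# Toy "Intuition": recall past experiences and match the current input
# against them. (An illustrative nearest-neighbor sketch only -- not
# Anderson's system, HTM, or any production code.)
class ToyIntuition:
    def __init__(self):
        self.memories = []  # list of (pattern, label) pairs

    def experience(self, pattern, label):
        """Store a past sensory pattern along with what it turned out to be."""
        self.memories.append((np.asarray(pattern, dtype=float), label))

    def recognize(self, pattern):
        """Label a new pattern by its closest remembered experience."""
        pattern = np.asarray(pattern, dtype=float)
        best_label, best_dist = None, float("inf")
        for memory, label in self.memories:
            dist = np.linalg.norm(memory - pattern)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

intuition = ToyIntuition()
intuition.experience([1.0, 0.9, 0.1], "chair")
intuition.experience([0.1, 0.2, 1.0], "apple")
print(intuition.recognize([0.9, 1.0, 0.2]))  # -> "chair", with no reasoning steps
```

Nothing step-by-step happens in recognize(); the answer simply falls out of matching against stored experience, which is the sense in which Intuition is “virtually instantaneous”.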

But, one might legitimately ask, is the “Understanding” that Anderson uses as a technical term (thus the capitalization) to describe the product of “Artificial Intuition” really “understanding” in the sense that we use it in human contexts? Philosophically, it is hard for us to be certain, but there are two lines of reasoning that suggest that it is at least analogous.

The first is to be found in the similarity of the Machine Learning mechanisms used to produce this Understanding to the systems at work in the human brain. This is one of the reasons that Hawkins’ and Numenta’s emphasis on being “biologically constrained” and not just “biologically inspired” is so important. By doing detailed research into the workings of the human brain, and constraining software to act only in ways that are known to resemble the actual workings of the brain, they make it that much more likely that the behavior of artificial systems will parallel human cognition as closely as possible.

The second is based upon the subjective and introspective judgment of skilled individuals whose cognitive skills are being replicated. One of the most interesting aspects of the recent match between the AlphaGo ML system and world champion Go player Lee Sedol was the discussions that occurred between the AlphaGo developers and professional and amateur Go players. Michael Redmond, 9-dan professional Go player and professional Go commentator, has spent most of his life both perfecting his own skills as a Go player and analyzing and explaining the behavior of professional players. As such, his discussions with both the developers and the amateur Go player with whom he co-hosted the game coverage are of particular interest when evaluating the import of the match.

Michael Redmond explaining AlphaGo and human skills

Two insights stood out for me in that coverage. The first was when one of the developers joined Redmond in the pre-show for one of the games to correct him on the use of the term “database” in reference to AlphaGo’s “memory” of the many millions of games that it was trained on. There are so many possible board positions in Go (on the order of 3³⁶¹, far more than the number of atoms in the observable universe) that no database could store them all, and there would be no practical way of indexing or accessing such a database.
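The scale is easy to verify with a few lines of arithmetic. The 3³⁶¹ figure counts every way of marking each of the 361 points of a 19×19 board as black, white, or empty; it overcounts legal positions, but the comparison with the roughly 10⁸⁰ atoms in the observable universe still holds. A quick check:

```python
import math

# 3**361: every assignment of black / white / empty to the 361 points of a
# 19x19 board. This overcounts legal positions, but it shows the scale.
positions = 3 ** 361
atoms_in_observable_universe = 10 ** 80  # common rough estimate

print(f"3^361 is about 10^{math.log10(positions):.0f}")  # ~10^172
print(positions > atoms_in_observable_universe)          # True, by ~92 orders of magnitude
```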

As the developer explained the Deep Learning and Reinforcement Learning techniques used to train AlphaGo, Redmond reworded what he understood the developer to be saying. What AlphaGo does, he summarized, is form “opinions” as to what a professional player would do in a quite possibly unique situation that it observes, based on the experience, intuition, and understanding it acquired from both observing and playing the game. Going back and forth, each using his own technical jargon and common usage, the two basically agreed that the way AlphaGo plays is very similar to the way the best human players exercise their skill.
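As a crude picture of what “forming an opinion” means computationally, consider the sketch below: a learned policy maps a board position to a probability distribution over moves, favoring the moves a strong player would consider. This is a toy stand-in of my own, with made-up weights; AlphaGo’s actual design combines deep policy and value networks, trained on expert games and self-play, with Monte Carlo tree search.

```python
import numpy as np

# Toy "policy": given a board encoding, produce a probability for each
# possible move -- an "opinion" about what a strong player might do here.
# (Illustrative only: the weights are random stand-ins for what training
# would normally learn, and the real AlphaGo uses deep networks plus search.)
rng = np.random.default_rng(0)
BOARD_POINTS = 361   # one feature per point of a 19x19 board
NUM_MOVES = 362      # every point plus "pass"

weights = rng.normal(scale=0.01, size=(BOARD_POINTS, NUM_MOVES))

def move_opinions(board_encoding):
    """Return a probability distribution over moves for this position."""
    logits = board_encoding @ weights
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

board = rng.integers(-1, 2, size=BOARD_POINTS).astype(float)  # -1 white, 0 empty, +1 black
probs = move_opinions(board)
print("most favored move:", int(probs.argmax()), "with p =", round(float(probs.max()), 4))
```

The point of the sketch is only that such a system never enumerates positions from a database; it responds to a position it may never have seen, based on what it has learned.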

The second insight occurred during Game 3 of the match. Redmond tried to explain his professional intuition regarding various board positions this way:

When I was young, I used to think that intuition was sort of like some God-given inspiration that came out of nowhere. But nowadays I’ve come to think that actually it’s a memory or experience that we have, and it’s just that we cannot really define it. It’s something that you could call a half-memory, actually.

It’s something that we know, but we don’t really know why we know it. So we have a feeling about a position, and it’s pretty automatic, but we can’t really define all the reading or the pattern recognition that goes into it, because we don’t have a clear memory, a conscious memory. I think we have a subconscious memory of all the variations that we studied (or played) in decades, that we “sort-of-remember”, actually.
— Michael Redmond (see this video)

While Redmond is not a specialist in cognitive psychology, neuroscience or AI, his introspection on the workings of intuition matches both Anderson’s explanation of “Intuition” above and the memory, recognition and prediction models inherent in Hawkins’ Hierarchical Temporal Memory model.

We can’t be certain that the path that current Machine Learning and AI research is on will lead to true understanding on the part of artificial systems, but it certainly seems possible. Logical reasoning atop that understanding seems quite doable. Still, making decisions that are in keeping with human ethics requires not merely understanding in general, but an understanding of humanity: of people, society, and human needs.

With the advent of driver assist features, and the promise of truly autonomous driverless cars on the horizon, the “Trolley problem” and its variants are becoming a hot topic. How should autonomous cars make tradeoffs in accident avoidance situations? To make that kind of decision means knowing what humans are, recognizing them, and understanding that they are valuable individually as well as collectively, and that they act on their own, not always predictably. Those concepts, in turn, require understanding that there is an objective physical world comprising objects, objects that may be similar but are not the same, objects that move within the world, that causes have effects, and so on.
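To see why even a toy version of this choice presupposes such a world model, here is a deliberately oversimplified sketch (my own illustration, not a proposal for how such tradeoffs should actually be made): each candidate maneuver is scored against the predicted outcomes for the people and objects the car has recognized.

```python
from dataclasses import dataclass

# Deliberately oversimplified accident-avoidance tradeoff. The scoring rule
# is a toy; the point is that any such rule presupposes a world model --
# recognized people and objects, plus predictions about how they will move.
@dataclass
class PredictedOutcome:
    maneuver: str
    people_at_risk: int     # occupants and pedestrians the model predicts are endangered
    property_damage: float  # rough cost estimate, arbitrary units

def least_harmful(outcomes):
    """Prefer fewer people at risk; break ties on property damage."""
    return min(outcomes, key=lambda o: (o.people_at_risk, o.property_damage))

options = [
    PredictedOutcome("brake hard in lane", people_at_risk=1, property_damage=5.0),
    PredictedOutcome("swerve onto shoulder", people_at_risk=0, property_damage=20.0),
    PredictedOutcome("maintain course", people_at_risk=2, property_damage=0.0),
]
print(least_harmful(options).maneuver)  # -> "swerve onto shoulder"
```

Every field in that little structure, who counts as a person, how many are at risk, what a maneuver will actually do, depends on exactly the kind of understanding of the physical and human world described above.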

Another broad area where AI and robots are showing up already is assisting medical practitioners and caring for the elderly and infirm. These jobs bring with them the requirement to adhere to medical ethics. The generally accepted “Four Principles” of medical and bioethics are:

  • The Principle of respect for autonomy.
  • The Principle of nonmaleficence.
  • The Principle of beneficence.
  • The Principle of justice.

These are not simple concepts to understand without direct experience and a solid, functioning world model that includes people, their volition, cause and effect, and so forth.

Finally, there is the specter of autonomous military weapons systems. At present, the Department of Defense has a ban on fully autonomous weapons, and requires a “human in the loop”. There are, however, both AI researchers and ordinary citizens who advocate fully autonomous military systems. For example, AI researcher Selmer Bringsjord wrote an opinion piece in his local paper, the Troy Record, entitled “Only a Technology Triad Can Tame Terror”, in which he wrote that in order to combat terrorism:

Our engineers must be given the resources to produce the perfected marriage of a trio: pervasive, all-seeing sensors; automated reasoners; and autonomous, lethal robots. In short, we need small machines that can see and hear in every corner; machines smart enough to understand and reason over the raw data that these sensing machines perceive; and machines able to instantly and infallibly fire autonomously on the strength of what the reasoning implies.

Even if one does not agree with Bringsjord, it is extremely likely that military equipment will become increasingly automated and endowed with artificial intelligence, and as it does, the military’s mission will shape the ethical decisions those systems must make. If cars become routinely driverless and face collision avoidance decisions, it is likely that some such cars will be used militarily or in hostile environments. If that is the case, the need to distinguish categories such as “friend”, “foe”, and “bystander” would seem to influence the reasoning applied to their version of the Trolley Problem. With or without weapons capability, AIs in hostile situations may well need to take hostile and malicious intent into account, to be able to recognize it and to incorporate it into the decision making process.

It is my contention that in order for autonomous systems to understand the world well enough to make ethically valid decisions, they must be embodied in such a way that they inhabit the world of humans. This needn’t mean that they be “humanoid”, just that they share certain attributes with us. These would include:

  • A physical body that is mobile and navigates through 3D space.
  • Senses that overlap human communication: sight & hearing.
  • The ability to manipulate objects (probably requiring a sense of touch).

Hawkins’ presentation: “What Is Intelligence, that a Machine Might Have Some?”

Hawkins himself recognizes the need for any intelligent entity to have at least some degree of “embodiment”, although he does not call for the degree of similarity to humans described here. In a talk he gave in March, entitled “What Is Intelligence, that a Machine Might Have Some?” (slides may be found here), he presented the following as his list of the necessary attributes of any intelligent system:

Hawkins’ List of the Functional Components of Intelligence

Some of these are absolute requirements, and others call for some form or some number of a certain kind of component. To his way of thinking, intelligence requires some form of embodiment, comprising a number of sensors, built-in behaviors, and motivations, and making use of some form of spatiotemporal memory. Exactly what sort, and how many, of these depends upon the purpose and environment of the intelligence.

“Embodiment” here describes the entity’s presence in a measurable “world”, but not necessarily our physical world of space and time. One could, for instance, imagine an intelligence that inhabits the “world” of high finance: whose electronic senses monitor the stock market, business news and current events; whose built-in behaviors involve buying and selling stocks, bonds, futures and the like; whose motivations center on profit and risk management; and whose memories involve the sequences of buying and selling, and the degrees of interaction among various commodities, and between them and the events reported in the news. Such an intelligence might be very real and capable, but not very human-like.

Intelligence, he would argue and I would concur, requires a world for the intelligence to operate in, and the three types of interaction with that world that are laid out in his #2: the ability to sense the world, feedback regarding one’s actions in that world, and the ability to affect it. These three items, along with motivations (and emotions), are reflected in the embodiment defined in #4.
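One way to make that abstraction concrete is as an interface: any embodiment, physical or not, must supply senses, actions, feedback on those actions, and motivations. The sketch below is my own rendering of the idea, not Hawkins’ or Numenta’s formulation; the class and method names are hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

# A minimal rendering of "embodiment": senses, built-in behaviors,
# feedback about one's actions, and motivations. (My own sketch of the
# idea, not Hawkins' or Numenta's formulation.)
class Embodiment(ABC):
    @abstractmethod
    def sense(self) -> Dict[str, Any]:
        """Read the entity's sensors: cameras, microphones, market feeds..."""

    @abstractmethod
    def act(self, behavior: str, **parameters: Any) -> None:
        """Invoke one of the built-in behaviors on the world."""

    @abstractmethod
    def feedback(self) -> Dict[str, Any]:
        """Report how the last action changed the world (and the body)."""

    @abstractmethod
    def motivations(self) -> List[str]:
        """The drives that select among possible behaviors."""

class MarketEmbodiment(Embodiment):
    """The 'world of high finance' example: a real embodiment, nothing humanoid."""
    def sense(self):
        return {"prices": {}, "news": []}

    def act(self, behavior, **parameters):
        pass  # e.g. behavior="buy", symbol="...", quantity=...

    def feedback(self):
        return {"portfolio_change": 0.0}

    def motivations(self):
        return ["profit", "risk management"]
```

The embodiment I am arguing for above is simply one whose senses, actions and world overlap the human ones closely enough for the human world to be understood.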

Hawkins has said or written in a number of venues that a human-like embodiment is not needed when one is creating an artificial intelligence, unless we are trying to create an artificial human. What I am arguing here is that given how rapidly and widely AI is being deployed, and how quickly systems are becoming more and more autonomous, a form of machine ethics is required, and that for true intelligences of the sort that Hawkins is describing, machine ethics requires an intelligence that is at least somewhat human-like. This raises a few questions.

First, why is machine ethics required? Clearly, it is possible to build an AI system with no analogue to ethics. Why then should we do it? The simplest answer I have is a thought experiment: humans lacking a conscience are often called “sociopaths”. When fully autonomous cars and trucks emerge, do we want them to be driven by “artificial sociopaths”? The answer must clearly be “No.”

The second question is: what about automated and autonomous systems that are not intelligent, which do not have the degree of Understanding needed for rudimentary ethical judgment? The answer would seem to be twofold. For systems whose capabilities are not powerful enough to be dangerous, either to human life and health or through access to highly sensitive information, there is probably no great need for machine ethics. But for systems that do have the ability to pose some sort of threat, it would seem to be up to the designers and implementors to stand in the role of conscience for the system, and to program it with an explicit eye to making its behavior commensurate with ethical principles. This is a topic of real interest to us at Personified Systems, and will be addressed in other articles.

Returning to the emerging intelligent systems, the next obvious question is: what mechanisms would rudimentary machine ethics hinge on? So far, we have listed reasoning that is based upon understanding, and an embodiment human enough to enable an understanding of the human world. Clearly more is needed. What? I have a few preliminary thoughts in that area, notions that I hope to elaborate upon in future articles.

For the moment, my intent is merely to show that ethical intelligence is a crucial component of any plan to develop powerful autonomous AIs, and that this, in turn, requires an understanding of the human world that can only come from being in, and a part of, that world. If we can agree on that need, the questions of “what?” and “why?”, we can then turn to the question of “how?”
