My two major intellectual endeavors of the last year or so keep bringing me to the same apparently simple but genuinely difficult question: "What is reason, or intelligence?"
Professionally, I have been exploring the question of how we can make our increasingly person-like and sophisticated computer systems trustworthy. That leads me into explorations of the trends and directions of AI research, but more fundamentally to the question of what AI — artificial intelligence — is, and the answer to that hinges on the question of what intelligence itself is.
At the same time, I have been moderately active in discussions of theology, especially Deism, which is the theological school that I am most closely associated with. Deism can be best defined as a belief in God that is based upon reason rather than the authority of scripture, second-hand revelations, prophecy, miracles, dogma and clerical authority. Once Deists manage to focus on our own beliefs and the reason for them rather than the beliefs and authorities that we reject, a central question is, “What is reason?”. What is the role of the head? And of the “heart”?
Deists and AI researchers alike depend upon logic, math and science, and those are clearly tied to reason. We, as computer experts, AI researchers, and in fact all of us as computer users, tend to think of reason in terms of basic logic with everything measured in terms of a strict binary truth value, everything being either simply true or false, and thus as cold, mechanical and deterministic. This view isn’t too far from the clockwork notion of the world that dominated a couple of centuries back.
In fact, though, logic is not simply deduction. There is induction, abduction and so on. More than that, reason comprises not only logic and math, but intuition, analogy, pattern matching, trial and error, iterative refinement and many other types of cognition. A key aspect of intelligence, human intelligence, is the art of jumping to valid conclusions from insufficient evidence. Truly brilliant people are the ones who see the answer right away, and can then quickly navigate or create the logic that explains it.
In the Deism group on Facebook, several of us, myself included, find ourselves pointing members to websites and books on critical thinking. My own favorite is a college-level textbook on logic and critical thinking called "Clear and Present Thinking," which I supported on Kickstarter, and which is available free electronically, and on paper for a small fraction of the price of a typical textbook. But one of the first things that one discovers when reasoning about questions like the existence of God is that the answer can be neither proven nor disproven with certainty. Having ruled out unquestioning theism or atheism, one is left with at least three further choices. One can be an agnostic theist or an agnostic atheist, depending upon whether one thinks that, on balance, the evidence and arguments make one or the other position the more likely to be true, or one can choose not to take a position at all, and embrace pure agnosticism.
It’s not my purpose in this posting to argue for one or another of those theological positions, but to point out that there are multiple reasonable choices. This is the case in science as well as theology. In science, when more than one possible explanation is consistent with the evidence, we use what is basically an aesthetic principle, Occam’s Razor, which tells us that when faced with competing hypotheses, the simplest one, the one with the fewest assumptions, should be preferred. There is no provable principle that the world is simple, or that the simplest explanation is necessarily the correct one. Rather, it is more pleasing to us, and it makes our job easier, if we treat the simplest answer as the best. In fact, the role of elegance, simplicity and beauty in math and science is something of a hot topic of late. (See the works of John Tsilikis, Bruce Schumm and Ian Glynn.)
An article I read recently included the following claim from an AI researcher:
Logic is the ideal choice for encoding machine ethics, argues Luís Moniz Pereira, a computer scientist at the Nova Laboratory for Computer Science and Informatics in Lisbon. “Logic is how we reason and come up with our ethical choices,” he says.
This bothered me because, while they are related, as a philosophy major I view logic, ethics and aesthetics as three separate dimensions of value, each with its proper methods and domain. Certainly, logic and critical thinking play a large part in reasoning about ethical dilemmas, but ethics qua ethics must also depend on conscience, empathy, emotion and intuition. This brings us back to the definition of Deism as “based upon reason”. The reason that is the basis for theology must include not merely reason narrowly defined as logic and critical thinking, but also ethical reasoning, conscience and judgment.
This last term brings us around to recent events in the AI world. Last week, a computer system, AlphaGo, unexpectedly won a match against the world champion Go player, Lee Sedol. It did so not with algorithms or heuristics crafted specifically to play the game, but with algorithms designed to review millions of games and turn them into the basis for judgements: about the move an experienced player would be most likely to make, about how strong a board position is, and about which play is most likely to lead to a win. The game tree of Go is far too large for play ever to be analyzed exhaustively by logic.
Instead of reducing game play to logic, AlphaGo uses neural nets trained for several purposes. The Policy Network, trained on millions of games, allows it to make judgements, or, in the words of a 9 dan professional Go player conversing with one of the system’s developers, to “form opinions” about what an expert player would most likely do in a given circumstance. The Value Network, trained on a similar number of games, including millions of games played against itself, allows it to judge how likely each player is to win from a given board position. The Rollout Policy Network is a second, similarly trained policy network, used when the system “reads out” a board position, doing a tree search of the most likely plays and evaluating each with the value network to find the strategy most likely to win the game. Each of these judgments or opinions rests on something more like experience and intuition than on a logical analysis of the game. Indeed, when comparing his own play to that of AlphaGo, the 9 dan professional found himself trying to explain the nature and workings of intuition as humans use it.
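The interplay of those two kinds of judgement can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration of the general idea, not AlphaGo’s actual implementation: the “networks” here are toy stand-in functions rather than trained deep nets, the “game” is nine points on a line, and the search is a single ply rather than a full Monte Carlo tree search.

```python
# A toy sketch of AlphaGo-style move selection: a policy "opinion" about which
# moves look plausible, pruned and then checked by a value estimate of how good
# each resulting position is. All names and heuristics here are invented
# stand-ins for illustration only.

def legal_moves(position):
    # Toy game: a position is a tuple of points already played, from 0 to 8.
    return [p for p in range(9) if p not in position]

def policy_network(position):
    """Prior probability for each legal move -- an 'opinion' about what an
    expert would play. Stand-in heuristic: prefer central points."""
    weights = {m: 1.0 + min(m, 8 - m) for m in legal_moves(position)}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

def value_network(position):
    """Estimated win probability from a position. Stand-in heuristic:
    central points are worth more."""
    if not position:
        return 0.5
    return 0.5 + 0.05 * sum(min(m, 8 - m) for m in position) / len(position)

def choose_move(position, width=3):
    """'Read out' the top-width candidates from the policy, then pick the
    one whose resulting position the value network rates highest."""
    priors = policy_network(position)
    candidates = sorted(priors, key=priors.get, reverse=True)[:width]
    return max(candidates, key=lambda m: value_network(position + (m,)))

print(choose_move(()))  # with these stand-ins, the central point 4 is chosen
```

The point of the structure, even in this toy form, is that neither function proves anything: the policy narrows the search to moves that “feel” plausible, and the value estimate ranks them, which is much closer to intuition than to deduction.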
What is reason? Certainly, in part, it is based upon logic and the rules of critical thinking, and it strives to perfect itself by avoiding known fallacies and biases, just as we stress to newcomers in the Deism group. However, just as clearly, those fallacies and biases are themselves aspects of reasoning: flaws of reasoning, perhaps, but still a part of reason. And, just as clearly, reason rests upon opinions, judgements, experience, and intuitions, aspects that we are now beginning to incorporate in artificial intelligences.
Two hundred years ago, there was a theological backlash against the “intellectualism” of Deism and the similar theological trends that marked the “rational religion” movements of the Enlightenment. The backlash brought us the Transcendentalism of Ralph Waldo Emerson, Henry David Thoreau, Margaret Fuller, and Louisa May Alcott, among many others. Their thinking focused not on the objective empiricism that was the mark of the Enlightenment and modern science, but on subjective intuition. It seems to me that it would be totally unreasonable not to classify the Transcendentalists as thinkers, even great thinkers, or to deny their intelligence. Rather, in my opinion, they were making the judgement that reasoning involves not only simple logic, but also extends to intuition and conscience. Similarly, my last two sentences, filled with words and phrases such as “seems to me”, “opinion”, and “judgement”, reflect reason and the exercise of intelligence.
None of this is entirely new, of course. My friend, Earl Wajenberg, pointed out to me recently that the medievals recognized two faculties within the “rational soul”: intellectus and ratio. “We are enjoying intellectus when we ‘just see’ a self-evident truth; we are exercising ratio when we proceed step by step to prove a truth which is not self-evident,” wrote C. S. Lewis, in The Discarded Image. The Enlightenment thinkers focused on ratio. The Transcendentalists focused on intellectus. But it takes both to have a rational soul, or the image of such a soul in an AI, or a philosophy that is both sane and fruitful.
So, whether we are discussing Deist theology or artificial intelligence and the future of machine learning, we need to understand that “intelligence” and “reason” are terms that cover a wide range of cognitive abilities, that the cold, simple logic that we so often associate with these terms is only one element, and that intuition, judgement, experience, and opinions are also crucial. This, in turn, just raises more questions.
Recognizing intuition and the other “softer” faculties as vital aspects of reason and intelligence causes us to ask what, exactly, intuition is, how we can create it artificially, and how we make it reliable. When jumping to a conclusion based upon insufficient data works, it is a mark of intuitive intelligence. When it doesn’t, it is generally considered to be an example of one of several classes of inductive fallacy.
Perhaps one of the benefits of the search for artificial intelligence is that it will help us to more fully understand native intelligence. If we can create artificial intuition, it may help us to understand our own intuition and when to trust it, and when not. Certainly, as I am studying Machine Ethics and how to make “personified systems”—the increasingly autonomous systems that we interact with more like people than tools—behave in a trustworthy manner, I am finding that it clarifies for me questions of both human cognition and human ethics.