Frank Herbert and the God of the Unknown — Not the God of Gaps.

Henry Kim
8 min read · Jun 14, 2017

--

Before he became famous for his tech-publishing empire, Tim O’Reilly was an interesting person who thought interesting things. I had read his book on Frank Herbert a long time ago, but apparently did not really understand anything that I read at the time (especially Chapter 5, concerning O’Reilly’s take on Herbert’s “religious” views). The great thing about O’Reilly, now that he has his publishing empire, is that he places a lot of his own writing on the web, including his book on Herbert, and rereading it over the last day or two made me think about a lot of things, especially the current revolution in technology, data, and AI. (The link to O’Reilly’s book, specifically Chapter 5, is here.)

I had already copied and pasted, in an earlier post, the quote from near the beginning of the chapter. To repeat, the quote was:

Herbert’s feelings about science are most clearly presented in Dune and in three short novels that followed its publication, The Green Brain, Destination: Void, and The Eyes of Heisenberg. Each of these works reveals the two faces of science: it may be used to help man come to terms with the unknown, or to help him hide from it. In the latter case, it is a kind of religion, whose false god inevitably turns on his worshippers.

This cuts at the core of the problem of prediction and hypothesis, central to the idea of “science.” Ernest Rutherford is alleged to have said that, if an experiment is designed properly, there is no need for statistics. He was, if the quote is true, a believer in a deterministic view of science, in which everything is, if not necessarily known at the moment, certainly “knowable.” This is, of course, also captured in the theorem Zermelo proved in 1913: every finite two-player “game” defined by set rules and perfect information is “solvable,” meaning that every configuration that can arise in the course of the game can be calculated, as can the optimal sequence of moves in response. Since every sequence of moves necessarily comes to an end in a finite game, every finite game where nothing is hidden has an optimal sequence of moves that can be calculated for each player — and the outcome is known in advance, even before the game starts. All the rest is simply a calculation problem, even if the calculations may be too complex to perform given the present level of technology.
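
To make the Zermelian picture concrete, here is a minimal sketch in Python (my own toy illustration, not anything from Herbert or O’Reilly) of backward induction on a tiny finite game: players alternately take one, two, or three stones from a pile, and whoever takes the last stone wins. Because the game is finite and nothing is hidden, the winner of every position can be computed before a single move is played.

```python
from functools import lru_cache

# Zermelo's point in miniature: a finite, two-player, perfect-information game
# can be solved outright by backward induction. The game here is a hypothetical
# toy: players alternately remove 1, 2, or 3 stones from a pile, and whoever
# takes the last stone wins.

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """True if the player to move can force a win with `stones` stones left."""
    if stones == 0:
        # No stones left: the previous player took the last one and has already won.
        return False
    # The mover wins if ANY legal move leaves the opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    for pile in range(1, 13):
        winner = "first" if first_player_wins(pile) else "second"
        print(f"pile of {pile:2d}: {winner} player can force a win")
    # The outcome of every position is known before a single move is played:
    # the player to move loses exactly when the pile is a multiple of 4.
```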

Zermelo’s theorem is of little use to humans in the course of dealing with most problems that arise in real life. A well-known cliché holds that the definition of crazy is doing the same thing while expecting a different outcome. If so, crazy is the definition of humanity and, for that matter, of most living things. Chess, for example, may be theoretically solvable, but it has not been solved because of the computational difficulties, even with extremely powerful computers. For humans, the only way forward is to treat chess as essentially a game of chance, filled with unknowns. Even if every move has a set of computable responses that are theoretically “best,” we do not know what to expect because, rightly, we know that the other player cannot really compute those possibilities anyway. Thus every move, even in chess, is a lottery, a gamble, a leap into the unknown, where you do the same thing expecting different outcomes — and where you do get different outcomes. Of course, for all its computational complexity, chess is a mathematically simple game. There are many other games where uncertainty is built into the rules themselves: poker, for example. But even poker has a finite set of moves, well-defined rules, and a definite conclusion, making it far simpler than almost any problem of more serious “consequence” that we might encounter in “real life,” including “real life” itself: problems characterized by the lack of a clear conclusion, unclear rules, and a nearly infinite set of moves, all of which add up to much inherent uncertainty.
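
A rough back-of-envelope calculation shows why “solvable in principle” means so little in practice. The figures below are the commonly cited order-of-magnitude estimates for chess, not exact counts, and the machine speed is a hypothetical I chose for illustration.

```python
# Rough orders of magnitude only: why "solvable in principle" is cold comfort.
# The figures are the usual back-of-envelope estimates, and the machine speed
# is a generous hypothetical.

SHANNON_GAME_TREE = 10 ** 120    # Shannon's classic estimate of the chess game-tree size
LEGAL_POSITIONS = 10 ** 44       # rough order of magnitude of legal chess positions

POSITIONS_PER_SECOND = 10 ** 18  # assume an exascale machine evaluating 1e18 positions/s
SECONDS_PER_YEAR = 3.15e7

years_to_enumerate = LEGAL_POSITIONS / POSITIONS_PER_SECOND / SECONDS_PER_YEAR
print(f"Years to enumerate every legal chess position: ~{years_to_enumerate:.1e}")
# ~3e18 years, against a universe roughly 1.4e10 years old -- and merely
# enumerating positions is still far short of solving the full game tree.
```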

A rather unfortunate consequence of too much reliance on Zermelo-like thinking is that all problems wind up being approached as if they really were like chess: their solutions may be computationally complex, but the problems themselves are conceptually simple — finite, with a well-defined set of rules and a definite conclusion. A lot of problems are NOT like that at all, especially the more “interesting” ones. They are characterized by inherent uncertainties that go beyond the merely “unknown,” by complications that lie beyond the superficially simple and well-defined rules we have forcibly imposed on them in the course of turning “science” into yet another false god. False gods misbehave in the most deceitful manner and turn on their clueless worshippers when it is most inconvenient.

The inherent uncertainty that confronts humanity, and how humanity responds to it, is where the gap between Rutherford’s God and Herbert’s God emerges. Rutherford’s God is the God of the Gaps, the God of the Unknown but Knowable. Uncertainty is merely the trivial consequence of the present lack of knowledge: once you know enough, there will be no uncertainty left. The Gaps will be no more, and we will have killed God, so to speak. Herbert’s God, on the other hand, is the God of the Unknown that is Inherently Uncertain. We might know many things, but we cannot anticipate exactly what will happen. Some bug in the machinery, presently unknown, will invariably arise and throw the whole thing off schedule, some time, somewhere, somehow. The contrast that O’Reilly points to between the universes of Dune and Asimov’s Foundation is exactly this: while a bug in the machinery is the central plot device in both stories, the Zermelian masters of the Foundation are so prescient that they have concocted a scheme that ultimately deals with the bug and restores the perfectly ordered and predictable universe. For Dune, the inherent unpredictability and uncertainty are exactly the point — they are what save the universe, so to speak.

As I have always understood it, statistics and data stand between these two universes, the God of the Gaps and the God of the Unknown. Done properly, statistics provides us with a measure of what it is that we do not know, and guidance as to how to manage the unknown and the uncertain. But it cannot be expected always to provide the “right answer,” because providing “the right answer” all the time would require omniscience, and if we had omniscience, we would have no need for statistics. Statistics, in other words, rests on the recognition that we do not know the universe in toto, but that, between incomplete and potentially misleading data, less-than-realistic assumptions, and limited computational power, we can still make enough sense of the universe to make better decisions, or at least to avoid making really bad ones. Ultimately, though, the answers provided by statistics are merely “better” answers, not necessarily “the good” ones and certainly not “the right” ones.
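
As a small, toy illustration of “a measure of what we do not know”: the sketch below (made-up data, NumPy assumed) reports an estimate together with a bootstrap interval around it, rather than pretending to “the right answer.”

```python
import numpy as np

# Statistics as a measure of what we do NOT know: report an estimate together
# with its uncertainty, rather than a single "right answer." Data are made up.

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # a hypothetical skewed sample

point_estimate = data.mean()

# Resample with replacement many times and watch how much the estimate moves.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5000)
])
low, high = np.percentile(boot_means, [2.5, 97.5])

print(f"estimate: {point_estimate:.2f}, 95% bootstrap interval: ({low:.2f}, {high:.2f})")
# The interval is the honest part: a "better" answer with its own error bars,
# not "the" right one.
```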

In the past few decades, one can’t avoid the realization that statistics has undergone a revolutionary change, in the name of “data science.” Computational power and data became so plentiful that many of the very unrealistic assumptions once retained for ease of computation could be dispensed with. The answers that statistics (even if its practitioners refuse to call it that) can provide are much better, on average, than before. Ironically, we seem to be approaching the Rutherfordian God again, but from a slightly different direction: even if we don’t “really” understand the moving parts of the universe, between enough data and computational power we can approximate a real understanding. (This is an amusing thought: we know Go is solvable in principle, but it has not been solved for lack of computational power. Google has cobbled together AlphaGo, combining AI algorithms and vast computing power, and it can apparently best human players without fully appealing to Zermelo — but still drawing a lot from the theorem, because a computer can, well, compute, certainly much more than humans can. How close has AlphaGo come to the true “right answer” that we know exists, even if we don’t know what it is just yet?)

To be fair, this is not necessarily a bad thing: if we know 99% of the universe, that is, if we can be confident that doing the same thing over and over again will produce the same result 99% of the time, why should we expect otherwise, especially if we can set up a “backup foundation” that can deal with the remaining 1% with 99.99% confidence? We might not have completely eliminated the Gaps, but we have cleared away enough of them that we no longer need a God of the Gaps for any practical purpose. Machines do not revolt, because the odds are infinitesimally small, even if not quite zero. (Recall that the Dune universe comes into being because the machines rose up to overthrow humanity.) Paying attention to that 0.00001%, where the outcome might be not merely beyond expectation but beyond the ability of the “backup foundation” to deal with, might be the new definition of “crazy.”

(Of course, there is a legitimate problem with this logic: how do you know what the probabilities are if the event is so rare that it almost never occurs? This is a fundamental statistical problem: we cannot anticipate rare events well because rare events are, by definition, both very rare and very weird. The odds of rare events are invariably far greater than we think, especially since they tend to be interlinked in unexpected ways — market panics and financial crises, for example. We are better at making sense of them than we used to be, but to claim that we can confidently assign anything like precise probabilities to them is a dangerous folly, I think. If you don’t believe me: is there a Clinton in the White House, or a Tory majority in Parliament? And even in my hypothetical example, why should a “backup foundation” that deals with such esoteric and improbable possibilities, ones that may never actually happen, have a claim on resources, and, for that matter, why should its members spend their time working hard on such things rather than enjoying themselves on easy resources?)
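
To see why assigning precise probabilities to very rare events is so treacherous, here is a small simulation sketch with made-up numbers: if an event truly occurs once in 10,000 trials, an analyst with 10,000 observations will, more than a third of the time, never see it at all and estimate the risk at exactly zero, while an analyst who happens to see it a few times will overestimate it several-fold.

```python
import numpy as np

# Why precise probabilities for very rare events are a dangerous folly: with a
# realistic amount of data, the naive estimate is often exactly zero, and when
# the event does show up, the estimate swings wildly. All numbers hypothetical.

rng = np.random.default_rng(1)
TRUE_P = 1e-4            # the event truly occurs once per 10,000 trials
N_OBSERVATIONS = 10_000  # what one analyst gets to see

estimates = np.array([
    rng.binomial(N_OBSERVATIONS, TRUE_P) / N_OBSERVATIONS
    for _ in range(1_000)    # 1,000 analysts, each with an independent data set
])

print("share of analysts who estimate the risk at exactly zero:",
      round((estimates == 0).mean(), 3))   # roughly exp(-1), about 0.37
print("largest estimate observed:", estimates.max(),
      "versus the true rate of", TRUE_P)
```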

Herbert (or perhaps O’Reilly in Herbert’s skin, so to speak) was on to something huge: humans have an unfortunate habit of getting ahead of the progress in technology and understanding that actually narrows the gaps, prematurely declaring the gaps closed, only to be shocked when the small remaining gaps strike back. Perhaps this is far older even than humanity: it is the story of mass extinctions, in which organisms adapt very rapidly to a new environment and arrive at a near-Panglossian optimum — conditional on the present environment — where things stay in equilibrium for hundreds of thousands or even millions of years, only to suffer a large-scale die-off when things change. But not to adapt to the present is to consign oneself to the literal recesses where the light doesn’t shine. “Practical” means not paying attention to the improbable, the unknowable, and the distant, in favor of dealing with what is near, probable, and knowable, for which we have all the “right” data, models, and computational tools. The curse that Herbert bestowed on the Atreides — Paul, Ghanima, and Leto — in his novels was that they, and they alone, could see the improbable, unknowable, and distant; that they were right (and they knew it); and, thankfully, that they had no machines to get in the way (and it was the machines, indeed, who had been the enemy). But, of course, this is literally a story as old as the Bible itself, at least.
