The Artificial Hunch

Tjeldflaat
10 min read · Apr 11, 2017


During the summer of 1956, a group of researchers gathered on the top floor of the math department at Dartmouth College. They met to discuss neuron nets and language processing, among other things: all aspects of the workshop’s overarching focus, simulating human intelligence. To help with fundraising for the conference, however, they needed a catchier title. John McCarthy, who organized the workshop, decided to call it “artificial intelligence”. The meeting galvanized a new era of AI, fueling the field with newfound optimism and funding; the subsequent twenty years are often referred to as “the golden years of AI”. After this initial burst, however, interest and optimism in the field declined, marked by a stretch of on-again, off-again academic forays. The most recent of these so-called “AI winters” hit around 1987, putting a chill on the pace of work in the field until the early 1990s.

Since then, the field has regained momentum, seeing a significant uptick in research and in applications across the technology industry; 2015 was widely considered a landmark year for AI. Spearheaded by the juggernauts of the Bay Area, AI has been claiming relevance in a plethora of emerging developments: from letting entertainment platforms like YouTube and Netflix “know” what you might want to watch next, to equipping vehicles with the ability to navigate traffic autonomously, to instantly correcting my spelling as I type these words.

Given the vast scope of AI’s potential capabilities and roles, it is not surprising that it is starting to supplant human involvement in some fields. That said, it has a long way to go to truly rival human intelligence. It is honestly a stretch to call most current AI “intelligent”, as it is largely brute-force data analytics and statistics at work. Michelle Zhou, a former senior researcher with IBM, refers to this first phase as recognition intelligence. The limitations of this rudimentary state present themselves most clearly in applications within the creative fields. AI’s roles as a creative director (in the case of the ad agency McCann Erickson) and as a music producer (as with Sony’s CSL Lab) are surely fascinating developments, but behind the scenes, humans did most of the heavy lifting: providing the reference data, establishing the training parameters, and editing the final output. With training wheels like these, it is hard to grant the machine the label of “creative”.

However, an event in Seoul last winter may have hosted the early embers of an AI engine that goes beyond mere strings of “if x, then y”. At said event, Lee Sedol was invited to compete in a tournament of Go, the popular board game invented in ancient China. Lee knows a thing or two about this game: he is a grandmaster, and has been at the top of the international rankings for much of the last decade. At the press conference before the five-game match, he confidently estimated that he would beat his opponent 5–0 or 4–1. As the match unfolded, however, it became clear that Lee had critically underestimated his opponent. Despite playing some of the finest games of his career, he was repeatedly defeated, in a way that left him and the audience mesmerized. His opponent was no ordinary Go player, but Google’s AI system, AlphaGo.

The number of possible games of Go is calculated to be at least 10^(10^48)

At surface level, this might not seem like such an impressive feat. Machines have already dethroned humans in games like chess and checkers, so why is this any more significant? It may indeed look like just another machine victory in a board game, but a closer look at how the games advanced reveals profound differences, differences teasing at creative abilities that go well beyond flashy number-crunching. Only when one understands how strangely complex and nuanced the game of Go is can the magnitude of the triumph be appreciated. For context, chess has a branching factor (that is, the number of available moves on any given turn) of about 35. For Go, this number is around 250. Mathematically, this means that the number of possible move configurations in the game exceeds the number of atoms in the Universe. Consequently, the previous go-to recipe of brute-force data processing is simply not a possibility. Instead, the game requires, as the creators of AlphaGo put it, “a degree of creativity.”
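A back-of-the-envelope calculation makes the scale concrete. The figures below (branching factors of 35 and 250, typical game lengths of 80 and 150 plies, and roughly 10^80 atoms in the observable universe) are commonly cited order-of-magnitude estimates, not exact values:

```python
# Rough game-tree size: branching factor b raised to the game length d.
# All figures here are coarse, commonly cited estimates.
chess_tree = 35 ** 80          # ~35 legal moves per turn, ~80 plies per game
go_tree = 250 ** 150           # ~250 legal moves per turn, ~150 plies per game
atoms_in_universe = 10 ** 80   # order-of-magnitude estimate

print(chess_tree > atoms_in_universe)  # True: even chess dwarfs the atom count
print(go_tree > chess_tree ** 2)       # True: Go is in a different league entirely
```

Even under these crude assumptions, the chess tree alone (about 10^123) already exceeds the atom count, and the Go tree (about 10^360) dwarfs the chess tree squared, which is why exhaustive enumeration was never on the table.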

Creativity is a phenomenon that is notoriously difficult to pin down. It is ultimately the manifestation of something new and useful. This is something we all understand — yet the process remains largely mysterious and opaque. Despite its elusive nature, there are some proven methods to the madness. Consider Stefan Sagmeister — one of the most highly regarded graphic designers of our time. His portfolio of work ranges from record covers for The Rolling Stones and Jay Z, to ad campaigns for HBO and the Guggenheim. Sagmeister is undeniably a creative practitioner. In his work, he relies on a wide repertoire of tools, but there is one that he is especially fond of: a technique referred to as Random Entry Idea Generating Tool. This is a non-linear approach to problem solving — part of the lateral thinking theory, developed by Edward de Bono in the 1960s. This method has Sagmeister establish random reference points to reframe creative challenges. Through these vantage points he can discover new facets and opportunities of a given project. An example of this could be to use a glass as a reference object for a given problem — allowing him to speculate on questions such as “What if my product were to be transparent?”, “What if my product functioned as a container?”, etc. Through these new and otherwise unlikely questions, novel solutions can be found.

A similar approach can be found in Kjetil Trædal Thorsen’s architecture practice, Snøhetta, famous for work such as the Norwegian Opera House and the expansion of the San Francisco Museum of Modern Art. Snøhetta’s website has a dedicated section for one of their favorite tools for innovative thinking: a concept called transpositioning. This is a method in which the “participants are invited to break from their professional role and switch perspectives with others in the group”. This is done to free the participants from their habitual thinking and force them into other disciplinary perspectives, from which they can observe a different kind of solution space.

Snøhetta’s SFMOMA Expansion

These examples illustrate three important lessons. First, creativity is not an intrinsic human property; it is a deliberate practice, and consequently it can be taught and learned. Second, the creative process usually necessitates a cross-disciplinary approach, with solutions derived through the synthesis of seemingly unrelated elements. Lastly, the whimsical nature of creative practices requires an underlying framework, a navigation system that can guide the tinkering and steer the “what if” questions in productive and meaningful directions. This ability arguably still gives humans an edge over machines in creative pursuits. It is also what Gerd Gigerenzer, director at the Max Planck Institute for Human Development, calls “the highest form of intelligence”: intuition.

Lee Sedol has intuition in spades. Since he started playing Go professionally at the age of 13, he has played thousands of matches and accumulated a colossal amount of experience. This gives him an acute sensibility to the game’s wide range of patterns and strategies, and it subconsciously guides his every move. Meanwhile, the DeepMind team has been experimenting with a secret sauce to counter this seemingly inimitable quality. By utilizing an area of AI referred to as reinforcement learning, DeepMind’s engine has not only been fed data from human matches (supervised learning), but has also gone a step deeper and played matches against cloned AlphaGo systems. In other words, it was not merely relying on evaluation heuristics hard-coded by human beings, but honing its skills through hundreds of thousands of matches against equally well-trained AI engines. In effect, the years Lee put in to master the game were within AlphaGo’s reach in a matter of weeks. The number of moves and strategies AlphaGo has exposed itself to, and learned from, is simply beyond Lee’s bounds.
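To make the self-play idea concrete, here is a deliberately tiny sketch. Everything in it is a simplification of my own, not DeepMind’s actual setup: a toy take-1-to-3-stones game stands in for Go, and a tabular policy stands in for a neural network. Two copies of the same policy play each other, and moves made by the winning side are reinforced:

```python
import random

# Toy game (NOT Go): players alternate removing 1-3 stones from a pile;
# whoever takes the last stone wins. A stand-in for illustration only.
PILE = 10
MOVES = (1, 2, 3)

def new_policy():
    # Tabular "policy": one preference weight per (pile size, move).
    return {n: {m: 1.0 for m in MOVES if m <= n} for n in range(1, PILE + 1)}

def sample_move(policy, pile):
    # Sample a move with probability proportional to its weight.
    weights = policy[pile]
    r = random.uniform(0, sum(weights.values()))
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return max(weights)

def self_play_episode(policy):
    # One game of the policy against a copy of itself.
    pile, player, history = PILE, 0, ([], [])
    while pile > 0:
        move = sample_move(policy, pile)
        history[player].append((pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player  # whoever just moved took the last stone
    return history[winner], history[1 - winner]

def train(policy, episodes=5000, lr=0.1):
    for _ in range(episodes):
        winner_moves, loser_moves = self_play_episode(policy)
        for pile, move in winner_moves:   # reinforce winning moves
            policy[pile][move] *= (1 + lr)
        for pile, move in loser_moves:    # penalize losing moves
            policy[pile][move] *= (1 - lr)
```

After a few thousand self-play games, moves that reliably win (such as taking all the stones when three or fewer remain) accumulate far more weight than their alternatives, with no human game records involved at all.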

A fascinating demonstration of this took place 37 moves into the second game, when AlphaGo played a move that caught Lee completely off-guard (and provoked him to leave the match room to recover). Remarkably, there was no record of this move ever having been played before. The underlying mechanics of the move are even more fascinating: the AI engine had indeed encountered the move previously, through its vast experience in simulated practice runs. David Silver, the lead researcher on the AlphaGo project, explained that the program had estimated a one-in-ten-thousand chance that a human would play this move. The most perplexing part is not that AlphaGo was able to determine this, but that it was aware of the boldness of the move and still decided to play it. In other words, it overrode the human priors it had been trained on and followed a hunch.

Four stages of Monte-Carlo tree search in AlphaGo, by David Silver & Aja Huang

This gets to the core of what makes AlphaGo so different: in contrast to most AI game systems before it, AlphaGo does not work off an all-encompassing database of possible moves. Instead, just as Lee evaluates only a limited number of options for any given move, AlphaGo reduces its search space to a manageable level. It does this by deploying a heuristic algorithm called Monte Carlo tree search: in effect, probing randomly for what “the right” move might be, while relying on trained neural networks for guidance in evaluating positions and moves. This is starting to sound a lot like the description above of human creative practices.
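The core loop of Monte Carlo tree search (selection, expansion, simulation, backpropagation) can be sketched in miniature. The example below is my own illustration, not AlphaGo’s code: it uses the same toy pile game as before, and plain uniform-random playouts where AlphaGo would consult its policy and value networks:

```python
import math, random

MOVES = (1, 2, 3)  # toy game: remove 1-3 stones; taking the last stone wins

class Node:
    def __init__(self, pile, to_move, parent=None, move=None):
        self.pile, self.to_move = pile, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = [m for m in MOVES if m <= pile]
        self.wins = 0     # playouts won by the player who moved INTO this node
        self.visits = 0

def ucb1(child, c=1.4):
    # Exploitation plus exploration bonus, seen from the parent's mover.
    return (child.wins / child.visits
            + c * math.sqrt(math.log(child.parent.visits) / child.visits))

def rollout(pile, to_move):
    # Uniformly random playout to the end of the game.
    while pile > 0:
        pile -= random.choice([m for m in MOVES if m <= pile])
        to_move = 1 - to_move
    return 1 - to_move  # the player who just moved took the last stone

def mcts_best_move(pile, iterations=3000):
    root = Node(pile, to_move=0)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: add one previously untried move.
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        winner = rollout(node.pile, node.to_move)
        # 4. Backpropagation: credit each node's incoming mover with the result.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == 1 - node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

The point of the sketch is the shape of the search: rather than enumerating every branch, it samples promising ones and lets accumulated statistics steer subsequent probes, which is exactly the “guided tinkering” the article describes.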

Besides conquering the game of Go, the Mountain View giant is hard at work applying AI across a range of other ventures. In 2015 alone, it applied variations of AI to thousands of projects. Countless other companies and research institutions across the globe are likewise relentlessly developing and expanding the capabilities of AI. In aggregate, we can expect a colossal surge in new AI systems over the next decade. These systems will inevitably start meshing and interconnecting in different ways. Consequently, AI will eventually be able to access and cross-reference data across domains, with networks bouncing off one another and bolstering each other’s knowledge bases and intellectual capacity. In light of the creative methods explored above, this network effect presents profound possibilities for the combinatorial nature of creativity, and it will strengthen and widen AI’s basis for intuition.

Ms. Zhou refers to the next stage of AI as cognitive intelligence: allowing AI systems not only to recognize data, but to interpret it and draw inferences from it. AlphaGo is clearly teasing at an important development into this territory. But before we start packing up our gear and resigning, it is important to recognize that although Go is a tremendously complex game, and a machine’s mastery of it demonstrates that AI can surpass human prowess in areas once considered out of reach, it is still just a game. It is based on perfect information, and it ultimately has only two possible outcomes: win or lose. The world outside of Go is endlessly more complex and nuanced than this. AlphaGo is not able to take over the world quite yet.

The invention of alpha-beta pruning, as described by Arthur Samuel

In recognition of the historic victory, Google’s DeepMind team was awarded an honorary 9-dan (the highest rank a Go player can hold) by the Korea Baduk (Go) Association. Their work on AlphaGo did indeed add a new dimension to our concept of what AI is capable of, but the foundation for that work was laid much earlier. Back at Dartmouth in 1956, one of the workshop participants was already working relentlessly on a very similar challenge. Arthur Samuel had set out to build a machine that could beat humans at checkers. Granted, the cumulative complexity of that game is much more manageable than Go’s, but Samuel dealt with many of the same challenges, not least because the available memory and processing power of computers was a fraction of what is available today. Consequently, he had to be creative with the contents of his machine’s toolkit. Samuel’s program was eventually able to challenge a respectable amateur player (and this was as big a sensation then as the AlphaGo victory is today). An important reason for this achievement was one of the search algorithms it deployed: alpha-beta pruning. Just as AlphaGo’s Monte Carlo algorithms relieved the program from searching exhaustively through the possible moves, Samuel’s checkers program trimmed its search tree by pruning away branches deemed inconsequential to the final decision. In other words, the program was guided by an underlying form of intuition, albeit a less sophisticated one than AlphaGo’s.
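Samuel’s actual checkers program is not reproduced here; the following is a minimal illustration of the alpha-beta idea in Python, applied to the same toy pile game used above (my own choice of example, far simpler than checkers). Whole branches are skipped the moment the current player can prove the opponent would never allow them:

```python
def alphabeta(pile, alpha=-1, beta=1, maximizing=True):
    """Minimax with alpha-beta pruning on a toy game: players alternate
    removing 1-3 stones; whoever takes the last stone wins.
    Returns +1 if the maximizer wins under perfect play, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    if maximizing:
        best = -1
        for m in (1, 2, 3):
            if m > pile:
                break
            best = max(best, alphabeta(pile - m, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent would never let play reach here
        return best
    else:
        best = 1
        for m in (1, 2, 3):
            if m > pile:
                break
            best = min(best, alphabeta(pile - m, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break  # prune the remaining branches
        return best
```

In this game, piles that are multiples of four are losses for the player to move (for example, `alphabeta(4)` returns -1 while `alphabeta(5)` returns +1), and the pruning lets the search conclude this without visiting every branch of the tree.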

Neither Silver nor Samuel developed game-playing machines for the mere fun of it. Both fundamentally believed that the sequential decision-making challenges of games like Go and checkers have many counterparts in the real world. Rather than seeing the work as an end in itself, they considered it the development of a skill set that could ultimately be applied to other, more pressing matters. One of the most potent of these cultivated abilities is surely artificial intuition. As a “human” bedrock ability, can it set the stage for other, similar creative abilities in machines? Artificial tinkering? Machine jamming? Structured spontaneity? Go figure.
