Comp Stomp

Artificial Intelligence versus Human Intuition in Chess, Go, Starcraft, and Dota 2

Hudson Duan
New Game +


The game type was Melee. The game lobby had just me and my Korean friend, Joo-Young, along with six computers set to Random. We were going through a rite of passage for all young Starcraft players: the two of us on a team, up against six (the maximum) AI players at the same time. The computers always rushed at the same time; we learned that all we had to do was fend off that first wave of aggression and our macro strategy would win the late game. It was quite formulaic once we got a few losses under our belt. Twenty minutes later, Joo and I were attack-moving from AI base to AI base with a mass of Carriers and Battlecruisers. A comp stomp.

Games hold a ubiquitous place in human history as a means of enjoyment, education, and competition. Ancient Egyptians believed that successful board game players were under the protection of the major gods. The Aztecs played Ōllamaliztli, a ball game, to resolve conflicts and as a proxy for open warfare. Even animals play with each other as a form of friendly, harmless competition. Games are usually played between two or more humans, often representing some larger problem or contest. However, recent advances in technology have opened a new door for exploration. A new challenger approaches! Artificial Intelligence.

The modern computer has come a long way from simply managing voltage states and some arithmetic circuits. We now live in a time where a large majority of society relies on some form of AI daily. So the question is: when will computers be smart enough to beat us at our own game? If we’re talking about Chess, a common game in Western cultures that archetypically stands for intelligence and strategy, it already happened back in 1997. IBM created Deep Blue as the sun was setting on mainframe computers; it took a game off Garry Kasparov in 1996 and shocked the world the following year by defeating him in a regulated match.

For an AI opponent to succeed in a game like chess, the programming team must search a tree of possible moves, scoring each position against the end-goal of the game. For example, a computer playing chess can evaluate and compare several different move trees by assigning points to pieces depending on their type and position, then choosing the line with the best guaranteed outcome. It is easy to see that this problem becomes complicated almost immediately, starting from the opening moves. Still, as computing has advanced, the best computer chess programs of today are widely regarded as stronger than any human opponent.

The same was not true for Go until very recently. The ancient Chinese game of Go has long been considered one of the hardest games to master, its overwhelming complexity drawn from very simple rules. Like chess, it is properly defined as zero-sum, perfect-information, partisan, and deterministic. Unlike chess, Go has over 10^761 possible games (more than the number of atoms in the observable universe) compared to roughly 10^120 for chess, making it orders of magnitude harder computationally. Yet humans play Go around the world every day, and have done so for thousands of years. When calculation alone can no longer explain a move, players say that a higher power, intuition, is at play. It was once thought that the sheer amount of intuition required to play Go would make an AI victory over a human impossible. But given Moore’s law (or at least the popular misreading of it), it was really only a matter of time.

Google’s AlphaGo program recently played Lee Se-dol, a 9-dan professional, and in a landmark event beat him three matches in a row, four out of five overall. Google, the IBM of the software-and-internet era, is now solving the next generation of problems with next-generation methods. AlphaGo’s heuristics differ from previous attempts at game-state evaluation in that they are not hard-coded by developers but learned, via neural networks and Monte Carlo methods. AlphaGo’s decision-making is so complex that even the development team has no idea what its next move will be.
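The Monte Carlo half of that recipe is easy to illustrate, though on nothing like Go’s scale. The sketch below uses a toy take-away game I’ve picked for brevity (take 1–3 stones per turn, last stone wins): each candidate move is scored by finishing the game thousands of times with purely random play and keeping the move with the best win rate. AlphaGo’s real pipeline guides this kind of playout with trained policy and value networks instead of leaving it uniform:

```python
import random

def random_playout(stones):
    """Finish the take-away game with both sides moving at random.
    Returns True if the player to move at the start of the playout wins."""
    to_move_wins, turn = False, True
    while stones:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            to_move_wins = turn  # whoever just moved took the last stone
        turn = not turn
    return to_move_wins

def best_move(stones, n_playouts=20000):
    """Estimate each legal move's win rate by random playouts; pick the best."""
    win_rates = {}
    for take in range(1, min(3, stones) + 1):
        left = stones - take
        if left == 0:
            win_rates[take] = 1.0  # taking the last stone wins outright
        else:
            # after our move the opponent is to move, so we win when they lose
            wins = sum(not random_playout(left) for _ in range(n_playouts))
            win_rates[take] = wins / n_playouts
    return max(win_rates, key=win_rates.get)

random.seed(0)
print(best_move(10))  # → 2: leaving a multiple of 4 is the theoretically correct move
```

Even these blind playouts recover the game’s known optimal strategy; the neural networks are what let the same idea scale to a 19×19 board.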

Almost 10 years ahead of schedule, an AI beating a human at Go is already becoming an afterthought. It really only took 280 GPUs and 1,920 CPUs. With this benchmark cleared, the next boundaries to push with more and more advanced AI are in video games. Video games, especially ones played online against multiple human players, are still a complete unknown in terms of AI challenges. Modern RTS games such as Starcraft and Age of Empires rely on severe handicaps in the computer’s favor, in the form of extra resources and game-engine manipulation, to compete on the same level as humans. It seems fitting that these games are impossible to play without a computer: they require rendering dozens of frames per second. No Player Two is going to wait around for you to animate a Marine on paper like Tic-Tac-Toe.

So how could the 10-year-old me take down six computers at the same time, given the same start? What is it that humans have that computers are still trying to figure out? The answer is the same as when scholars were having this conversation about Go back in the 90’s. The machines don’t have that latent and nebulous art of pattern matching known as intuition. As described by the Swiss psychiatrist Carl Jung, intuition is perception through the unconscious: something simply makes sense for no other reason than that it does. Humans draw intuition from things we see, hear, and touch, from stories we relate to, and from people we love or hate. Intuition is common sense, the collective unconscious, and the personal well all at the same time. If intuition were easy to define, we would simply write it down as a series of x86 instructions and be done with this already.

Go and chess, despite their many permutations of moves, are still fair, turn-based, perfect-information games. Given enough computational power and time, all the possible decisions are laid out in front of you. Video games introduce uncertainty: Fog of War, factions with different units and capabilities, and real-time play. There is no tree to traverse here, and the branching factor is no longer discrete. Instead you are presented with a continuous set of choices over the course of a 30–45 minute game. Without being told what to do, a computer does nothing. A human follows a gut feeling.

The first steps toward a competitive video game AI are already in the works, and at Berkeley no less, a campus famous for similar breakthroughs. Some background: the eSports industry (hate this term, why can’t we just call it gaming?) has been growing rapidly in the past year, with prize pools dwarfing those of the majors in golf and viewership in the millions across the world. The industry is married to modern concepts such as live streaming and crowdfunding. The game that started it all was Starcraft: Brood War, back in 1998. It was the first computer game to be played competitively because it balanced the minutiae of unit control with sweeping strategy, was easy to learn but hard to master, and its battles were exciting to watch. Korea had dedicated television stations for the game a decade ago, and the very best players became celebrities. For our purposes though, we are interested because the entire game was eventually opened up via an API in 2009.

A team at Berkeley used the Brood War API to create a bot that single-mindedly rode mass Mutalisks to victory. Every single time. By abstracting away unit combinations and perfecting micro, the bot (called the Overmind) is competitive with humans, but the AI itself does not extend to the other playable races, nor can it achieve victory any other way. AI is strong at micro and, in theory, at resource collection; the problem is adaptation. How do you collect resources when the enemy blocks your vespene gas? What happens to your micro when a player starts cheesing you with harass? In a real game of Starcraft, where anything can happen, a computer has no idea how to react correctly and stay on the path to victory.

Still, it is promising that there may one day be a general AI that can play Starcraft. There are similarities between Brood War and chess or Go, such as having an early, mid, and late game. Each unit has a quantifiable value, and it is possible to relate threat and position by seeing and remembering where those units are. Today, players are still competing in Brood War for thousands of dollars and the admiration of Korean girls, and they probably won’t be too happy if they are bested by an AI. But the one-on-one nature of the game, and its similarities to existing “solved” games, makes the Overmind loom ominously. It may not be next year, but it will happen. The true AI challenge lies in another game, one that has its roots in RTS but is now considered a completely different genre: Dota 2.

Compared to the enormous APM required to play Starcraft, Dota is tame. If you want your adrenaline-pumping reflexes on all the time, a game like Counter-Strike would probably be a better choice. Instead, Dota is the definition of intuitive. With resource and base management largely abstracted away, the only thing you control is a hero with four skills. Together with teammates, the objective is simply to destroy the opposing base, with the other team trying to do the same. Seemingly as simple as capture-the-flag, Dota has one of the steepest learning curves of any game, so steep that the majority of people slide off immediately. Any player new to Dota can be spotted instantly, even though they only have to control one unit for the entire game.

Dota is a game built on small incremental advantages that manifest later with an inevitability similar to chess or Go. Small nuances such as denying, side pulling, creep stacking, and even simple positioning can decide a lane, which can decide a later fight, which can ultimately decide the game. This overwhelming complexity is one reason why Dota has the largest prize pool of any video game and one of the most elitist communities, and it is still growing. There are currently over 5,909,102,214,621,606 combinations of hero matchups, and each hero has skills that can be used together with the rest of the team, in different lanes of varying strategic importance and relative strength, supplemented by the 100+ items.
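That matchup figure is not hand-waving, by the way: it falls out exactly if you assume a 110-hero pool (roughly the roster at the time of writing) and count unordered five-versus-five drafts. A quick sanity check:

```python
from math import comb  # binomial coefficient, Python 3.8+

HEROES = 110  # assumed roster size

# Choose one team's 5 heroes, then the other's 5 from the 105 left over;
# divide by 2 because swapping the two teams yields the same matchup.
matchups = comb(HEROES, 5) * comb(HEROES - 5, 5) // 2
print(f"{matchups:,}")  # → 5,909,102,214,621,606
```

And that count ignores skill builds, item builds, and lane assignments entirely, so it badly understates the real decision space.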

The developers of Dota have started programming AI to help new players onto their feet, as the community is notoriously unfriendly to newbies, mostly in the form of gamer rage. But Dota AI, in its present state, is consistently, laughably bad. Computers cannot figure out correct pathing and get stuck on the map. Like children, they burn their skills the moment they see an enemy, wasting their potential. On the other hand, Dota AI is nearly perfect mechanically when reaction time and precise damage calculation are required. Bots can play the few simple heroes that min-max those areas quite effectively, but even then, they look like empty shells compared to pros at the controls.

People often speak about intuition in layers. For Dota, a discrete understanding of the capabilities of the ten heroes in a game can be abstracted into the heroes’ relative strength over the course of that game. This enables rapid and continuous game-state evaluation, which in turn drives your own hero’s position on the map and your use of available resources. Eventually, Dota becomes so well defined in a player’s mind that a “metagame” emerges, a game within the game, where players try to gain advantages by understanding the ebb and flow of buffs and nerfs to their favorite heroes, items, and map.

As an avid Dota player for the past decade, I might be expected to see an AI playing Dota as undermining the community I have watched grow from underground forums to million-dollar tournaments, but instead I look forward to it. Building an AI for Dota would be a remarkable achievement, and one that would require a large human effort, especially from Dota players not unlike myself. And as we saw from AlphaGo’s unexpected moves against Lee Se-dol, artificial intelligence approaches games with a sort of tabula rasa. Perhaps AI could demonstrate moves and tactics in Dota that we humans have never seen and, more importantly, would never have dreamed of. And that would be truly beautiful.
