My history with chess computers
I grew up playing chess and watched chess computers grow steadily more capable as technology advanced. With a career in IT and a professional interest in machine learning, I’m hugely impressed by the achievement of AlphaZero. Here’s my take on the evolution of chess-playing computers.
When I was a kid, the local primary school used chess playing as a teaching tool. Each child had a choice of extra-curricular activities — learn to play chess or learn to play the guitar. Somehow I realised that a logical board game would be a better fit for me (or maybe I was put off by the idea of carrying a guitar to and from school), so it was chess for me. This started a moderately successful junior chess playing career and also condemned my poor mother to several years of driving me to chess tournaments every weekend (thanks Mum!).
During this time, I encountered chess computers and I guess this started off my involvement with computers and technology generally.
It all started with the “Fidelity Chess Challenger”.
This was a dedicated chess-playing device, with the (then) innovative addition of voice audio. I think I originally borrowed one from school, then pestered my parents to buy me my own.
It looks pretty prehistoric by today’s standards and if you’re thinking “that’ll be rubbish”, you’d be right — again, by today’s standards. The machine had 8 levels of play, which basically took increasing lengths of time to compute a move. It wasn’t too long before I could beat the machine on the first 7 levels, and level 8 just took a seemingly infinite length of time to compute a move. But it got me going and that’s what counts — I have fond memories of the machine, even though the voice chip went kaput the day after the warranty ran out…
Before long, I’d acquired a Commodore64 and a copy of Colossus Chess.
There were several versions; I had 2.0 and 4.0. This was the age of 8-bit CPUs, which were pretty limited in terms of computing power, and I read somewhere that Colossus 2.0 (albeit on a ZX Spectrum) was looking at just 170 positions per second. Still, it was a decent adversary and a challenge to play against.
It was around this time that higher-end computer chess programs started to become competitive with the top humans. In 1983, the “Mephisto” system held the World Champion, Anatoly Karpov, to a draw. From memory, this was a game in a simultaneous exhibition, so not the same as holding Karpov to a draw in a 1:1 match. Still, the Mephisto system — which was something a consumer might buy — was clearly playing at a strong level.
The demands of school seemed to then take over and I spent less time playing chess. My interest in computers remained though, so I did play against Colossus Chess X on the Commodore Amiga (with which I’d replaced the Commodore64) and GNU Chess on the PC (which, somewhat sadly, took the place of my Amiga). Maybe it was the advance of computing technology, or my own reduced attention to playing, but I recall GNU Chess as more than a match for me.
Several years later came the matches of Garry Kasparov against the IBM Deep Blue systems, in 1996 and 1997. Recalling the Mephisto vs Karpov draw in 1983, and the massive computing advances as technology followed Moore’s Law, this 13-year gap highlights the difficulty of a computer matching the World Champion in tournament play. And many people forget that Kasparov won the first match in 1996, remembering only that Deep Blue won in 1997 and so became the first chess computer system to beat a reigning World Champion in tournament play.
Deep Blue was a powerful mainframe system, at that time the 259th most powerful supercomputer in the world, with a considerable element of bespoke circuitry. But before long, this kind of chess-playing performance was available on personal computers. For my part, I considered that the game was up for humans — at least in the case of chess. I don’t think I’ve played more than a handful of games since.
Fast forward 19 years to 2016, and DeepMind’s AlphaGo beat Lee Sedol at the game of Go. This was interesting to me because (a) I was thinking a lot about AI and deep learning and (b) Go is a much more complex challenge than chess, having a far larger number of possible positions.
By this time, computer chess had evolved to a state where top humans stand zero chance of matching a machine. Relative strengths of chess players are measured using a system called Elo ratings, where higher values are better: world champions rate around 2850, while the best computers rate 3400+ (I was playing at somewhere around 2100 at my best, a decent-ish club level). This reflects the significant gap between humans and “superhuman” chess engines. Nowadays, these computer “engines” are commonly used for analysis, and the play of grandmasters is sometimes rated on the basis of how often they pick the move selected by the engine. Note that some of the top engines, such as StockFish, are both free and run on standard PCs (the StockFish 8 rating is based on a standard quad-core processor setup). Reputedly, StockFish evaluates 70 million positions per second — compare that with the 170 positions per second in the days of Colossus Chess!
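To make that rating gap concrete: under the standard Elo model, a rating difference translates into an expected score (win = 1, draw = ½). A minimal sketch in Python — the function name and the example ratings are my own illustration, not from any engine or rating body:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A ~2850 world champion facing a ~3400 engine:
human_vs_engine = expected_score(2850, 3400)
print(f"{human_vs_engine:.3f}")  # roughly 0.04, i.e. about 4% of the points
```

In other words, a 550-point gap means the human would be expected to score only around 4%, mostly from the occasional draw — which is why "zero chance" is barely an exaggeration.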
AlphaGo was a deep learning algorithm specifically tailored to Go. However, its successor, AlphaZero, is a general purpose “reinforcement learning” algorithm and — equipped with only the rules of chess and given 4 hours to play against itself — was able to beat StockFish comprehensively in December 2017. That’s a step-change in both chess computers and AI capability. OK, so there are some caveats: AlphaZero’s hardware was state of the art, and both systems were given an artificial 1 minute per move, which I think favours the AI, since it is evaluating a learned function rather than searching ever deeper through future positions (AlphaZero is reported to be evaluating “only” 80,000 positions per second).
So now we are at a stage where a general purpose learning approach can beat the best bespoke algorithms, which are already far in advance of humans. YouTube is full of analysis of the AlphaZero vs StockFish games, as top human players seek to learn from this latest step forward. It’s interesting to see how AlphaZero simply plays a different type of game — generally favouring momentum and tempo over material.
For me, this is a powerful illustration of how far AI technology has progressed and a hint of things to come, as general purpose learning algorithms become ever more capable and more widely applicable. That’s very exciting, even though a bit of me fondly remembers the days when I could beat the Fidelity Chess Challenger… ;-)