Cray-1 — the eight million dollar super-computer
By Frederic Friedel
As a young TV science journalist, I traveled to the Max Planck Institute for Astrophysics in Garching, near Munich, Germany, at the beginning of the 1980s. They had the fastest computer in the world, a Cray-1, and I was there to do a story on it. Recently I was trying to figure out how much progress we have made in the three decades that have passed. Are today’s consumer computers faster, and if yes by what factor? Ten? You are not going to believe what I am going to tell you.
I remember entering Max Planck with my film crew, sitting down and telling some scientists why we had come. Then I asked “where is the computer, the Cray?” — “You are sitting on it,” they replied. Oops, right they were. This is what the Cray-1 looked like:
Of course when you took off the panels of the tower and the structure surrounding it things looked different:
This super-computer had been developed by CDC engineer Seymour Cray, who had found backers on Wall Street for the project. It took four years to build, and in 1975, when the first 80 MHz Cray-1 was announced, interest was so high that a bidding war broke out between the Lawrence Livermore and Los Alamos National Laboratories. The latter eventually won and took delivery of a trial machine with the serial number 001. The first regular customer was the National Center for Atmospheric Research (NCAR). They paid $8 million for a Cray-1 (serial number 3) and a million for storage disks. Over the years Cray Research sold over eighty Cray-1s, making it one of the most successful supercomputers in history. You will find a lot more information on this at Wikipedia.
Anyway, I was absolutely thrilled to see the Max Planck super-machine, and my six-minute TV piece on it was positively effusive. A couple of years later I got to see another Cray, this one in the basement of the Bell Laboratories, where my friend Ken Thompson had access to it. It was faster and more advanced than the one in Germany, and we dutifully drooled over it.
Recently — we are talking mid May 2016 — I wrote a chess article on a wonderful endgame involving a “wrong bishop” (which cannot easily support an edge pawn to promote). This endgame can only be solved with specialized chess knowledge, which computers at the time did not have. In spite of this a program called Cray Blitz, running on an advanced Cray, had played the endgame perfectly against a human opponent. At the time this was celebrated as the first instance of practical chess knowledge having been implemented into a computer (the “intelligent method”), but when I looked at the program logs and discussed the position with the Cray team I discovered that Cray Blitz had simply searched deep enough to see the solution — by pure brute force. It seemed to be impossible, but that was how fast the machine was.
Some time later I started using a “wrong bishop” study on famous chess players, including the great post-war World Champion Mikhail Botvinnik, and recording the time it took them to solve it. This again I reported in part two of the above article. I also showed it to Cray Blitz, which in the meantime had become the Computer Chess World Champion. Botvinnik, who was a proponent of the “intelligent method”, was shocked to see the computer solve the position by pure brute force calculation.
While I was writing these historical articles I started to wonder how fast the Cray-1 had actually been — compared to today’s machines. Fortunately, a British friend, John Nunn, had just bought his son Michael a good mid-range graphics card for his 18th birthday. John is a mathematician, a computer expert, and a chess grandmaster. He did some calculations for me and wrote back: “The Cray-1 could do 130 Megaflops” [million floating point operations per second]. “The NVidia graphics card in Michael’s computer can do 2258 Gigaflops. So it is about 17,000 times as powerful by this measure.” John conceded that, of course, the architectures are very different.
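John's factor is easy to check with a few lines of arithmetic. A quick sketch in Python, using just the two figures quoted above:

```python
# Peak throughput figures quoted above
cray1_flops = 130e6    # Cray-1: 130 Megaflops
gpu_flops = 2258e9     # mid-range NVidia card: 2258 Gigaflops

ratio = gpu_flops / cray1_flops
print(f"The graphics card is about {ratio:,.0f} times as fast")
# roughly 17,000, as John said
```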
17,000 times more powerful?? A card you can hold in the palm of your hand, and which costs around $300? I consulted Ken Thompson on this — he has been working at the forefront of computing for over fifty years.
Ken, who never capitalizes anything, wrote:
the cray had 2–5ns cycle time. (depending on model) in that time, it could get up to 7 arithmetic units executing an instruction. the vector length was 64 and it took a few instructions to start and a few to shut down. some models had up to 8 processors. so, peak rate is about 0.5G (2ns) * 7 * 8 or about 25G. now to get to reality — not everything can be vectored. and when you can vector, usually only a few of the units are used. in fact, most instruction are simply run at the clock rate. some instructions take multiple units (divide, square root). but the cray had huge bandwidth to memory. the i/o was staggering.
You figure this out.
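Ken's back-of-the-envelope peak rate can be reproduced with the numbers from his note — a sketch in Python, assuming the fastest (2 ns) model with 7 arithmetic units and 8 processors:

```python
cycle_time = 2e-9            # 2 ns cycle time (fastest models)
clock_rate = 1 / cycle_time  # 0.5 GHz
units = 7                    # up to 7 arithmetic units executing per cycle
processors = 8               # up to 8 processors on some models

peak = clock_rate * units * processors
print(f"theoretical peak: about {peak / 1e9:.0f} Gflops")
# Ken rounds this down to "about 25G" — and, as he notes,
# real workloads rarely vectorize well enough to come close.
```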