A Brief History of the GPU

Today, the GPU is one of the most crucial hardware components in computer architecture. Initially, the purpose of a video card was to take a stream of binary data from the central processor and render images to the display. Modern graphics processing units, however, are engaged in far more complex calculations, such as big data research, machine learning and AI.

Over several decades, the GPU evolved from single-core, fixed-function hardware used solely for graphics into a set of programmable parallel cores. Let’s have a look at some milestones in the fascinating history of the GPU.

Arcade boards and display adapters (1951–1995)

As early as 1951, MIT built the Whirlwind, a flight simulator for the Navy. Although it may be considered the first 3D graphics system, the foundation of today’s GPUs was laid in the mid-70s with so-called video shifters and video address generators, which carried information from the central processor to the display. Specialized graphics chips were widely used in arcade system boards. In 1976, RCA built the “Pixie” video chip, which was able to output a video signal at 62×128 resolution. The graphics hardware of the Namco Galaxian arcade system supported RGB color, multi-colored sprites and tilemap backgrounds as early as 1979.

In 1981, IBM started using monochrome and color display adapters (MDA/CGA) in its PCs. Not a modern GPU yet, each was a dedicated computer component designed for one purpose: to display video. At first, that meant 80 columns by 25 lines of text characters or symbols. The iSBX 275 Video Graphics Controller Multimodule Board, released by Intel in 1983, was the next revolutionary device. It was able to display eight colors at a resolution of 256×256, or monochrome at 512×512.

Monochrome Display Adapter (IBM, 1981)

In 1985, three Hong Kong immigrants in Canada formed Array Technology Inc., soon renamed ATI Technologies. This company would lead the market for years with its Wonder line of graphics boards and chips.

S3 Graphics introduced the S3 86C911, named after the Porsche 911, in 1991; the name was meant to signal its performance. The card spawned a crowd of imitators: by 1995, all major makers of graphics cards had added 2D acceleration support to their chips. Throughout the 1990s, the level of integration of video cards improved significantly with the help of additional application programming interfaces (APIs).

Overall, the early 1990s was a time when many graphics hardware companies were founded, then acquired or forced out of business. Among the winners founded during this period was NVIDIA. By the end of 1997, the company held nearly 25 percent of the graphics market.

3D Revolution (1995–2006)

The history of modern GPUs starts in 1995 with the introduction of the first 3D add-in cards, followed by the adoption of 32-bit operating systems and affordable personal computers. Previously, the industry had focused on 2D and non-PC architectures, and graphics cards were mostly known by alphanumeric names and huge price tags.

3dfx’s Voodoo graphics card, launched in late 1996, took over about 85% of the market. Cards that could only render 2D quickly became obsolete. The Voodoo1 steered clear of 2D graphics entirely; users had to run it alongside a separate 2D card. It was still a godsend for gamers. The company’s next product, the Voodoo2 (1998), had three onboard chips and was one of the first video cards to support running two cards in parallel within a single computer.

With the progress of manufacturing technology, video, 2D GUI acceleration and 3D functionality were all integrated into one chip. Rendition’s Verite chipsets were among the first to do this well. 3D accelerator cards were no longer just rasterizers.

Finally, the “world’s first GPU” came in 1999! This is how Nvidia promoted its GeForce 256. Nvidia defined the term graphics processing unit as “a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second.”

The rivalry between ATI and Nvidia was the highlight of the early 2000s. Over this time, the two companies went head to head and delivered graphics cards with features that are now commonplace, such as specular shading, volumetric explosions, waves, refraction, shadow volumes, vertex blending, bump mapping and elevation mapping.

General Purpose GPUs (2006–present day)

The era of the general-purpose GPU began in 2007. Both Nvidia and ATI (since acquired by AMD) had been packing their graphics cards with ever more capabilities.

However, the two companies took different tracks toward general-purpose computing on GPUs (GPGPU). In 2007, Nvidia released its CUDA development environment, the earliest widely adopted programming model for GPU computing. Two years later, OpenCL became widely supported. This framework allows code to be developed for both GPUs and CPUs with an emphasis on portability. Thus, GPUs became more generalized computing devices.
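To give a flavor of what a GPU computing programming model looks like, here is a minimal vector-addition sketch in CUDA. The kernel name `vecAdd` and all values are illustrative, not taken from any product mentioned above; the core idea is that one lightweight GPU thread is launched per data element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of the result.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified memory is visible to both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same data-parallel pattern, expressed through a different API, is what OpenCL brought to non-Nvidia GPUs and to CPUs.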

In 2010, Nvidia began collaborating with Audi, using Tegra GPUs to power the cars’ dashboards and enhance their navigation and entertainment systems. These advances in in-vehicle graphics hardware helped push self-driving technology forward.

NVIDIA Titan V, 2017

Pascal, the generation of Nvidia graphics cards released in 2016, uses a 16 nm manufacturing process that improves on previous microarchitectures. AMD, in turn, released the Polaris 10 and Polaris 11 GPUs, built on a 14 nm process, which delivered a robust increase in performance per watt. However, the energy consumption of modern GPUs has increased as well.

Today, graphics processing units are not only for graphics. They have found their way into fields as diverse as machine learning, oil exploration, scientific image processing, statistics, linear algebra, 3D reconstruction, medical research and even stock-option pricing. GPU technology keeps adding more programmability and parallelism to a core architecture that is ever evolving toward a general-purpose, CPU-like core.