The History and Evolution of Graphics Cards (GPUs)

Veersen Jadhav
6 min read · Sep 11, 2023


Introduction

Graphics cards, also known as video cards or display adapters, are essential components in modern computing systems, responsible for rendering images, videos, and animations on a computer screen. Over the years, graphics cards have evolved significantly, driven by the increasing demand for high-quality graphics in video games, professional applications, and multimedia content. This article provides an in-depth look at the history and evolution of graphics cards, focusing on the contributions of major players such as Intel, Nvidia, and AMD.


In recent years, GPUs have become a critical component in the field of artificial intelligence (AI), particularly in the areas of machine learning and deep learning. Their massive parallel processing capabilities and high-performance computing power enable researchers and developers to train complex neural networks and process large volumes of data more efficiently than traditional CPUs. As a result, GPUs have accelerated the development and deployment of AI applications across various industries, including healthcare, finance, automotive, and entertainment. Companies like Nvidia, AMD, and Intel continue to innovate and develop GPU architectures specifically designed for AI workloads, further driving advancements in AI research and the adoption of AI technologies in the industry.

The Early Days (1960s-1980s)

The history of graphics cards dates back to the 1960s when computer systems started using specialized hardware for graphics processing. Early graphics systems were primarily used for scientific and engineering applications, and they were often large, expensive, and limited in their capabilities.

In the 1970s, the development of microprocessors and integrated circuits led to the creation of more affordable and compact graphics systems. One of the first commercially available graphics cards was the IBM Monochrome Display Adapter (MDA), introduced in 1981. The MDA was designed for use with the IBM Personal Computer and provided basic text-only display capabilities.

IBM Monochrome Display Adapter (MDA) (Image source: minuszerodegrees)

The introduction of the IBM Color Graphics Adapter (CGA) in 1981 marked a significant milestone in the evolution of graphics cards. The CGA allowed for color display and limited graphics capabilities, making it suitable for early computer games and multimedia applications.

In 1984, IBM introduced the Enhanced Graphics Adapter (EGA), which offered improved color depth and resolution compared to the CGA. The EGA was followed by the Video Graphics Array (VGA) in 1987, which became the industry standard for many years. The VGA offered a 256-color mode at 320x200 and a 16-color mode at its maximum resolution of 640x480 pixels, enabling more detailed graphics and richer colors.

IBM Video Graphics Array (VGA) in 1987 (Image source: wikipedia)

The Rise of 3D Graphics (1990s)

The 1990s saw significant advancements in graphics technology, driven by the growing popularity of video games and the increasing demand for realistic 3D graphics. In 1991, S3 Graphics introduced the 86C911, one of the first widely adopted single-chip 2D graphics accelerators, which offloaded common drawing operations from the CPU. Dedicated 3D accelerators followed in the mid-1990s, marking the beginning of the era of dedicated graphics processing units (GPUs).

Nvidia, founded in 1993, quickly became a major player in the graphics card industry. Its first product, the NV1, was released in 1995 and featured integrated 2D and 3D graphics capabilities, as well as support for Sega Saturn game controllers.

Meanwhile, ATI Technologies (later acquired by AMD) introduced the Rage series in the mid-1990s, which offered 2D and 3D acceleration and video decoding capabilities. The Rage series was eventually succeeded by the Radeon series, which became ATI’s flagship GPU lineup.

In 1999, Nvidia released the GeForce 256, which it marketed as “the world’s first GPU” rather than as just another graphics accelerator. The GeForce 256 featured hardware transform and lighting (T&L), which offloaded complex per-vertex 3D calculations from the CPU to the GPU, resulting in significant performance improvements.

Nvidia GeForce 256 (Image source: wikipedia)
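
To give a sense of what “transform and lighting” actually involves, here is a small, purely illustrative sketch of the per-vertex math, written as a modern CUDA C++ kernel rather than the fixed-function hardware pipeline the GeForce 256 actually used; the vertex layout, matrix format, and light direction are assumptions made for this example only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One vertex: a position (x, y, z, w), a surface normal, and a brightness slot.
struct Vertex {
    float x, y, z, w;
    float nx, ny, nz;
    float brightness;
};

// Illustrative per-vertex "transform and lighting": multiply each position by a
// 4x4 row-major matrix and compute a simple diffuse term against one light
// direction. The GeForce 256 did this kind of work in fixed-function hardware;
// it is written here as a CUDA kernel purely to show the math involved.
__global__ void transformAndLight(Vertex* verts, const float* m, int n,
                                  float lx, float ly, float lz) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    Vertex v = verts[i];

    // Transform: p' = M * p
    float tx = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
    float ty = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
    float tz = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
    float tw = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;

    // Lighting: simple diffuse term, max(0, N . L)
    float diffuse = v.nx*lx + v.ny*ly + v.nz*lz;
    if (diffuse < 0.0f) diffuse = 0.0f;

    verts[i] = Vertex{tx, ty, tz, tw, v.nx, v.ny, v.nz, diffuse};
}

int main() {
    // One vertex and an identity matrix, just to exercise the kernel.
    Vertex* verts;
    float* matrix;
    cudaMallocManaged(&verts, sizeof(Vertex));
    cudaMallocManaged(&matrix, 16 * sizeof(float));

    *verts = Vertex{1.0f, 2.0f, 3.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f};
    for (int i = 0; i < 16; ++i) matrix[i] = (i % 5 == 0) ? 1.0f : 0.0f;

    transformAndLight<<<1, 1>>>(verts, matrix, 1, 0.0f, 1.0f, 0.0f);
    cudaDeviceSynchronize();

    printf("transformed position: (%f, %f, %f), brightness %f\n",
           verts->x, verts->y, verts->z, verts->brightness);
    cudaFree(verts);
    cudaFree(matrix);
    return 0;
}
```

Every vertex in a scene needs this kind of matrix multiply and lighting term on every frame, which is why moving the work off the CPU produced such a large speedup.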

The GPU Wars: Nvidia vs. AMD (2000s)

The 2000s were marked by intense competition between Nvidia and AMD (which acquired ATI Technologies in 2006) in the GPU market. Both companies released a series of increasingly powerful and feature-rich GPUs, pushing the boundaries of graphics technology and enabling more realistic and immersive gaming experiences.

In 2001, Nvidia introduced the GeForce 3 series, which featured programmable vertex and pixel shaders, allowing developers to create more complex and realistic graphics effects. The GeForce 3 series was followed by the GeForce 4 series in 2002, which offered improved performance and additional features, such as multiple monitor support.

ATI, for its part, released the Radeon 9700 Pro in 2002, the first GPU to support DirectX 9, a major update to Microsoft’s graphics API. The Radeon 9700 Pro featured advanced pixel and vertex shaders, as well as support for high dynamic range (HDR) rendering, which allowed for more realistic lighting and color effects.

Throughout the 2000s, Nvidia and AMD continued to release new GPU architectures and technologies, such as:

  • Nvidia’s GeForce 6 series (2004), which introduced the first GPUs to support Shader Model 3.0 and high dynamic range (HDR) rendering.
  • ATI’s Radeon X1000 series (2005), which featured improved performance, Shader Model 3.0 support, and advanced video decoding capabilities.
  • Nvidia’s GeForce 8 series (2006), which introduced the first GPUs to support DirectX 10 and unified shader architecture, allowing for more efficient and flexible graphics processing.
  • AMD’s Radeon HD 2000 series (2007), the first DirectX 10 Radeon generation released under AMD ownership, followed later in 2007 by the Radeon HD 3000 series, which added DirectX 10.1 support and improved power efficiency.

The Rise of General-Purpose GPU Computing (2010s)

In the 2010s, GPUs began to be used for more than just graphics processing, as researchers and developers started to harness their massive parallel processing capabilities for general-purpose computing tasks. This led to the development of GPU-accelerated applications in fields such as scientific simulation, machine learning, and data analytics.

Although the trend took off in the 2010s, the groundwork was laid in 2006, when Nvidia introduced CUDA, a parallel computing platform and programming model that allows developers to write code in C, C++, or Fortran and use CUDA-specific extensions to express parallelism and manage GPU resources. CUDA quickly gained popularity and became a key technology in the field of general-purpose GPU computing.
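
As a rough illustration of that programming model, here is a classic vector-addition sketch (not taken from any specific Nvidia sample): the __global__ qualifier and the <<<blocks, threads>>> launch syntax are examples of the CUDA-specific extensions mentioned above, and cudaMallocManaged is used only to keep the host code short.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements: the parallelism is expressed
// simply by giving every thread its own index.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // about one million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory keeps the example short; explicit
    // cudaMalloc/cudaMemcpy is the more traditional pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);   // CUDA kernel launch syntax
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);                  // expect 3.0
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}
```

Each of the roughly one million additions is handled by its own GPU thread, which is exactly the kind of fine-grained parallelism that makes GPUs attractive for general-purpose workloads.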

AMD responded with ATI Stream (later renamed AMD APP Acceleration) in 2007, which allowed developers to use Radeon GPUs for general-purpose computing tasks, initially through the Brook+ language and later through the OpenCL standard.

In 2012, Nvidia released the Kepler architecture, which featured improved performance, power efficiency, and support for GPU computing features, such as dynamic parallelism and Hyper-Q. The Kepler architecture was followed by the Maxwell (2014), Pascal (2016), and Turing (2018) architectures, each offering significant improvements in performance, power efficiency, and GPU computing capabilities.
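
As a rough sketch of what dynamic parallelism means in practice, the example below has a kernel launch another kernel directly from GPU code, without a round trip to the CPU. It assumes a GPU of compute capability 3.5 or newer and compilation with relocatable device code (for example, nvcc -rdc=true -arch=sm_70 dp.cu, where the file name and kernel names are made up for the illustration).

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Child kernel: each thread reports which parent thread launched it.
__global__ void childKernel(int parentThread) {
    printf("parent thread %d -> child thread %d\n", parentThread, threadIdx.x);
}

// Parent kernel: with dynamic parallelism, device code can launch further
// kernels itself, so the GPU can decide how much extra work to spawn.
__global__ void parentKernel() {
    childKernel<<<1, 4>>>(threadIdx.x);  // nested launch from the GPU
}

int main() {
    parentKernel<<<1, 2>>>();
    cudaDeviceSynchronize();  // waits for the parent grid and all nested child grids
    return 0;
}
```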

AMD, on the other hand, introduced the Graphics Core Next (GCN) architecture in 2011, which featured improved performance, power efficiency, and support for GPU computing features, such as asynchronous compute and hardware-based H.264 video encoding. The GCN architecture was followed by the RDNA (2019) and RDNA 2 (2020) architectures, which offered further improvements in performance and power efficiency.

Latest retail GPUs available on the market as of September 2023

  1. Intel Arc A770
  2. AMD Radeon RX 7900 XTX
  3. Nvidia GeForce RTX 4090

Intel Arc A770 (Image source: digitaltrends)
AMD Radeon RX 7900 XTX (Image source: yimg)
Nvidia GeForce RTX 4090 (Image source: trustedreviews)

Conclusion

The history of graphics cards is marked by continuous innovation and rapid advancements in technology. From the early days of text-only displays to the latest GPUs capable of rendering photorealistic graphics and accelerating complex computing tasks, graphics cards have come a long way. Today, GPUs are essential components in a wide range of applications, from gaming and multimedia to scientific research and artificial intelligence. As the demand for high-quality graphics and powerful computing capabilities continues to grow, it is likely that graphics cards will continue to evolve and play an increasingly important role in modern computing systems.
