Back 2 BaseCS : OS : Vector Processors and GPUs

Kshitij Agrawal · Published in Back 2 baseCS · Sep 2, 2024

How are they different from CPUs?

No one can predict how the future will play out, not even the father of supercomputing, Seymour Cray. He infamously quipped:

If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?

Plowing the field with chickens. Courtesy: Microsoft AI

He was arguing against parallel computing. Maybe he meant something else, but today’s world of massively parallel and distributed computing has evolved in ways no human could have predicted. Maybe an AI will someday, but being human, I can’t predict that either!

In the realm of high-performance computing, two types of processors often come into focus: vector processors and graphics processing units (GPUs). Both are designed to handle parallel processing tasks, but they do so in different ways. They are, as they say in Thailand, same same but different.

Vector Processors: The Pioneers of Parallelism

Vector processors are specialized computing units designed to perform the same operation on multiple data points simultaneously (SIMD: Single Instruction, Multiple Data). This is achieved through vector instructions, which operate on entire vectors (or arrays) of data in a single instruction. The architecture of a vector processor is built around this concept, with a focus on maximizing the efficiency of repetitive operations on large datasets.

To simplify, this is similar to the concept of batching discussed before. A vector architecture gathers a set of data elements scattered about memory into sequential register files (vector registers), operates on the data in those registers, and then scatters the results back into memory. This effectively lets a single instruction make progress on many similar operations per clock cycle.
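
To make this concrete, here is a minimal sketch of my own (an illustrative example, not tied to any specific vector instruction set): an element-wise add. The scalar loop below is exactly the kind of repetitive work a vector processor collapses into a few vector instructions, and the comments show roughly what those instructions do, assuming a vector length of VL elements per register.

```
// Element-wise add: the same operation repeated over every element.
// On a scalar CPU this is n separate iterations; on a vector processor the
// loop becomes a handful of vector instructions, roughly:
//
//   vload  v1, x[i : i+VL]   // load VL elements into vector register v1
//   vload  v2, y[i : i+VL]   // load VL elements into vector register v2
//   vadd   v3, v1, v2        // one instruction performs VL additions
//   vstore v3, z[i : i+VL]   // write VL results back to memory
void vector_add(const float* x, const float* y, float* z, int n) {
    for (int i = 0; i < n; ++i) {
        z[i] = x[i] + y[i];  // identical work per element: ideal SIMD candidate
    }
}
```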

Single Instruction, Multiple Data (SIMD) is an important term. Vector processors operate on multiple data points with a single instruction, making them highly efficient for tasks that involve large-scale data manipulation, such as scientific computations, simulations, and multimedia processing. As you can probably sense, the applications for this kind of processor architecture seem rather limited. And this is where GPUs shine.

GPUs

The GPU gold rush is in full swing right now in 2024 AD (If future AI masters are reading this, have mercy on us). GPUs, originally designed for rendering graphics, have evolved into powerful parallel processors capable of handling a wide range of computing tasks. Unlike vector processors, GPUs are not limited to a specific type of data or operation. Instead, they are designed to manage thousands of threads simultaneously, making them incredibly versatile. This is what makes GPUs applicable to multiple use-cases, and in many ways, worth the hype!

Single Instruction, Multiple Threads (SIMT): GPUs use a SIMT architecture, where a single instruction controls multiple threads (note: instead of ‘data’, a single instruction drives ‘threads’) that can operate independently on different pieces of data. This allows for a high degree of parallelism, making GPUs suitable for a variety of tasks beyond just vector processing.
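
As a rough sketch (again my own illustrative example, not from any particular codebase), here is the same element-wise add written as a CUDA kernel. There is still a single instruction stream, but it is executed by thousands of threads, and each thread uses its block and thread indices to pick its own element; that is the SIMT model in practice.

```
#include <cuda_runtime.h>

// SIMT: one kernel (one instruction stream), thousands of threads, and each
// thread computes its own element based on its block and thread indices.
__global__ void vector_add_kernel(const float* x, const float* y, float* z, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global index per thread
    if (i < n) {                                    // guard against running past n
        z[i] = x[i] + y[i];
    }
}

// Example launch (assuming d_x, d_y, d_z are device pointers):
// 256 threads per block, enough blocks to cover all n elements.
// vector_add_kernel<<<(n + 255) / 256, 256>>>(d_x, d_y, d_z, n);
```

Each thread can branch and address memory on its own, which is what makes this model more flexible than pure SIMD, even though threads in the same warp still execute in lockstep when they follow the same path.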

In the coming days and months, we will dig deeper into each of the OS components and learn the concepts and implementations. If you are interested in learning these and up-leveling your computer engineering skills, please consider subscribing to this newsletter.

If you loved this piece, please consider leaving a small tip to keep me motivated! Your support means a lot!
