The anatomy of a computer (Part 2 of 4)

A more realistic, logistical view

Jack Holland
Understanding computer science
6 min read · Mar 10, 2014


This is an ongoing series. Please check out the collection for the rest of the articles.

The previous post presented a very clean, abstract anatomy. In this post, we’re going to look at some details that the basic model didn’t cover. Many of the components we’ll be discussing concern physical logistics rather than abstract theory; they concern the details rather than the big picture.

An important component of a computer is the motherboard, which is the circuit board that physically connects all other components:

A modern desktop motherboard

Don’t worry, I don’t expect or want you to memorize every labeled part here. Motherboards handle the gritty details of a computer, meaning that each model of computer has its own custom motherboard. The motherboard in your laptop is quite different from the one in your desktop, and if you buy a new desktop, its motherboard will probably differ from both.

Rather than memorize a ton of acronyms and initialisms, focus on the role that the motherboard plays: it physically links the CPU, memory unit, and all input and output devices together so that they can communicate. Connections between these components are called buses.

PCI Express bus card slots, an industry standard for personal computers

Buses were originally bundles of wires: a computer had one bundle to connect the CPU with the memory and others to connect peripheral I/O devices. Like most computer components, bus architecture has undergone rapid change since its inception, and modern buses are significantly more advanced than simple bundles of wires. For example, high-performance computing centers often use InfiniBand links, which use a number of sophisticated methods to transmit about 25 billion bits a second between devices.
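To put that number in more familiar units, here’s a quick conversion (8 bits to a byte); the figure is just the rough rate quoted above, not the spec of any particular InfiniBand product:

```python
bits_per_second = 25e9                  # ~25 billion bits per second
bytes_per_second = bits_per_second / 8  # 8 bits per byte
print(f"about {bytes_per_second / 1e9:.1f} gigabytes per second")
# prints: about 3.1 gigabytes per second
```

That’s roughly a full DVD’s worth of data every second or two.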

Another essential component of a computer is its auxiliary, or secondary, memory. This is a general term that encompasses hard drives, solid-state drives, flash memory drives, optical disc drives, and any other drives that retain their data even when the computer shuts off. This persistence is the key difference between auxiliary memory and the memory unit (also called main, or primary, memory): if your computer suddenly shuts down, everything in main memory is lost. Auxiliary memory, by contrast, is untouched, which is why it’s used to store data that needs to be kept more permanently.

If auxiliary memory can withstand power loss, then why not use it all the time and forget main memory? The answer is speed. Main memory is built so that it can be accessed extremely quickly, on the order of microseconds. The only way to achieve this speed is to build the memory out of components that require constant power. Memory that doesn’t require constant power usually takes orders of magnitude longer to access, on the order of milliseconds. So if we used auxiliary memory for everything, computers would be hundreds to thousands of times slower. Luckily, memory technology improves constantly, and some kinds of auxiliary memory, flash memory for instance, are approaching the access speeds needed to serve as main memory.
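To make that slowdown concrete, here’s a quick back-of-the-envelope calculation in Python. The access times are just the rough orders of magnitude from the paragraph above, not measurements of real hardware:

```python
main_memory_access = 1e-6  # main memory: on the order of microseconds
auxiliary_access = 1e-3    # auxiliary memory: on the order of milliseconds

slowdown = auxiliary_access / main_memory_access
print(f"Auxiliary-only computing would be roughly {slowdown:.0f}x slower")
# prints: Auxiliary-only computing would be roughly 1000x slower
```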

While we’re on the topic of speed, I should mention caching. When storing information, there is always a tradeoff between how much information can be stored and how quickly it can be accessed. If a memory device can store millions of pieces of data, retrieving any one of them will take longer than it would on an equivalent device that can store only thousands.

You can see this in the relationship between the CPU, main memory, and auxiliary memory. The CPU can store very little information (bytes) but can access it extremely quickly (on the order of nanoseconds). Main memory can store much more information (megabytes to gigabytes) but can’t access it nearly as fast (microseconds). Auxiliary memory can store an enormous amount of information (gigabytes to terabytes) but takes a long time to access it (milliseconds).
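Here’s that hierarchy as a little Python table. The capacities and access times are illustrative numbers chosen to match the orders of magnitude above, not the specs of any real machine:

```python
# name, capacity in bytes, access time in seconds (all illustrative)
hierarchy = [
    ("CPU registers",    256,         1e-9),  # bytes, nanoseconds
    ("main memory",      8 * 1024**3, 1e-6),  # gigabytes, microseconds
    ("auxiliary memory", 1024**4,     1e-3),  # a terabyte, milliseconds
]

for name, capacity, access_time in hierarchy:
    print(f"{name:<16} {capacity:>16,} bytes  {access_time:.0e} s per access")
```

Each step down the list stores vastly more information but takes roughly a thousand times longer to reach.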

I’m throwing a lot of numbers around, but I hope you can see how this relationship between storage size and access speed works. What might not be as clear is that the situation as described above would make for extremely slow computers. If exchanging data with main memory took microseconds each time, we’d never get anything done. Luckily, there’s a solution to this: caching.

A cache is a smaller, faster version of some memory unit that stores duplicates of part of that memory. So if some computer’s main memory can store a thousand units of information, its cache might store a hundred. Because the cache is smaller, information in it can be accessed more quickly. What goes in the cache? Recently used information. This means that if you accessed some data recently, it’s probably still in the cache. Instead of fetching the data from main memory, you can fetch it from the faster cache, thus speeding up your computations.
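Here’s a minimal sketch of that idea in Python, assuming a cache that keeps the most recently used items. The class and its names are made up for illustration; real hardware caches do this in circuitry, but the logic is the same:

```python
from collections import OrderedDict

class Cache:
    """A tiny most-recently-used cache sitting in front of slower memory."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # remembers the order items were used in

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as recently used
            return self.items[key]       # cache hit: fast
        return None                      # cache miss: caller must go
                                         # to slower main memory

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used
```

When the cache fills up, the item that hasn’t been touched for the longest time gets evicted, on the theory that recently used data is the most likely to be needed again.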

Caching is one of the most widespread ideas in hardware and software design because it allows you to cheat: you can store lots of information and, if you’ve used it recently, access it quickly. This method gets around the usual limitation that storing lots of information makes retrieving any of it time-consuming. As a technical note, caching doesn’t necessarily keep the most recently used items; it can keep the most frequently used items, items of a certain type, or items chosen by any other metric that helps keep the relevant data in faster memory.
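Software leans on the same trick constantly. Python’s standard library, for example, ships a ready-made least-recently-used cache you can attach to any function, so repeated calls with the same arguments skip the slow computation:

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # remember the 128 most recently used results
def fib(n):
    # Without the cache this recomputes the same values over and over;
    # with it, each fib(n) is computed only once.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # finishes instantly thanks to the cache
```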

In hardware design, main memory usually has multiple levels of caching, with each level a bit faster but able to hold less information. Auxiliary memory usually incorporates caching as well. In fact, any device whose information takes a while to access likely implements caching of some sort.

One thing CPUs and lava have in common is their proclivity to get really, really hot

On a different note, CPUs get hot. Really, really hot. Not as hot as lava, but hot enough to melt the sensitive electronics in and around them. Thus, CPUs need fans to constantly cool them. We probably won’t talk too much about fans since CPU cooling is more of an engineering issue than a computer science one, but it’s worth pointing out that when a processor computes billions of calculations a second, it generates significant heat. It turns out that even with fans and other cooling methods, heat dissipation is a major concern when designing processors and frequently dictates how compact they can be.

3D models of computer fans, which look pretty much like regular fans

The last technical point I want to make is a big one. Thus far, I’ve talked exclusively about the CPU, the central processing unit. What about the non-central ones? Many computers don’t have any non-central processors; the CPU handles everything. But high-powered computing often requires other kinds of processors, most notably the GPU, or graphics processing unit. GPUs specialize in processing the large arrays and matrices of data needed to render computer graphics, making them essential for playing many computer games. GPUs can also be used to accelerate many kinds of scientific computations. I’ll leave it at that for now, since going into the actual differences between a CPU and a GPU is quite beyond our current scope.
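Just to give a flavor of what “processing large arrays” means, here’s the shape of the work in plain Python. The numbers are made up, and a GPU doesn’t literally run Python; the point is that the same simple operation applies to every element independently, which is exactly what makes it easy to spread across thousands of GPU cores at once:

```python
# Brightening an image: one simple operation per pixel, with no pixel
# depending on any other. A CPU works through these one at a time;
# a GPU can process huge batches of them simultaneously.
pixels = [0.2, 0.5, 0.7, 0.1]  # stand-in for millions of pixel values

brightened = [round(min(p * 1.5, 1.0), 2) for p in pixels]
print(brightened)  # [0.3, 0.75, 1.0, 0.15]
```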

As a final note on other processing units, there are more obscure types, like PPUs (physics processing units), which can be used to perform specific tasks that CPUs aren’t particularly well suited to, like complex physics simulations. Another example is network processors, which are often used to speed up routers (the devices that direct traffic between networks, including the one that lets you connect wirelessly to the Internet).

In the next post, I’m going to elaborate on how CPUs work to give you a better picture of what hardware actually does.

Image credit: motherboard, bus slots, lava, fans
