What Is General Computing?

Craig Yamato
FermiHDI
Dec 22, 2022

I have often been asked to explain general computing, so I thought it might make a good post. This post is meant to give an extremely high-level conceptual overview. General and shared computing are exceptionally complex, and implementations vary widely. However, their basic concepts are very similar and intertwined.

In short, the idea behind general computing is to break every function down into its most common components and then, using instructions (programs), arrange those components to perform any task. General computing goes hand in hand with shared computing, which it enables. The reason for general computing and, in turn, shared computing is economics: the ability to mass-produce standardized hardware (CPU, memory, …) and to use those resources more readily and efficiently. The basic principle is steeped in Moore’s Law and the belief that hardware’s capabilities will always outpace the demands software places on them.

But what is general computing? General computing traces its roots back to the mid-1800s, to Charles Babbage’s Analytical Engine and Augusta Ada King, Countess of Lovelace, who developed the first “program” that did something other than complex mathematical calculations. In truth, these are little more than historical footnotes, along with the ancient Greek Antikythera mechanism and British intelligence’s WWII Colossus.

The true foundation of modern general computing, though, lies in the creation of the von Neumann architecture: the stored-program processor and the operating system. In the simplest terms, a stored-program processor has a small set of straightforward instructions, like add, subtract, multiply, and divide, along with the ability to read and write addressable memory locations, executing whatever program an operating system places in front of it.

A user loads a program’s instructions into the memory attached to the processor, with the first instruction in memory location 0, the next instruction in memory location 1, and so on. For example, an instruction could tell the CPU to fetch the value at memory location 1000, multiply it by the value at memory location 1001, and store the result in memory location 1002. The next instruction would then tell the computer to do something else with the value now sitting in memory location 1002.
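
To make that concrete, here is a minimal sketch in Python of a toy stored-program machine. The tuple encoding of instructions and the specific memory locations are invented purely for illustration; a real CPU uses binary opcodes, registers, and far more machinery.

```python
# A toy stored-program machine (illustrative only, not a real CPU).
# The program lives in the same memory as the data it operates on,
# and the machine simply walks through it one instruction at a time.

MEM_SIZE = 2048
memory = [0] * MEM_SIZE

# Data the program will work on.
memory[1000] = 6
memory[1001] = 7

# A tiny program stored starting at location 0. Each "instruction" is a
# tuple of (operation, source addresses..., destination address).
memory[0] = ("MUL", 1000, 1001, 1002)  # memory[1002] = memory[1000] * memory[1001]
memory[1] = ("ADD", 1002, 1000, 1003)  # memory[1003] = memory[1002] + memory[1000]
memory[2] = ("HALT",)

def run(memory):
    pc = 0  # program counter: the address of the next instruction
    while True:
        instruction = memory[pc]
        op = instruction[0]
        if op == "MUL":
            _, a, b, dest = instruction
            memory[dest] = memory[a] * memory[b]
        elif op == "ADD":
            _, a, b, dest = instruction
            memory[dest] = memory[a] + memory[b]
        elif op == "HALT":
            return
        pc += 1  # move on to the next instruction

run(memory)
print(memory[1002], memory[1003])  # 42 48
```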

Without getting into registers, aliases, and extensions, a modern x86-based CPU has only about 300 core instructions. I once saw an article that indicated that 98% of all user programs could be mapped to only about 60-ish of those instructions. So, as you can imagine, building something as complicated as Microsoft Office takes a huge number of instructions. So many, in fact, that no programmer could ever write a complex program at the low level a CPU works at; this is where things like compilers come in. They allow programmers to write programs in a more English-like syntax, which the compiler translates into the long sequence of low-level instructions the CPU actually executes.
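
As a rough illustration, here is a made-up sketch in Python of the translation step a compiler performs. It handles only a single statement of the form “c = a * b” and emits instructions in the toy tuple format used above; real compilers parse entire languages, optimize heavily, and emit binary machine code.

```python
# A toy "compiler" for one statement (illustrative only). It turns an
# English-like line such as "c = a * b" into a low-level instruction
# for the toy stored-program machine sketched earlier.

# Hypothetical memory layout chosen just for this example.
ADDRESSES = {"a": 1000, "b": 1001, "c": 1002}
OPS = {"+": "ADD", "-": "SUB", "*": "MUL", "/": "DIV"}

def compile_statement(statement):
    """Translate 'c = a * b' into [('MUL', 1000, 1001, 1002)]."""
    dest, expression = [part.strip() for part in statement.split("=")]
    left, operator, right = expression.split()
    return [(OPS[operator], ADDRESSES[left], ADDRESSES[right], ADDRESSES[dest])]

print(compile_statement("c = a * b"))
# [('MUL', 1000, 1001, 1002)]
```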

I also mentioned operating systems as a critical part of general and shared computing. Operating systems are relatively easy to understand conceptually, as they have two main functions: the first is to act as a program scheduler, and the other is to be a standard interface to the hardware for programs.

Two of the commands every stored-program processor has are “copy the data from this memory location to that memory location” and “jump to the instruction at this location.” Conceptually, an operating system tells the CPU to move five instructions of a program into five execution memory locations, then adds one more command in the following location that jumps back to the OS’s next instruction. The OS then interleaves programs so that the first five instructions of program A are followed by the first five instructions of program B, followed by the next five instructions of program A, and so forth.
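
Here is a minimal sketch in Python of that interleaving idea (purely illustrative; real schedulers rely on timer interrupts, priorities, and saved register state, none of which appears here). Two “programs” are just lists of instruction names, and the “OS” alternates between them in slices of five.

```python
# Round-robin interleaving of two programs in fixed slices (illustrative
# only). This is the scheduling idea that lets one CPU appear to run
# many programs at the same time.

from itertools import zip_longest

def slices(program, size=5):
    """Split a program into chunks of `size` instructions."""
    return [program[i:i + size] for i in range(0, len(program), size)]

def schedule(program_a, program_b, slice_size=5):
    """A slice of A, then a slice of B, then back to A, and so on."""
    interleaved = []
    for chunk_a, chunk_b in zip_longest(slices(program_a, slice_size),
                                        slices(program_b, slice_size),
                                        fillvalue=[]):
        interleaved.extend(chunk_a)
        interleaved.extend(chunk_b)
    return interleaved

program_a = [f"A{i}" for i in range(12)]  # 12 instructions of program A
program_b = [f"B{i}" for i in range(7)]   # 7 instructions of program B
print(schedule(program_a, program_b))
# ['A0'..'A4', 'B0'..'B4', 'A5'..'A9', 'B5', 'B6', 'A10', 'A11']
```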

The second thing an operating system does is provide an abstraction of the underlying hardware. Every piece of hardware connected to a computer can be different; video cards are a good example. So the operating system assigns a common memory location, and a standard way to use that location, that programs can count on. The operating system then translates whatever is written there into where and how the video card actually installed in the computer wants to receive it. This way, any program can run on any computer that runs that operating system and use any video card without knowing anything about it.
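
Here is a minimal sketch in Python of that abstraction idea (the drivers, commands, and vendor names are all made up). Programs call one standard interface, and the operating system picks whichever driver matches the installed card and translates the call into that card’s own commands.

```python
# Hardware abstraction in miniature (illustrative only; real driver
# models are far more involved). Programs only ever see the standard
# interface; the OS supplies the driver for whatever card is installed.

class VideoDriver:
    """The standard interface every program can count on."""
    def draw_pixel(self, x, y, color):
        raise NotImplementedError

class VendorADriver(VideoDriver):
    def draw_pixel(self, x, y, color):
        # Hypothetical: vendor A wants packed coordinates, then the color.
        print(f"vendor-A command: PIXEL {(x << 16) | y} {color}")

class VendorBDriver(VideoDriver):
    def draw_pixel(self, x, y, color):
        # Hypothetical: vendor B wants the color first, then coordinates.
        print(f"vendor-B command: PUT {color} @ {x},{y}")

def some_program(driver: VideoDriver):
    # The program uses only the standard interface and never knows
    # which card is actually installed.
    driver.draw_pixel(10, 20, "red")

# The OS chooses the driver based on the installed hardware.
some_program(VendorADriver())
some_program(VendorBDriver())
```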

So what about cloud computing? The cloud takes the operating system one step further, with a trusted third party running the operating system. Sometimes these operating systems even abstract the CPU and memory themselves; we call those hypervisors.

As you can see, general and shared computing are extremely flexible. This flexibility sits at the heart of everything: cloud, composable infrastructure, virtualization, and just about anything that starts with “software-defined.” But what happens when hardware’s capabilities are not enough to meet the demands software puts on them? Maybe a topic for another post.

I hope this gives you some idea about general and shared computing as a generic high-level concept. If this is something that you think I should write more about, please drop me a line.
