Full Series: Society’s Greatest Invention: The Mechanics of Computer Science

By Shreyas K., 7th Grade

Shreyas Kambhampati
28 min read · Apr 24, 2020

Introduction:

Computers are today’s big thing. There are millions of computers everywhere, and we depend on them every day, whether it’s to get to work or to watch a movie; the possibilities are endless! Computers today are vital to our society. If all of the computers in our world were to turn off or malfunction for just a minute or two, chaos would more than likely take over and spread international panic.

The society that we’ve built is dependent on computers. For example, I can guarantee that you are reading this on some type of computing device, whether it’s a smartphone, laptop, tablet, etc. In this blog, I will be going over the basics of Computer Science. (No, this will not teach you how to program at all. Not yet at least…) This blog will cover the history, the basics, the parts of computing, and a lot more!

Part One: Early Computing

Computing may have looked different a few decades or centuries ago, but the need for computing has practically stayed the same throughout. The earliest recognized computing device was the abacus, invented around 2500 B.C.E. in Mesopotamia.

The abacus was a hand-operated device that helped people accurately add and subtract many numbers. Although calculators and computers are a lot more popular and more efficient to use today, the abacus remains in use throughout the world.

Over the next 4,000 years, humans developed all different types of computing devices, such as the Astrolabe, which helped ships find their latitude out at sea, and the Slide Rule, which helped with multiplication and division.

Before the 20th century, most people experienced computing through pre-computed tables assembled by “human computers” (men and women who were tasked with calculating problems were nicknamed “human computers”).

Militaries were among the first to apply computation to their complex problems. For example, one problem militaries faced was accurately firing artillery shells. In response, range tables were created, which let gunners look up the environmental conditions and the distance they wanted to fire; the table would then tell the gunner what angle to set the artillery cannons at. This proved to be a lot more accurate than before.

By the end of the 19th century, computers were mostly used for special-purpose tasks, with a few exceptions. By the end of the 20th century, computing had become more accessible to the masses and eventually evolved into what we have today.

Nintendo Entertainment System (1985) to Nintendo Switch (2017)
Apple II Computer (1977) to MacBook Pro (2019)

Part Two: Electronic Computing

One of the largest electro-mechanical computers built in the 20th century was the Harvard Mark I, one of the most efficient computers of the first half of the century. The Mark I was built by IBM, which was, and still is, a huge player in the computer business. The Mark I was completed in 1944 and used by the Allies in World War II.

(Now, all of this might sound pretty random but trust me, this will be important moving forward.)

The Harvard Mark I carried 765,000 components, 3,000,000 connections, and 500 miles (about 805 km) of wire. It was capable of addition, subtraction, multiplication, division, and more complex calculations, and it was a general-purpose computer.

Harvard Mark I

Fun Fact: One of the earliest uses for this technology was the Manhattan Project.

The brains of these electro-mechanical computers (such as the Harvard Mark I) are known as relays. Relays are electrically controlled mechanical switches.

In a relay, there is a control wire that determines whether a circuit is open or closed. The control wire connects to a coil of wire inside the relay. When current flows through the coil, an electromagnetic field is created, which, in turn, attracts a metal arm inside the relay, snapping it shut and completing the circuit. After this, the controlled circuit can connect to other circuits.

Even though this might sound efficient, it’s the opposite. The mechanical arm inside a relay has mass, and therefore cannot move instantly between the open and closed states.

For example, a good relay from the 1940s might have been capable of flicking back and forth around forty to fifty times per second. This might seem pretty fast, but it’s another story. It’s simply not fast enough to be useful for solving large and/or complex problems.

The Harvard Mark I could do three addition or subtraction problems every second. Multiplication took six seconds and division took fifteen seconds. Any more complex problem? Good luck with that, because those took more than a minute.

In addition to slow switching speeds, another major issue was wear and tear. Anything mechanical that moves will wear out over time, whether over a short period or a long one. Some parts might break entirely while others become unreliable.

The Harvard Mark I had roughly 3500 relays, so many, many problems were faced while the computer was in operation.

Unsurprisingly, that’s not all of the problems the engineers behind these behemoths were challenged with. The huge machines were also infamous for attracting insects. In September of 1947, operators of the Harvard Mark II (no, not the Mark I) pulled a dead moth out of one of the malfunctioning relays. From then on, whenever a malfunction occurred in the computer, it was simply referred to as a “bug”. That’s where the term “computer bug” came from.

At this point, it was clear that a faster, more reliable, and more efficient alternative to electromechanical relays was needed for computing to advance to the next level. Luckily, that much-needed alternative already existed.

In 1904, an English physicist named John Ambrose Fleming developed a new electrical component called a thermionic valve, which housed two electrodes in an airtight glass bulb. This was the first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons, a process known as thermionic emission. The other electrode could then attract these electrons to create a flow of current, but only if it was positively charged. If it had a negative or neutral charge, the electrons would no longer be attracted, and no current would flow.

Vacuum Tube

Later, a third “control” electrode was added between the other two. By applying a positive charge to the control electrode, the flow of electrons happens as before, but if the control electrode is given a negative charge, the flow is prevented. By changing the charge on the control wire, one can open or close the circuit.

The idea is similar to a relay, but importantly, vacuum tubes have no moving parts. In the 20th century, this was a recipe for success. As I stated earlier, the more mechanical moving parts an object has, the more it wears and tears over time. More importantly, vacuum tubes could switch back and forth thousands of times per second, compared to the forty to fifty times per second of a relay.

This triode vacuum tube (a thermionic valve with a control electrode) eventually became the basis for radios and long-distance telephones, just to name a few.

Later in the 20th century, to reduce cost and size, as well as to improve reliability and speed, a new and more efficient electronic switch would be needed. In 1947, Bell Laboratories scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and with it, a whole new era of computing began.

The physics behind transistors is pretty complex. No joke! It relies on quantum mechanics, and you know… I’m just a 7th grader… so we are just going over the basics.

A transistor is a switch that can be opened or closed by applying electrical power via a control wire. Transistors usually have two electrodes separated by a material that can sometimes conduct electricity and at other times resist it (a semiconductor).

The first transistor showed a huge amount of promise. It could switch between on and off over 10,000 times per second. That’s enormous for tech in the 20th century. Also, transistors, unlike relays and vacuum tubes, were made out of a solid material.

Transistors could also be made smaller than the smallest possible vacuum tubes. This led to way cheaper computers and technology.

Part Three: Boolean Logic & Logic Gates

In the last part, we talked about the evolution from large electro-mechanical computers that used relays and vacuum tubes to the updated and cost-efficient transistor computers that can turn the flow of electricity on and off millions of times per second. Even with just two states of electricity, computers are still able to accurately represent information. That representation is called binary, which means “two states”.

At first, you might think that two states of electricity are nowhere near enough to work with, and it’s true that it isn’t much, but two states are exactly what you need to represent true and false. On a computer, an “on” state (when electricity is flowing) represents true, and an “off” state (when electricity isn’t flowing) represents false. We can also write binary as 1s and 0s instead of the traditional true and false.

Binary 0s & 1s

Now, it’s possible to use transistors for more than just turning electrical currents on and off.

Some early computers used ternary. Ternary has, not one, not two, but three states! And if you thought that was big, there was also quinary, which had five freaking states! But… there was a problem. The more states there are, the harder it is to keep them all separate. Let’s say your smartphone battery (if you have one) is low on charge, or there is electrical noise from a microwave; the signals can get mixed up. Mixed up like noodles. And this problem only gets worse when transistors are changing states millions of times per second.

So, placing the two signals as far apart as possible, using just “on” and “off”, gives us the clearest signal and reduces these issues.

Another reason why computers use binary is that an entire branch of mathematics already existed that dealt with true and false values. It had already figured out all of the necessary rules and operations for handling them. This is called Boolean Algebra.

A man named George Boole (from whom Boolean got its name) was a self-taught English mathematician in the 1800s. He was interested in logical statements that went beyond Aristotle’s approach to logic, which was kind of, sort of grounded in philosophy. Boole’s approach allowed truth to be formally and systematically proven through logical equations.

In regular Algebra, the values of variables are numbers, and operations are things like addition and multiplication. But in Boolean Algebra, the values of variables are true and false, and the operations are logical. There are three fundamental operations in Boolean Algebra: a NOT, an AND, and an OR operation. These operations turn out to be useful (more than me.), so we are going to take a look at them individually.

This table does not show all of the operations.

A “NOT” takes a single boolean value (true or false) and negates it. It flips true to false, and false to true. The “AND” operation takes two inputs but still has only one output. In this case, the output is only true if both of the inputs are true. The last boolean operation is “OR”. This is where only one input has to be true to make the output true. The other input(s) can be false.

Another boolean operation (not a fundamental one) in computation is called an “Exclusive OR”, or “XOR” for short. “XOR” is like a regular “OR”, but with one main difference: if both inputs are true, the “XOR” is false. The only time an “XOR” is true is when exactly one input is true and the other is false. The “XOR” is really useful (again, more than me), so we’ll get more in-depth about it in a later part of the series.
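Just to make these concrete, here’s a quick sketch in Python. This isn’t the circuitry itself, just an illustration of how the four operations behave:

```python
# A tiny illustration of the Boolean operations described above.
def NOT(a):
    return not a

def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def XOR(a, b):
    return a != b   # true only when exactly one of the inputs is true

# Truth table for XOR: notice it is false when BOTH inputs are true.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", XOR(a, b))
```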

Part Four: Representing with Binary

In the previous part, I discussed how a single binary value can represent a number. Instead of true and false, we can call these two states 1 and 0, which is pretty useful, no joke. If we want to represent larger things, we just have to add more binary digits. This works exactly how decimal numbers function. With decimal numbers, there are “only” ten possible values for a single digit (0–9), so to get numbers larger than nine, we simply add more digits to the front.

Each binary digit is called a “bit”. A common grouping is 8-bit numbers, with the lowest number being 0, which requires all 8 bits to be set to zero, and the highest being 255, which requires all 8 bits to be set to one. This leaves us with 256 different values. You might have heard of 8-bit computers, audio, graphics, etc. Remember those video game consoles from the 80s? I’m talking the NES, the SEGA Master System, even the Game Boy. Those all used 8-bit graphics to play games like Super Mario Bros., Duck Hunt, Tetris, The Legend of Zelda, etc.
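If you want to double-check that 256 figure, here’s a quick Python sketch:

```python
# Checking the numbers above: 8 bits give 2^8 = 256 different values.
bits = 8
print(2 ** bits)             # 256 possible patterns
print(int("00000000", 2))    # all eight bits off -> 0
print(int("11111111", 2))    # all eight bits on  -> 255
print(format(255, "08b"))    # 255 written back out as 8 binary digits
```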

GameBoy (Left) NES (Top Right) SEGA Master System (Bottom Right)

The NES, Game Boy, and Master System all might have been different, but they all operated in 8-bit chunks. 256 different values isn’t a lot to work with, which meant that 8-bit games (Super Mario Bros., Tetris, The Legend of Zelda, etc.) were limited to only 256 different colors for their graphics.

Super Mario Bros. (1985) & New Super Mario Bros. U Deluxe (2019)

Ok, enough about retro gaming, let’s get back on track. 8 bits is so common in computing that it has its own special word: byte. A byte is 8 bits, so if you have ten bytes, you have 80 bits. You have probably heard of kilobytes, megabytes, gigabytes, terabytes, and so on. These designations indicate different scales of data: a kilobyte is 8,000 bits (a thousand bytes), a megabyte is 8e+6 bits (a million bytes), a gigabyte is 8e+9 bits (a billion bytes), and a terabyte is 8e+12 bits (a trillion bytes).

Back to some retro gaming here. You’ve probably heard of 32-bit or 64-bit (think Nintendo 64, PlayStation, or even Sega Saturn). What this means is that those devices operate in 32- or 64-bit chunks. Most modern devices also use 64-bit processors. This list includes the Nintendo Switch, PlayStation 4, Xbox One, and so on. The device you are reading this on is most likely a 32-bit or 64-bit device. Now, this may sound weird, but our devices also run better and faster because of the other processors and other tech inside of them. The whole thing doesn’t depend on the 32/64-bit processor alone.

The largest number you can express with 32 bits is just under 4.3 billion, which is thirty-two 1s in binary. This is why our photos (taken on a smartphone) are so smooth and pretty. The photos you see are composed of millions of colors, since our computers use 32-bit graphics.

But there’s a twist: the other side of numbers. Not everything’s a positive number, just like many college students’ bank accounts. So we need a way to represent negative numbers. Most computers nowadays use the first bit for the sign: 1 for negative and 0 for positive, and use the remaining 31 bits for the number itself. That gives us a range of approximately plus or minus two billion. While this is a pretty big range of numbers, it’s not enough for many tasks. For example, there are seven billion people on Earth, and the United States national debt is around 23 trillion dollars. (Because of coronavirus, that number is going to rise, and another thing, who the heck do we owe that money to!)

This is why 64-bit numbers are useful for tasks like the ones just mentioned. The largest value a 64-bit number can represent is around 9.2 quintillion! I know, it’s a lot. Hopefully, the national debt stays under that.
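If you’re curious where those figures come from, here’s the quick arithmetic worked out in Python:

```python
# The rough ranges mentioned above, worked out directly.
print(2 ** 31 - 1)   # 2147483647: about plus or minus two billion with a sign bit
print(2 ** 63 - 1)   # 9223372036854775807: roughly 9.2 quintillion
```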

Importantly, as we will discuss in a later part, computers must identify locations in their memory. These locations are known as addresses, which are used to store and retrieve values. As computer memory has grown to gigabytes and terabytes, it became necessary to have 64-bit addresses as well. In addition to positive and negative numbers, computers must handle numbers that are not whole numbers. You know them, you either hate them or love them: give it up for decimals and fractions! This includes numbers like 1.98, 1 6/7, 6.98, etc.

These numbers are called floating-point numbers, since the decimal point can float around in the number. Multiple different methods have been created to represent floating-point numbers. The most common is the IEEE 754 standard. This standard stores decimal values sort of like scientific notation. For example, a number like 625.9 can be written as 0.6259 × 10³. There are two important numbers here: 0.6259 is called the significand, and 3 is the exponent.

In a 32-bit floating-point number, the first bit is used for the sign of the number (positive or negative), the next 8 bits are used for storing the exponent, and the remaining 23 bits are used for the significand.
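As a rough illustration, here’s a small Python sketch using the standard struct module to peek at those three groups of bits. The value 625.9 is just the example from above; any number would work:

```python
import struct

# Peek at the 32 bits of a single-precision (IEEE 754) floating-point number.
value = 625.9
raw = struct.unpack(">I", struct.pack(">f", value))[0]
bits = format(raw, "032b")

print(bits[0])     # the 1 sign bit
print(bits[1:9])   # the 8 exponent bits
print(bits[9:])    # the 23 significand (fraction) bits
```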

Ok, we’ve talked a lot about numbers, but your name is probably composed of letters (wait, why did I say probably?!?). Anyway, it’s extremely useful for computers to represent text. However, rather than having a special form of storage for letters, computers simply use numbers to represent letters. Enter ASCII, the American Standard Code for Information Interchange. Invented in 1963, ASCII was a 7-bit code, enough to store 128 different values. With this, it could encode capital letters, lowercase letters, the digits 0–9, and symbols like @, #, !, etc. But this design had many problems. For example, it was only designed for the English language.
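Here’s a small Python illustration of the letters-as-numbers idea. Python’s built-in ord() and chr() convert between a character and its numeric code:

```python
# Letters are stored as numbers. ord() and chr() convert back and forth.
print(ord("A"))   # 65
print(ord("a"))   # 97
print(chr(64))    # '@'

codes = [ord(c) for c in "Hi!"]
print(codes)                             # [72, 105, 33]
print("".join(chr(n) for n in codes))    # back to 'Hi!'
```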

Enter Unicode. This was the format to rule them all across computing. Developed in 1992, it is a universal encoding system that did away with all of the faulty systems of the past.

Part Five: How Computers Calculate

Representing and storing numbers is an important function of a computer, but the real goal is computation: manipulating numbers in a structured and purposeful way, like adding two numbers together (1 + 1). These types of operations are handled by a computer’s Arithmetic and Logic Unit, a.k.a. the ALU. The ALU is the mathematical brain of a computer. When you understand the ALU’s design and function, you understand a fundamental part of modern computers. It is THE thing that does all of the computation in a computer. So in a way, everything uses the ALU.

Fun Fact: This is the 74181. Released in 1970, it was the first complete ALU to fit inside of a single chip! This may sound silly to us, but in 1970, this was a huge deal.

An ALU is simply two units combined into one: an arithmetic unit and a logic unit. Let’s start with the arithmetic unit. The arithmetic unit is responsible for handling all of the numerical operations in a computer, like addition and subtraction. It also does many other simple tasks, like adding one to a number, which is called an increment operation. Today, we are going to be talking about computers adding two numbers, but with a twist…

The simplest adding circuit takes two binary digits and adds them together. In much simpler terms, we have two inputs, A and B, and we get one output, which is the sum. There are only four possible input combinations, which are…

Here’s a flashback. In binary, “1” is true and “0” is false. The first three input combinations exactly match the XOR boolean logic gate. But the fourth input combination is a special case. The fourth input is 1 + 1 = 2, but there is no 2 in binary, so as we talked about in the last part, the sum is 0 with the 1 being carried to the next column. In binary, the sum is 10 (not ten).

The output of our XOR gate is only partially correct: we also need an extra output wire for the carried “1”. Conveniently, we have a solution for this. In Part Three, we talked about AND, and (no pun intended, if that even counts as a pun…) that is exactly what we need. This is our half-adder.
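Here’s a tiny sketch of that half adder in Python, using ^ for XOR and & for AND. It’s just an illustration of the logic, not real circuitry:

```python
# A half adder: XOR produces the sum bit, AND produces the carry bit.
def half_adder(a, b):
    total = a ^ b   # XOR
    carry = a & b   # AND
    return total, carry

print(half_adder(0, 0))   # (0, 0)
print(half_adder(1, 0))   # (1, 0)
print(half_adder(1, 1))   # (0, 1): sum is 0, carry the 1 to the next column
```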

If you want to add more than 1 + 1, we’re going to need a full adder. A full adder is more complicated, as it takes three bits as inputs: A, B, and C. So the maximum input is 1 + 1 + 1.

We can build a full adder using half adders. To do this, we use a half adder to add A + B, and then feed that result and input C into another half adder. Lastly, we need an OR gate to check if either one of the carry bits was true. And that’s how you do it! We made a full adder!

Full Adder Diagram
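And here’s the same idea carried one step further: a rough Python sketch of the full adder built from two half adders and an OR gate, matching the diagram above:

```python
# A full adder built from two half adders plus an OR gate.
def half_adder(a, b):              # same as before: XOR for sum, AND for carry
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)          # first half adder handles A + B
    s2, c2 = half_adder(s1, carry_in)  # second one adds in the carried bit
    return s2, c1 | c2                 # OR checks if either carry was true

print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 = 3, which is binary 11
```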

In general, an overflow happens when the result of an addition is too large to be represented by the number of bits you are using. This can cause errors and unexpected behavior. One of the most famous instances was the original Pac-Man arcade cabinet. It used 8 bits to keep track of what level you were on. This meant that if you passed level 255 (the largest number storable in 8 bits), level 256 would be unbeatable. The ALU would overflow and cause many annoyances. That’s why, eventually at least, we switched from 8-bit to 32-bit and 64-bit. (If you want a refresher on bits, I suggest going back to Part Four.)

Level 256

Using 16, 32, or even 64 bits brings the chances of overflowing down to near zero, but at the expense of having more gates in the circuit.
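Here’s a toy Python illustration of that wrap-around. It just models an 8-bit counter; it’s not the actual Pac-Man code:

```python
# A toy model of 8-bit overflow: 255 + 1 wraps back around to 0.
level = 255
level = (level + 1) % 256    # an 8-bit counter can't hold 256, so it wraps
print(level)                 # 0
```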

So now you know how your computer does all of its mathematical operations. We will be getting more in-depth about all of this stuff in later parts.

Part Six: RAM and Registers

In the last part, we talked about the ALUs of a computer and how they work, but there’s no point in calculating a result if you’re only going to throw it away. It would be useful to store that information, and maybe even run several operations in a row. That’s where computer memory comes into the spotlight.

If you’ve ever been in an RPG campaign (think Undertale, Xenoblade, Final Fantasy, Pokemon, Elder Scrolls, Fallout, Warcraft, League of Legends, etc.) on your video game console (Switch, Xbox, PlayStation, I don’t even know if there are any others), and your pet came by and tripped over the power cord, you lost all of your progress. Such a heartbreaking moment. Just imagine the agony of losing the progress you’ve made so far…

Of course Sans (from Undertale) would be happy.

The reason for your loss is that your console makes use of Random Access Memory, or RAM for short. And it’s not just your console that makes use of this; so do your smartphone, laptop, desktop, etc. RAM stores things like the game state — as long as the power stays on.

In the smartphone world, many big-name companies use RAM as a selling point for their devices. For example, the iPhone 11 Pro has 4GB of RAM, the Samsung Galaxy S20 has 8GB, and the Google Pixel 4 has 6GB.

Another type of memory is called Persistent Memory, which can survive completely without power, and it’s used for various things. We’ll talk about Persistent Memory in later parts.

All of the logic circuits we’ve discussed so far move in one direction — always flowing forward. But we can also create circuits that loop back on themselves. Let’s try taking an ordinary OR gate and feeding the output back into one of the inputs. First, let’s set both inputs to zero.

So 0 OR 0 is 0, and this circuit always outputs 0. If we were to flip input A to 1, then 1 OR 0 is 1, so now the output of the OR gate is 1.

A fraction of a second later, the output loops back into input B, so the OR gate sees that both inputs are now 1. 1 OR 1 is still 1, so there is no change in the output. And even if we flip input A back to 0, the looped-back 1 keeps the output at 1: the circuit “remembers” it.

You can make this type of circuit with other types of gates, such as AND gates. Let’s say we have our OR gate and an AND gate that both function similarly. We can “latch” them together to create an AND-OR Latch.

This is specifically called a “Latch” because it “latches onto” a particular value and stays that way. The action of putting data into memory is called writing, which you may or may not have heard of. Getting the data out is called reading.
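Here’s one rough way to model that latch in Python. It’s a sketch of the idea, not the real electronics; the set and reset names are just for illustration:

```python
# A toy model of an AND-OR latch: "set" turns the stored bit on, "reset"
# turns it off, and with neither input the bit remembers its old value.
def latch(stored, set_bit, reset_bit):
    return (set_bit or stored) and not reset_bit

bit = False
bit = latch(bit, set_bit=True, reset_bit=False)    # write a 1
print(bit)                                         # True: it "latches on"
bit = latch(bit, set_bit=False, reset_bit=False)   # no inputs: the 1 is remembered
print(bit)                                         # True
bit = latch(bit, set_bit=False, reset_bit=True)    # reset: write a 0
print(bit)                                         # False
```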

A group of latches operating together is called a register, which holds a single number; the number of bits in a register is called its width. Early computers used 8-bit registers, then 16-bit, then 32-bit, and today many computers have registers that are 64 bits wide.

Going back to RAM, it’s kind of like a human’s short-term memory or working memory in some ways. It keeps track of things going on right now. Today, you can buy RAM that has a gigabyte or more of storage. For a small reference, a gigabyte is more than a billion bytes!

Samsung 32GB DDR4 RAM

There are also many other types of RAM, such as DRAM, Flash Memory (no, not the weird SEGA Genesis game), and NVRAM. These all function similarly, but each has differences that make it unique, whether it’s the number of latches, logic gates, charge traps, etc. But fundamentally, all of these technologies store bits of information in massively nested matrices of memory cells. That’s a mouthful!

Part Seven: The Central Processing Unit

So, today we are going to be talking about processors and all of that good stuff! But, a quick warning: this is probably going to be the most complicated part I will publish. Once you know this stuff, you’re practically set for the rest of the series.

Anyway, just a small recap before we start. We went over ALUs, which take in binary numbers and perform calculations, and we’ve also covered two different types of computer memory: RAM and registers. Now it’s time to put together everything we’ve learned so far and finally learn about the brain of the computer.

Intel CPU (This is not how an actual CPU looks. This is promotional work that looks cool.)

This is known as the CPU, a.k.a. the Central Processing Unit. The CPU is in everything: your phone, laptop, desktop, console, practically everything. The CPU’s job is to execute programs. Programs like Gmail, Microsoft Office, Safari, or even your copy of Minecraft or Fortnite, are all made up of a series of individual operations, more formally known as instructions, since they “instruct” the computer on what to do. If these are mathematical operations, like adding or subtracting, the CPU will configure the ALU to do them. If it’s a memory instruction instead, the CPU will “talk” to the memory to read and write the desired values.

There are MANY parts in a CPU, so we are going to be laying them all out and covering them one by one.

First, we need memory for the CPU. This is where RAM comes in. To keep things simple, let’s assume that the RAM module has only 16 memory locations, each containing 8 bits. Let’s also add four 8-bit memory registers, labeled A, B, C, and D. These will be used to temporarily store and manipulate values. We already know that data can be stored in memory as binary values, and programs can be stored in memory too.

We can assign IDs to each instruction that is supported by our CPU. In our hypothetical example, we use the first four bits to store our operation code (third column from the right). The final four bits specify where the data for the operation should come from. We also need two more registers to complete our CPU.

First, we need a register to keep track of where we are in a program. For this, we use an instruction address register, which (in simple terms) stores the memory address of the current instruction. We need the other register to store the current instruction itself, which we can call the instruction register. When we first boot up our computer, all of the registers start at zero.

The first phase of a CPU’s operation is called the Fetch Phase. This is where we retrieve our first instruction from memory. The next phase is called the Decode Phase. This phase is how the CPU finds out what the instruction is so that it can execute it (run it, not kill it). The Execute Phase is where the CPU performs the instruction it fetched and decoded. The circuitry that steps through these phases is a control unit, which looks something like this:

In this diagram, we have all of our components. We have the RAM Module, the Registers, the Operation Codes, and everything in between. The Control Unit is comparable to an orchestra conductor. The conductor directs everything that’s going on. When we connect our ALU, things get a little more complicated.


Now, you might notice that at the bottom of the diagram, there is a small symbol inside of a circle. This is our clock trigger. As its name suggests, the clock triggers an electrical signal at an exact and regular interval. That signal is used by the Control Unit to advance the internal operation of the CPU.

Well, everything has its speed limit. The fastest a human can run is about 28 mph, and the same rule applies to the CPU as well; electricity takes some time to travel. The speed at which a CPU can carry out all of the phases we talked about is called the Clock Speed. This speed is measured in hertz — a unit of frequency. (And no, not the car rental service.) One hertz means one cycle per second.
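Before we move on, here’s a toy Python sketch that ties the fetch, decode, and execute phases together in one loop. The opcode numbers and the little three-instruction program are made up for this illustration; they are not the exact table from the diagrams above:

```python
# A toy fetch-decode-execute loop. The opcode numbers are made up for this
# sketch; a real instruction table would differ.
LOAD_A, LOAD_B, ADD, HALT = 0b0010, 0b0001, 0b1000, 0b1111

ram = [0] * 16
ram[0] = (LOAD_A << 4) | 14   # LOAD_A 14: copy memory cell 14 into register A
ram[1] = (LOAD_B << 4) | 15   # LOAD_B 15: copy memory cell 15 into register B
ram[2] = (ADD << 4)           # ADD: A = A + B
ram[3] = (HALT << 4)          # HALT: stop
ram[14], ram[15] = 3, 14      # the data our little program works on

a = b = 0
address = 0                            # the instruction address register
while True:
    instruction = ram[address]         # FETCH the instruction at that address
    opcode = instruction >> 4          # DECODE: top four bits are the opcode
    operand = instruction & 0b1111     #         bottom four bits are the address
    if opcode == LOAD_A:               # EXECUTE whichever instruction it was
        a = ram[operand]
    elif opcode == LOAD_B:
        b = ram[operand]
    elif opcode == ADD:
        a = (a + b) % 256              # stay inside 8 bits
    elif opcode == HALT:
        break
    address += 1                       # advance to the next instruction

print(a)   # 17
```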

Now, it’s time for some history. You thought you came here for computer science, but little did you know that it’s also time for everyone’s least favorite subject that we still ace: History! Ok, I’m getting off-topic.

Anyway, in 1971, the very first single-chip CPU, the Intel 4004, was released. Despite being the first of its kind, the 4004 had a mind-blowing clock speed of 740 kilohertz! Just for some context, that’s 740,000 cycles per second! You might think that’s fast, and for 1970s technology it was, but compared to our technology, it’s easily nothing. The phone or laptop or desktop that you’re reading this on right now no doubt runs at a few gigahertz — that’s billions of CPU cycles every… single… second! Next part, we are going to be beefing up our CPUs to make them even more powerful!

Part Eight: Instructions & Programs

So, in the last part, we talked about the brains of the computer: the CPU, which is, in my opinion, the most important part of the computer. The thing that makes a CPU powerful is that it is programmable. In simpler terms, the CPU is a piece of hardware that is controlled by easy-to-modify software. This part is also going to be shorter than the previous one.

Let’s quickly revisit the computer memory we built in the previous part.

In the computer memory, each address contained 8 bits of data. In our CPU, the first four bits specified the operation code, or opcode, and the second set of four bits specified an address or register.

In memory address zero, we have 00101110. Again, those first four bits are our opcode, which corresponds to the “LOAD_A” instruction. This instruction reads data from the location in memory specified by the last four bits of the instruction and saves it into Register A.

The instruction table we used for our CPU in the previous part only had four instructions, which is not much for a CPU to work with, so let’s add more.

Now we have a subtract function, which, like add, operates on two registers. We also have a new instruction called JUMP. As the name implies, this causes the program to “jump” to a new location. This is useful if we want to change the order of instructions or choose to skip certain ones. For example, a JUMP 0 would cause the program to move back to the beginning.

We also have a special version of JUMP named JUMP_NEGATIVE. This instruction only jumps the program if the ALU’s negative flag is set to true. The last instruction added is the HALT instruction, which stops the computer when the program is completed.

Our JUMP_NEGATIVE instruction is one example of a conditional jump, but computers have other conditionals, like JUMP IF EQUAL and JUMP IF GREATER.
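To see the new instructions working together, here’s a rough Python sketch. The countdown program and the simplified instruction format are my own illustration; the instruction names follow the text, but the binary encoding is left out to keep it short:

```python
# A sketch of the new instructions in action: count down from 3 until the
# result goes negative, then halt.
program = [
    ("LOAD_A", 14),        # A = memory[14]  (the starting value)
    ("LOAD_B", 15),        # B = memory[15]  (the amount to subtract)
    ("SUBTRACT", None),    # A = A - B, setting the negative flag if A < 0
    ("JUMP_NEGATIVE", 5),  # if the negative flag is set, jump to line 5
    ("JUMP", 2),           # otherwise loop back to the SUBTRACT
    ("HALT", None),
]
memory = {14: 3, 15: 1}

a = b = 0
negative_flag = False
pc = 0                                   # which line of the program we are on
while True:
    name, arg = program[pc]
    if name == "LOAD_A":
        a = memory[arg]
    elif name == "LOAD_B":
        b = memory[arg]
    elif name == "SUBTRACT":
        a -= b
        negative_flag = a < 0
    elif name == "JUMP":
        pc = arg
        continue
    elif name == "JUMP_NEGATIVE":
        if negative_flag:
            pc = arg
            continue
    elif name == "HALT":
        break
    pc += 1

print(a)   # -1: the loop stopped as soon as the value went negative
```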

Now, in modern-day computers, the instruction table has a plethora of different instructions that all serve different respective purposes within the CPU/Computer.

This is the power of software. Software allows the hardware to do a near-infinite number of things. Software also allows us to do things that hardware alone simply cannot do. Remember, our ALU didn’t have the functionality to divide numbers; it was the software that enabled the feature of dividing numbers.

Part Nine: Advanced CPU Designs

So, this is it. This is the last part of this section of the series. In the next section, we’re starting on programming and how it works with the computer to respond to your commands. (And yes, you will learn programming languages!)

Anyway, let’s go over the last parts. In the early days of electronic computing, processors were typically made faster by improving the switching time of the transistors inside the chip. But just making transistors faster and more efficient only went so far, so processor designers developed various techniques to boost performance, allowing not only simple instructions to run fast, but also much more elaborate operations to be performed.

In previous parts, we learned that computers had to go through long processes to divide numbers or complete complex problems. This approach uses up many clock cycles and can sometimes be unreliable, so most computer processors today have divide as one of their built-in instructions.

Modern computers now have special circuits for things like graphics operations, decoding compressed video, and encrypting files — all of which are operations that would take many clock cycles if done with standard operations. You may have heard of MMX, 3DNow!, or SSE. These are extensions to the instruction set: processors with these additional circuits can execute extra, elaborate instructions, for things like gaming or encryption.

OVERWATCH (Video Game)

These extensions to the instruction set have grown over a long period of time, and once people started to write programs that take advantage of them, it became hard to remove them. So instruction sets tend to keep getting larger and larger, keeping all of the older opcodes around for backwards compatibility.

For instance, the Nintendo Wii, released in 2006, had backwards compatibility with the Nintendo GameCube, released in 2001. This meant that Nintendo had to keep all of the GameCube’s opcodes around in order for GameCube games to work on the Wii console.

A modern computer processor has thousands of different instructions, which utilize the complex circuitry in the computer. But all of this leads to another problem: getting data in and out of the CPU.

The next part of the next section will be out soon.

I’ll leave you with that. Keep on the lookout for more in this multi-part series. Thanks for reading!

Check out my YouTube channel: https://www.youtube.com/channel/UCNid3JwA-S_2DG56mQYJHHA
