SECTIONS 1–3

Shreyas Kambhampati
52 min read · Jun 17, 2020


This goes through every section published so far… Enjoy!

SECTION ONE

Introduction:

Computers are today’s big thing. There are millions of computers everywhere, and we depend on them every day. Whether it’s going to work or watching a movie, the possibilities are endless! Computers today are vital to our society. If all of the computers in our world were to turn off or malfunction for just a minute or two, chaos would more than likely take over and spread international panic.

The society that we’ve built on computers is dependent on computers. For example, I can guarantee that you are reading this on some type of computing device, whether it’s a smartphone, laptop, tablet, etc. In this blog, I will be going over the basics of Computer Science. (No, this will not teach you how to program at all. Not yet at least…) This blog will cover the history, the basics, the parts of computing, and a lot more!

Part One: Early Computing

Computing may have looked different a few decades or centuries ago, but the need for computing has stayed practically the same throughout. The earliest recognized computing device was the ABACUS, invented around 2500 B.C.E. in Mesopotamia.

The ABACUS was a hand-operated device that helped people accurately add and subtract large numbers. Although calculators and computers are far more popular and efficient today, the ABACUS remains in use throughout the world.

Over the next 4000 years, humans developed all different types of computing devices, such as the Astrolabe, which helped ships find their latitude out at sea, and the Slide Rule, which helped with multiplication and division.

Before the 20th century, most people experienced computing through pre-computed tables assembled by “human computers” (men and women who were tasked with calculating problems by hand).

Militaries were among the first to apply these computational skills to their complex problems. For example, one problem militaries faced was accurately firing artillery shells. In response, range tables were created, which let gunners look up the environmental conditions and the distance they wanted to fire; the table would then tell the gunner what angle to set the artillery cannon at. This proved to be a lot more accurate than before.

By the end of the 19th century, computing devices were mostly used for special-purpose tasks, with a few exceptions. By the end of the 20th century, computing had become accessible to the masses and eventually evolved into what we have today.

Nintendo Entertainment System (1985) to Nintendo Switch (2017)

Apple II Computer (1977) to MacBook Pro (2019)

Part Two: Electronic Computing

One of the largest electro-mechanical computers built in the 20th century was the Harvard Mark I, one of the most capable machines of the first half of the century. The Mark I was built by IBM, which was, and still is, a huge player in the computer business. It was completed in 1944 and used by the Allies in World War II.

(Now, all of this might sound pretty random but trust me, this will be important moving forward.)

The Harvard Mark I carried 765,000 components, 3,000,000 connections, and 500 miles (about 800 km) of wire. It was capable of addition, subtraction, multiplication, division, and more complex calculations, and it was a general-purpose computer.

Harvard Mark I

Fun Fact: One of the earliest uses for this technology was the Manhattan Project.

The brains of these electro-mechanical computers (such as the Harvard Mark I) are known as relays. Relays are electrically controlled mechanical switches.

In a relay, there is a control wire that determines whether a circuit is opened or closed. The control wire connects to a coil of wire inside the relay. When the current flows through the coil, an electromagnetic field is created, which in turn, attracts a metal arm inside the relay that snaps it shut and completes the circuit. After this, the controlled circuit will be able to connect with other circuits.

Even though this might sound efficient, it isn’t. The mechanical arm inside a relay has mass, and therefore can’t move instantly between the opened and closed states.

For example, a good relay from the 1940s might have been capable of flicking back and forth around forty to fifty times per second. This might seem pretty fast, but it’s simply not fast enough to be useful for solving large and/or complex problems.

The Harvard Mark I could solve three addition/subtraction problems every second. Multiplication took six seconds and division took fifteen seconds. Any other complex problem, good luck with that, because those took more than a minute.

In addition to slow switching speeds, another major issue was wear and tear. Anything mechanical that moves will wear out over time, whether over a short period or a long one. Some parts might break entirely while others become unreliable.

The Harvard Mark I had roughly 3500 relays, so many, many problems were faced while the computer was in operation.

Unsurprisingly, that’s not all of the problems the engineers behind these behemoths were challenged with. The huge machines were also infamous for attracting insects. In September of 1947, operators of the Harvard Mark II (no, not the Mark I) pulled a dead moth out of one of the malfunctioning relays. From then on, whenever a malfunction occurred in the computer, it was simply referred to as a “bug”. That’s where the term “computer bug” came from.

At this point, it was clear that a faster, more reliable, and more efficient alternative to the electromechanical relay was needed for computing to advance to the next level. Luckily, that much-needed alternative already existed.

In 1904, an English physicist named John Ambrose Fleming developed a new electrical component called a thermionic valve, which housed two electrodes in an airtight glass bulb. This was the first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons. This process is known as thermionic emission. The other electrode could then attract these electrons to create a flow, but only if it was positively charged. If it had a negative or neutral charge, the electrons would no longer be attracted to it, and no current would flow.

Vacuum Tube

A few years later, a third “control” electrode was added between the other two. Applying a positive charge to the control electrode allowed electrons to flow as before, but giving it a negative charge prevented the flow. So by changing the charge on the control wire, one could open or close the circuit.

The idea is similar to a relay but, importantly, vacuum tubes have no moving parts. In the 20th century, this was a recipe for success. As I stated earlier, the more moving mechanical parts something has, the more it wears out over time. More importantly, vacuum tubes could switch back and forth thousands of times per second, compared to the forty or fifty times per second of a relay.

This triode vacuum tube (a thermionic valve with the added control electrode) eventually became the basis for radios and long-distance telephones, just to name a few.

Later on in the 20th century, to reduce cost and size, as well as to improve reliability and speed, a new and more efficient electronic component was needed. In 1947, Bell Laboratories scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and with it, a whole new era of computing began.

The physics behind transistors is pretty complex. No joke! It relies on quantum mechanics, and you know… I’m just a 7th grader… so we are just going over the basics.

A transistor is a switch that can be opened or closed by applying electrical power via a control wire. Transistors usually have two electrodes separated by a semiconductor, a material that can sometimes conduct electricity and sometimes resist it.

The first transistor showed a huge amount of promise: it could switch between on and off over 10,000 times per second, which was enormous for mid-20th-century tech. Also, transistors, unlike relays and vacuum tubes, were made out of solid material.

Transistors could also be made smaller than the smallest possible vacuum tubes. This led to way cheaper computers and technology.

Part Three: Boolean Logic & Logic Gates

In the last part, we talked about the evolution from large electro-mechanical computers that used relays and vacuum tubes to compact, cost-efficient transistor computers that can turn the flow of electricity on and off thousands of times per second. Even with just two states of electricity, we are still able to accurately represent information. This representation is called binary, which means “two states”.

At first, you might think that two states of electricity are nowhere near enough to work with, but believe it or not, two states are exactly what you need to represent true and false. On a computer, an “on” state (when electricity is flowing) represents true, and an “off” state (when electricity isn’t flowing) represents false. We can also write binary as 1s and 0s instead of the traditional true and false.

Binary 0s & 1s

Now, it’s possible to use transistors for more than just turning electrical currents on and off.

Some early computers used ternary, which has not one, not two, but three states! And if you thought that was big, there was also quinary, which had five freaking states! But… there was a problem. The more states there are, the harder it is to keep them all separate. If your smartphone battery (if you have one) is low on charge, or there is electrical noise from a microwave, the signals can get mixed up. Mixed up like noodles. And this problem only gets worse when transistors are changing states millions of times per second.

So, placing the two signals as far apart as possible, using just “on” and “off”, gives us the clearest signal and reduces these issues.

Another reason why computers use binary is that an entire branch of mathematics already existed that dealt with true and false values, and it had already figured out all of the necessary rules and operations for handling them. This is called Boolean Algebra.

A man named George Boole (from whom Boolean gets its name) was a self-taught English mathematician in the 1800s. He was interested in logical statements that went beyond Aristotle’s approach to logic, which was kind of, sort of grounded in philosophy. Boole’s approach allowed truth to be formally and systematically proven through logical equations.

In regular Algebra, the values of variables are numbers, and operations are things like addition and multiplication. But in Boolean Algebra, the values of variables are true and false, and the operations are logical. There are three fundamental operations in Boolean Algebra: a NOT, an AND, and an OR operation. These operations turn out to be useful (more than me.), so we are going to take a look at them individually.

This table does not show all of the operations.

A “NOT” takes a single boolean value (true or false) and negates it. It flips true to false, and false to true. The “AND” operation takes two inputs but still has only one output. In this case, the output is only true if both of the inputs are true. The last boolean operation is “OR”. This is where only one input has to be true to make the output true. The other input(s) can be false.

Another boolean operation (not a fundamental one) used in computation is called an “Exclusive OR”, or “XOR” for short. “XOR” is like a regular “OR” but with one main difference: if both inputs are true, the “XOR” is false. The only time an “XOR” is true is when exactly one input is true and the other is false. The “XOR” is really useful (again, more than me.) so we’ll get more in-depth about it in a later part of the series.
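If you want to play with these operations yourself, here is a tiny Python sketch I’ve added (my own illustration, not part of the original write-up) that prints the truth tables for NOT, AND, OR, and XOR using Python’s built-in operators:

```python
# Truth tables for the basic Boolean operations.
# Python's `not`, `and`, and `or` work directly on booleans,
# and `!=` behaves like XOR when both sides are booleans.
from itertools import product

print("A     NOT A")
for a in (False, True):
    print(f"{a!s:5} {(not a)!s}")

print()
print("A     B     A AND B  A OR B  A XOR B")
for a, b in product((False, True), repeat=2):
    print(f"{a!s:5} {b!s:5} {(a and b)!s:8} {(a or b)!s:7} {(a != b)!s}")
```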

Part Four: Representing with Binary

In the previous part, I discussed how a single binary value can represent true or false. Instead of true and false, we can call these two states 1 and 0, which is pretty useful, no joke. If we want to represent larger things, we just have to add more binary digits. This works exactly the way decimal numbers do. With decimal numbers, there are “only” ten possible values for a single digit (0–9), so to get numbers larger than nine, we simply add more digits to the front.

Each binary digit is called a “bit”. Computers commonly work with 8-bit numbers, where the lowest number is 0 (all 8 bits set to zero) and the highest is 255 (all 8 bits set to one). That gives us 256 different values. You might have heard of 8-bit computers, audio, graphics, etc. Remember those video game consoles from the 80s? I’m talking the NES, the SEGA Master System, even the Game Boy. Those all used 8-bit graphics to play games like Super Mario Bros., Duck Hunt, Tetris, The Legend of Zelda, etc.

GameBoy (Left) NES (Top Right) SEGA Master System (Bottom Right)
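As a quick sanity check on those numbers, here is a short Python snippet (my own example, not from the article) showing that 8 bits really do give you 256 values, from 00000000 to 11111111:

```python
# With 8 bits there are 2**8 = 256 possible values, from 0 to 255.
bits = 8
print(2 ** bits)            # 256 different values
print(format(0, "08b"))     # 00000000 -> the lowest 8-bit number (0)
print(format(255, "08b"))   # 11111111 -> the highest 8-bit number (255)
```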

The NES, Game Boy, and Master System might all have been different, but they all operated in 8-bit chunks. 256 different values isn’t much to work with, which meant that 8-bit games (Super Mario Bros., Tetris, The Legend of Zelda, etc.) were limited to only 256 different colors for their graphics.

Super Mario Bros. (1985) & New Super Mario Bros. U Deluxe (2019)

Ok, enough about retro gaming, let’s get back on track. 8 bits is so common in computing that it has its own special word: byte. A byte is 8 bits, so if you have ten bytes, you have 80 bits. You have probably heard of kilobytes, megabytes, gigabytes, terabytes, and so on. These designations indicate different scales of data: a kilobyte is about 8,000 bits (1,000 bytes), a megabyte is about 8 × 10⁶ bits, a gigabyte is about 8 × 10⁹ bits, and a terabyte is about 8 × 10¹² bits.

Back to some retro gaming here. You’ve probably heard of 32-bit or 64-bit systems (think Nintendo 64, PlayStation, or even Sega Saturn). What this means is that those devices operate in 32- or 64-bit chunks. Most modern devices use 64-bit processors, including the Nintendo Switch, PlayStation 4, Xbox One, and the list goes on. The device you are reading this on is most likely 32-bit or 64-bit too. Now, this may sound weird, but a device’s overall speed also depends on the other processors and technology inside it; the whole thing doesn’t rest on the 32/64-bit processor alone.

Anyway, the largest number you can express with 32 bits is about 4.3 billion, which is thirty-two 1s in binary. This is why the photos taken on a smartphone are so smooth and pretty: they are composed of millions of colors, since our computers use 32-bit graphics.

But there’s a twist: the other side of numbers. Not everything’s a positive number, just like many college students’ bank accounts, so we need a way to represent negative numbers. Most computers use the first bit for the sign: 1 for negative and 0 for positive, and use the remaining 31 bits for the number itself. That gives us a range of approximately plus or minus two billion. While this is a pretty big range of numbers, it’s not enough for many tasks. For example, there are over seven billion people on Earth, and the United States’ national debt is around 23 trillion dollars. (Because of coronavirus, that number is going to rise, and another thing, who the heck do we owe that money to!)

This is why 64-bit numbers are useful for tasks like the ones just mentioned. The largest number a 64-bit value can represent is around 9.2 quintillion! I know, it’s a lot. Hopefully, the national debt stays under that.
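Those ranges are easy to verify for yourself. Here is a quick Python calculation (my own, just for illustration) using the scheme described above, where one bit is reserved for the sign and the rest hold the number:

```python
# Largest values for the bit widths mentioned above.
print(2 ** 32 - 1)   # 4,294,967,295 -> the "4.3 billion" unsigned 32-bit maximum
print(2 ** 31 - 1)   # 2,147,483,647 -> roughly plus/minus 2 billion once a sign bit is reserved
print(2 ** 63 - 1)   # 9,223,372,036,854,775,807 -> about 9.2 quintillion with 64 bits
```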

Importantly, as we will discuss in a later part, computers must label locations in their memory, known as addresses, in order to store and retrieve values. As computer memory has grown into gigabytes and terabytes, 64-bit addresses have become necessary as well. In addition to positive and negative numbers, computers must handle numbers that are not whole numbers. You know them, you either hate them or love them: give it up for decimals and fractions! This includes 1.98, 1 6/7, 6.98, etc.

These numbers are called floating-point numbers since the decimal point can “float” around in the number. Multiple methods have been created to represent floating-point numbers; the most common is the IEEE 754 standard. This standard stores decimal values sort of like scientific notation. For example, a number like 625.9 can be written as 0.6259 × 10³. There are two important numbers here: 0.6259 is called the significand, and 3 is the exponent.

In a 32-bit floating-point number, the first bit is used for the sign of the number (positive or negative), the next 8 bits are used for storing the exponent, and the remaining 23 bits are used for the significand.
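If you are curious what those three fields actually look like, here is a small Python sketch (my own addition) that uses the standard struct module to grab the raw 32 bits of a float and split them into sign, exponent, and significand:

```python
import struct

def float_bits(x: float) -> str:
    """Return the 32 raw bits of x when stored as an IEEE 754 single-precision float."""
    (packed,) = struct.unpack(">I", struct.pack(">f", x))
    return format(packed, "032b")

bits = float_bits(625.9)
print("sign:       ", bits[0])     # 1 bit
print("exponent:   ", bits[1:9])   # 8 bits (stored with a bias)
print("significand:", bits[9:])    # 23 bits
```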

Ok, we’ve talked a lot about numbers, but your name is probably composed of letters. (Wait, why did I say probably?!?) Anyway, it’s extremely useful for computers to represent text. However, rather than having a special form of storage for letters, computers simply use numbers to represent them. Enter ASCII, the American Standard Code for Information Interchange. Invented in 1963, ASCII was a 7-bit code, enough to store 128 different values. With this, it could encode capital letters, lowercase letters, the digits 0–9, and symbols like @, #, and !. But this design had a big problem: it was only designed for the English language.
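Python makes it easy to see this number-for-letter mapping. A tiny example of my own, using the built-in ord and chr functions:

```python
# Every character is stored as a number; for plain English text these are the ASCII values.
for ch in "Hi!":
    print(ch, "->", ord(ch))   # H -> 72, i -> 105, ! -> 33
print(chr(65))                 # and going the other way, 65 -> 'A'
```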

Then came Unicode, the format to rule them all. Developed in 1992, it is a universal encoding system that did away with all of the incompatible schemes of the past.

Part Five: How Computers Calculate

Representing and storing numbers is an important function of a computer, but the real goal is computation, or manipulating numbers in a structured and purposeful way, like adding two numbers together (1 + 1). These types of operations are handled by a computer’s Arithmetic and Logic Unit, a.k.a. the ALU. The ALU is the mathematical brain of a computer. When you understand the ALU’s design and function, you understand a fundamental part of modern computers. It is THE thing that does all of the computation in a computer, so in a way, everything uses the ALU.

Fun Fact: The 74181, released in 1970, was the first complete ALU to fit inside of one single chip! This may sound trivial to us, but in 1970, it was a huge deal.

An ALU is simply two units combined into one: an arithmetic unit and a logic unit. Let’s start with the arithmetic unit. The arithmetic unit is responsible for handling all of the numerical operations in a computer, like addition and subtraction. It also does many other simple tasks, like adding one to a number, which is called an increment operation. Today, we are going to be talking about computers adding two numbers, but with a twist…

The simplest adding circuit takes two binary digits and adds them together. In much simpler terms, we have two inputs, A and B, and we get one output, which is the sum. There are only four possible input combinations, which are…

Here’s a flashback: in binary, “1” is true and “0” is false. The outputs for the first three input combinations exactly match the XOR boolean logic gate. But the fourth input combination is a special case. 1 + 1 = 2, and there is no “2” in binary, so as we talked about in the last part, the sum is 0 with a 1 carried to the next column; in binary, the sum is 10 (one-zero, not ten).

The XOR gate gives us the correct sum output, but we need an extra output wire for the carried “1”. Conveniently, we have a solution for this. In part 3, we talked about AND, and (no pun intended, if that even counts as a pun…) that is exactly what we need. This is our half-adder.
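Here is a quick Python sketch of that half adder (my own toy version, not real hardware): an XOR produces the sum bit and an AND produces the carry bit.

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two single bits; return (sum_bit, carry_bit)."""
    return a ^ b, a & b   # XOR gives the sum, AND gives the carry

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "->", half_adder(a, b))   # 1 + 1 -> (0, 1), i.e. binary 10
```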

If we want to add more than 1 + 1, we’re going to need a full adder. A full adder is more complicated, as it takes three bits as inputs: A, B, and C. So the maximum input is 1 + 1 + 1.

We can build a full adder using half adders. To do this, we use one half adder to add A + B, and then feed that result and input C into another half adder. Lastly, we need an OR gate to check whether either one of the carry bits was true. And that’s how you do it! We made a full adder!

Full Adder Diagram
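Following that same wiring, here is a short Python sketch (again, my own illustration) that builds the full adder out of two half adders and an OR:

```python
def half_adder(a: int, b: int) -> tuple[int, int]:
    return a ^ b, a & b

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Add three bits using two half adders and an OR gate; return (sum_bit, carry_out)."""
    s1, c1 = half_adder(a, b)           # first half adder: A + B
    s2, c2 = half_adder(s1, carry_in)   # second half adder: partial sum + carry in
    return s2, c1 | c2                  # OR the two carry bits

print(full_adder(1, 1, 1))   # (1, 1) -> sum 1 with a carry out, i.e. binary 11 = 3
```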

In general, an overflow occurs when the result of an addition is too large to be represented by the number of bits you are using. This can cause errors and unexpected behavior. One of the most famous instances was the original Pac-Man arcade cabinet. It used 8 bits to keep track of what level you were on, which meant that if you passed level 255, the largest number storable in 8 bits, level 256 would be unbeatable. The ALU would overflow and cause many annoyances. That’s part of why, eventually, we moved from 8-bit to 32-bit and 64-bit systems. (If you want a refresher on bits, I suggest going back to part four.)

Level 256
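You can mimic that Pac-Man style wraparound in Python by keeping only the low 8 bits of a result (a hedged sketch of the idea, not the actual arcade code):

```python
def add_8bit(a: int, b: int) -> int:
    """Add two numbers but keep only the low 8 bits, like an 8-bit level counter would."""
    return (a + b) & 0xFF   # 0xFF masks the result down to 8 bits

print(add_8bit(254, 1))   # 255 -> still fits
print(add_8bit(255, 1))   # 0   -> the counter overflows and wraps back around
```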

Using 16, 32, or even 64 bits brings the chance of overflowing down to near zero, but at the expense of needing more gates in the circuit.

So now you know how your computer does all of its mathematical operations. We will get more in-depth about all of this stuff in later parts.

Part Six: RAM and Registers

In the last part, we talked about the ALUs of a computer and how they work, but there’s no point in calculating a result if you’re only going to throw it away. It would be useful to store that information, and maybe even run several operations in a row. That’s where computer memory comes into the spotlight.

Imagine you’re deep into an RPG campaign (think Undertale, Xenoblade, Final Fantasy, Pokemon, Elder Scrolls, Fallout, Warcraft, League of Legends, etc.) on your video game console (Switch, Xbox, PlayStation, I don’t even know if there are any others), and your pet comes by, trips on the power cord, and you lose all of your progress. Such a heartbreaking moment. Just imagine the agony of losing the progress you’ve made so far…

Of course Sans (from Undertale) would be happy.

The reason for that loss is that your console makes use of Random Access Memory, or RAM for short. And it’s not just your console that uses it; so do your smartphone, laptop, desktop, etc. The RAM stores things like the game state — as long as the power stays on.

In the smartphone world, many big-name companies use RAM as a selling point for their devices. For example, the iPhone 11 Pro has 4 GB of RAM, the Samsung Galaxy S20 has 8 GB, and the Google Pixel 4 has 6 GB.

Another type of memory is called Persistent Memory, which can survive completely without power, and it’s used for various things. We’ll talk about Persistent Memory in later parts.

All of the logic circuits we’ve discussed so far move in one direction, always flowing forward. But we can also create circuits that loop back on themselves. Let’s try taking an ordinary OR gate and feeding its output back into one of its inputs. First, let’s set both inputs to zero.

So 0 OR 0 is 0, and this circuit outputs 0. If we then flip input A to 1, 1 OR 0 is 1, so now the output of the OR gate is 1.

A fraction of a second later, the output is looped back into input B, so the OR gate sees that both inputs are now 1. 1 OR 1 is still 1, so there is no change in the output. Even if we flip input A back to 0, the output stays at 1: the circuit has recorded a 1, although it can never go back to 0.

You can make this type of circuit with other gates too, such as an AND gate, which records a 0 instead. By combining the two ideas, we can “latch” them together to create an AND-OR Latch, which has a “set” input to turn the stored bit on and a “reset” input to turn it off.

This is specifically called a “latch” because it “latches onto” a particular value and holds it. The action of putting data into memory is called writing (which you may or may not have heard of), and getting the data out is called reading.
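Here is a rough Python simulation of that AND-OR latch idea (my own simplified model, not a circuit diagram): a “set” input turns the stored bit on, a “reset” input turns it off, and otherwise the bit holds its value because the output is fed back in.

```python
class AndOrLatch:
    """One bit of memory: output = (set OR previous output) AND (NOT reset)."""

    def __init__(self) -> None:
        self.output = 0

    def update(self, set_bit: int, reset_bit: int) -> int:
        self.output = (set_bit | self.output) & (1 - reset_bit)
        return self.output

latch = AndOrLatch()
print(latch.update(1, 0))   # set   -> 1 (we "wrote" a 1)
print(latch.update(0, 0))   # hold  -> still 1 (this is the memory part)
print(latch.update(0, 1))   # reset -> 0 (we "wrote" a 0)
```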

A group of latches operating together is called a register, which holds a single number; the number of bits in a register is called its width. Early computers used 8-bit registers, then 16, then 32, and today many computers have registers that are 64 bits wide.

Going back to RAM, it’s kind of like a human’s short-term or working memory in some ways: it keeps track of things going on right now. Today, you can buy RAM modules with a gigabyte or more of storage. For a small reference, a gigabyte is about a billion bytes!

Samsung 32GB DDR4 RAM

There are also many other types of memory, such as DRAM, Flash memory (no, not the weird SEGA Genesis game), and NVRAM. These all function similarly, but each has differences that make it unique, whether it’s the use of latches, logic gates, charge traps, etc. But fundamentally, all of these technologies store bits of information in massively nested matrices of memory cells. That’s a mouthful!

Part Seven: The Central Processing Unit

So, today we are going to be talking about processors and all of that good stuff! But a quick warning: this is probably going to be the most complicated part I will publish. Once you know this stuff, you’re practically set for the rest of the series.

Anyway, just a small recap before we start. We went over ALUs, which take in binary numbers and perform calculations, and we’ve also covered two different types of computer memory: RAM and registers. Now it’s time for us to put together everything we’ve learned so far and finally learn about the brain of the computer.

Intel CPU (This is not how an actual CPU looks. This is promotional work that looks cool.)

This is known as the CPU, a.k.a. the Central Processing Unit. The CPU is in everything: your phone, laptop, desktop, console, practically everything. The CPU’s job is to execute programs. Programs like Gmail, Microsoft Office, Safari, or even your copy of Minecraft or Fortnite are all made up of a series of individual operations, more formally known as instructions, since they “instruct” the computer what to do. If these are mathematical operations like adding or subtracting, the CPU will configure the ALU to do the math. Suppose it’s a memory instruction. In this case, the CPU will “talk” to the memory to read and write the desired values.

There are MANY parts in a CPU, so we are going to be laying them all out and covering them one by one.

First, we need memory for the CPU. This is where RAM comes in. To keep things simple, let’s assume that the RAM module has only 16 memory locations, each containing 8 bits. Let’s also add four 8-bit registers, labeled A, B, C, and D. These will be used to temporarily store and manipulate values. We already know that data can be stored in memory as binary values, and programs can be stored in memory too.

We can assign IDs to each instruction that is supported by our CPU. In our hypothetical example, we use the first four bits to store our operation code (third column from the right). The final four bits specify where the data for the operation should come from. We also need two more registers to complete our CPU.

First, we need a register to keep track of where we are in a program. For this, we have to use an instruction address register, which (in simple terms) stores the memory address of the current instruction. We need the other register to store the current instruction, which we can call the instruction register. When we first boot up our computer, the registers all start at zero.

The first phase of a CPU’s operation is called the Fetch Phase. This is where the CPU retrieves its next instruction from memory. The next phase is called the Decode Phase. This is how the CPU figures out what the instruction is so that it can execute it (run it, not kill it). The Execute Phase is where the CPU actually performs the instruction it was given. The circuitry that steps through these phases is called the control unit, which looks something like this:

In this diagram, we have all of our components. We have the RAM Module, the Registers, the Operation Codes, and everything in between. The Control Unit is comparable to an orchestra conductor. The conductor directs everything that’s going on. When we connect our ALU, things get a little more complicated.
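In very rough Python-flavored pseudocode (a sketch I’ve added; the RAM contents and the 4-bit split are the hypothetical ones from our toy CPU), the cycle the control unit keeps repeating looks something like this:

```python
# A bare-bones sketch of the fetch-decode-execute cycle for our toy 8-bit instructions.
def run(ram: list[int]) -> None:
    instruction_address = 0                     # the instruction address register
    while instruction_address < len(ram):
        instruction = ram[instruction_address]  # FETCH: copy it into the instruction register
        opcode = instruction >> 4               # DECODE: the top 4 bits say what to do
        operand = instruction & 0b1111          #         the bottom 4 bits say what to do it to
        print(f"execute opcode {opcode:04b} with operand {operand:04b}")  # EXECUTE (just printed here)
        instruction_address += 1                # move on to the next instruction

run([0b00101110, 0b00011111])   # two made-up instructions sitting in RAM
```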


Now, you might notice that at the bottom of the diagram, there is a small symbol inside of a circle. This is our clock trigger. As its name suggests, the clock triggers an electrical signal at an exact and regular interval. That signal is used by the Control Unit to advance the internal operation of the CPU.

Well, everything has a speed limit. The fastest a human can run is about 28 mph, and the same rule applies to the CPU as well; even electricity takes some time to travel. The speed at which a CPU can carry out all of the phases we talked about is called the Clock Speed. This speed is measured in hertz — a unit of frequency. (And no, not the car rental service.) One hertz means one cycle per second.

Now, it’s time for some history. You thought you came here for computer science, but little did you know that it’s also time for everyone’s least favorite subject that we still ace: history! Ok, I’m getting off-topic.

Anyway, the very first single-chip CPU was the Intel 4004, released in 1971. Despite being the first of its kind, the 4004 had a mind-blowing clock speed of 740 kilohertz! Just for some context, that’s 740,000 cycles per second! For 1970s technology, that’s fast, but compared to our technology, it’s easily nothing. The phone or laptop or desktop that you’re reading this on right now no doubt runs at a few gigahertz — that’s billions of CPU cycles every… single… second! Next part, we are going to be beefing up our CPUs to make them even more powerful!

Part Eight: Instructions & Programs

So, in the last part, we talked about the brain of the computer, the CPU, which is, in my opinion, the most important part of the computer. The thing that makes a CPU powerful is that it is programmable. In simpler terms, the CPU is a piece of hardware that is controlled by easy-to-modify software. This part is also going to be shorter than the previous one.

Let’s quickly revisit the computer memory we built in the previous part.

In the computer memory, each address contained 8 bits of data. In our CPU, the first four bits specified the operation code, or opcode, and the second set of four bits specified an address or register.

In memory address zero, we have 00101110. Again, those first four bits are our opcode, which corresponds to the “LOAD_A” instruction. This instruction reads data from the location of memory specified in the last four bits of the instruction and saves it into Register A.

The instruction table we used for our CPU in the previous part only had four instructions, which isn’t enough to do anything very interesting, so let’s add more.

Now we have a subtract function, which, like add, operates on two registers. We also have a new instruction called JUMP. As the name implies, this causes the program to “jump” to a new location. This is useful if we want to change the order of instructions or choose to skip certain ones. For example, a JUMP 0 would cause the program to move back to the beginning.

We also have a special version of JUMP named JUMP_NEGATIVE. This instruction only jumps the program if the ALU’s negative flag is set to true. The last instruction added is the HALT instruction, which stops the computer when the program is completed.

Our JUMP_NEGATIVE instruction is one example of a conditional jump, but computers have other conditionals like JUMP IF EQUAL and JUMP IF GREATER.
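To make this concrete, here is a toy simulator I’ve put together (the encoding is entirely made up for readability, and LOAD_B is my own hypothetical addition) that counts a value down using SUB, JUMP, JUMP_NEGATIVE, and HALT:

```python
# A toy CPU with registers A and B, a negative flag, and a few of the
# instructions described above. Each instruction is a (mnemonic, argument) pair
# instead of raw binary, purely to keep the example readable.
def run(program, memory):
    reg = {"A": 0, "B": 0}
    negative_flag = False
    pc = 0                                  # instruction address register
    while True:
        op, arg = program[pc]               # fetch + decode
        pc += 1
        if op == "LOAD_A":
            reg["A"] = memory[arg]
        elif op == "LOAD_B":
            reg["B"] = memory[arg]
        elif op == "SUB":                   # A = A - B, and update the negative flag
            reg["A"] -= reg["B"]
            negative_flag = reg["A"] < 0
        elif op == "JUMP_NEGATIVE":
            if negative_flag:
                pc = arg
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return reg["A"]

memory = [3, 1]                 # address 0 holds 3, address 1 holds 1
program = [
    ("LOAD_A", 0),              # A = 3
    ("LOAD_B", 1),              # B = 1
    ("SUB", None),              # instruction 2: A = A - B
    ("JUMP_NEGATIVE", 5),       # stop looping once A drops below zero
    ("JUMP", 2),                # otherwise go back and subtract again
    ("HALT", None),
]
print(run(program, memory))     # -1: the loop ran until A went just below zero
```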

Now, in modern-day computers, the instruction table has a plethora of different instructions that all serve different respective purposes within the CPU/Computer.

This is all the power of software. Software allows the hardware to do a practically infinite number of things, and it also allows us to do things that the hardware alone simply cannot. Remember, our ALU alone didn’t have the functionality to divide numbers; it was software that enabled division.

Part Nine: Advanced CPU Designs

So, this is it. This is the last part of this section of the series. In the next section, we’re starting on programming and how it works with the computer to respond to your commands. (And yes, you will learn programming languages!)

Anyway, let’s go over the last parts. In the early days of electronic computing, processors were typically made faster by improving the switching time of the transistors inside the chip. But just making transistors faster and more efficient only went so far, so processor designers developed various techniques to boost performance, allowing not only simple instructions to run fast but also much more elaborate operations to be performed.

In previous parts, we learned that computers had to go through long sequences of instructions to divide numbers or solve complex problems. This approach uses up many clock cycles and can sometimes be unreliable, so most computer processors today have divide built in as one of their instructions.

Modern computers now have special circuits for things like graphics operations, decoding compressed video, and encrypting files — all of which are operations that would take many clock cycles if done with standard instructions. You may have heard of instruction set extensions such as MMX, 3DNow!, or SSE. Processors with these extensions have additional circuits that allow them to execute extra, more elaborate instructions, for things like gaming or encryption.

OVERWATCH (Video Game)

These extensions to the instruction set have grown over a long period of time, and once people start writing programs that take advantage of them, it’s hard to remove them. So instruction sets tend to keep getting larger and larger, keeping all of the older opcodes around for backwards compatibility.

For instance, the Nintendo Wii, released in 2006, was backwards compatible with the Nintendo GameCube, released in 2001. This meant that Nintendo had to keep the GameCube’s instructions around in order for GameCube games to work on the Wii console.

A modern computer processor has thousands of different instructions, which utilize all of this complex circuitry. But all of this leads to another problem: getting data in and out of the CPU.

SECTION TWO

Introduction:

Computers. Smartphones. TVs. Gaming consoles. These are all different types of technology, and we previously examined how they were built and how the hardware functions. But what about the software? When I say software, I specifically mean the programming behind the computer. Computer programming is the method of creating and building an executable computer program to perform a specific computing result. You’re, in a way, “talking” to the computer in a language that it understands. In this section of the series, we WILL be going over programming languages, software, and many other things. And yes, we are going to be learning to program. Enjoy!

Part One: Early Programming

Programming is one of the most popular terms when computer science comes to mind. You might think that the idea of programming is relatively new, but the need for it goes way back to a time when computers as we know them didn’t even exist yet!

The need to program machines existed way before the development of computers. The most famous example of this was in textile manufacturing. Let’s say you wanted to weave a big red tablecloth. You would simply feed red thread into a loom (whatever those are) and let it run. But what if you wanted to make more intricate patterns, like stripes or plaid? The workers would have to periodically reconfigure the loom by hand as indicated by the pattern.

But this method was labor-intensive, which made patterned fabrics expensive. In response to this, Joseph Marie Jacquard created a programmable textile loom, which he first demonstrated in 1801.

Jacquard Loom

The pattern for each row of the cloth was determined by a punch card. The presence or absence of a hole in the card determined whether a specific thread was held high or low in the loom, so that the cross thread, called the weft, passed above or below it. To vary the pattern across rows, the punch cards were arranged in long chains. This formed a sequence of instructions for the loom.

Many consider Jacquard’s loom to be one of the earliest forms of programming. Punched cards turned out to be a reliable and cheap alternative to the previous labor-intensive method used before. Almost a century later, the punch card method was used to help collect data for the 1890 census. When the punch card was inserted into a tabulating machine, a hole would cause the running total for that specific answer to be increased by one.

It’s important to note that early tabulating machines were not truly computers, since they could only do one thing: tabulate. Over the next 60 years, these business machines grew in capability, eventually adding more and more features, until we arrived at where we are today.

In this day and age, we have created, and are using, far more efficient methods to get work done. In the next part, we’re going to take a look at various programming languages, and soon enough, we’re going to be learning them!

Part Two: The First Programming Languages

So far in the series, we’ve mainly talked about hardware — the physical components in a computer. However, in today’s part, we’re going to be talking about the opposite of hardware: software.

For much of this series, we’ve been talking about machine code: the 1s and 0s, also known as binary code, that our computers read to perform operations. But giving our computers instructions in 1s and 0s is incredibly inefficient, so a “higher-level” language was needed. This led to the development of assembly code and assemblers, which allow us to use mnemonics and operands to write programs more easily.

First, before we get deeper into this, let’s go through this quickly. As we’ve seen, computer hardware can only handle raw, binary instructions. This is the “language” computer processors natively speak. That’s the only language they are practically able to speak. This “language” is called Machine Language or Machine Code.

In the early days of computing, people had to write entire programs in machine code. More specifically, they’d first write a high-level version of a program on paper.

An informal, high-level description of a program like this is called Pseudo-Code. When the program was all figured out on paper, they’d painstakingly expand and translate it into binary machine code by using things like an opcode table.

After the translation was complete, the program could be fed into the computer and run.

By the late 1940s and 1950s, programmers had developed slightly higher-level languages that were a lot more human-readable. Opcodes were given simple names, called mnemonics, which were followed by operands to form instructions. So instead of having to write instructions as a bunch of 1s and 0s, programmers could write something like “LOAD_A 14”.

Of course, a CPU has no idea what “LOAD_A 14” even is. Computers don’t understand text-based language, only binary. Because of this, programmers came up with a clever trick: they created reusable helper programs, written in binary, that could read text-based instructions and assemble them into the corresponding binary instructions automatically. This kind of program is called an Assembler. It reads a program written in an Assembly Language and converts it to native machine code. “LOAD_A 14” is an example of an assembly instruction.
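Just to illustrate the idea, here is a tiny Python sketch of an assembler (the opcode numbers other than LOAD_A’s 0010, which matches the 00101110 example from Section One, are invented for this example, not a real instruction set) that turns a mnemonic like “LOAD_A 14” into an 8-bit machine instruction:

```python
# A toy assembler: 4 bits of opcode followed by 4 bits of address/operand.
OPCODES = {"LOAD_A": 0b0010, "LOAD_B": 0b0001, "ADD": 0b1000, "HALT": 0b0000}

def assemble(line: str) -> str:
    parts = line.split()
    opcode = OPCODES[parts[0]]
    address = int(parts[1]) if len(parts) > 1 else 0
    return format((opcode << 4) | address, "08b")   # pack opcode and operand into one byte

print(assemble("LOAD_A 14"))   # 00101110 -> the LOAD_A instruction we saw back in Section One
```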

Part Three: Programming Basics: Statements & Functions

This part is practically going to be our segue into the entire world of programming. We’re going to learn about programming in this part, and finally start doing it soon. Trust me, I’m pumped for this entire thing.

But first, a quick brief about programming. Computer programming is the process of designing and building an executable computer program to accomplish a specific computing result. Programming involves tasks such as analysis, generating algorithms, profiling algorithms’ accuracy and resource consumption, and the implementation of algorithms in a chosen programming language. (Programming is most commonly referred to as coding).

Tasks accompanying and related to programming include testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of code.

In the last episode, we discussed how writing programs in native machine code and having to contend with so many low-level details was a huge impediment to writing complex programs. To abstract away many of these low-level details, programming languages were developed that let programmers concentrate on solving a problem with computation and less on nitty-gritty hardware details.

Today, we’re going to continue that idea we talked about, and introduce some cool new ideas that go with what we talked about now.

Just like spoken languages, programming languages have statements. These are individual complete thoughts. The set of rules that governs the structure and composition of statements in a language is called syntax. The English language has syntax, and so do all programming languages.

“A equals 5” is a programming language statement. In this case, the statement says a variable named A has the number 5 stored in it. This is called an assignment statement because we’re assigning a value to a variable. To express more complex things, we need a series of statements, like “A is 5, B is 10, C equals A plus B”. This program tells the computer to set variable ‘A’ equal to 5, variable ‘B’ to 10, and finally, to add ‘A’ and ‘B’ together and put that result, which is 15, into — you guessed it — variable ‘C’.
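That exact little program looks almost the same in a real language. Here is what it would be in Python, for example:

```python
# The same three statements, written with Python's syntax.
A = 5
B = 10
C = A + B
print(C)   # 15
```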

A program, which is a list of instructions, is a bit like a recipe: boil water, add noodles, wait ten minutes, enjoy. In the same way, the program starts at the first statement and runs down one at a time until it hits the end.

Part Four: Intro to Algorithms

In this part, we are going to be discussing algorithms! Algorithms can be really cool, and once you understand them, they get even cooler.

In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. Algorithms are always unambiguous and are used as specifications for performing calculations, data processing, and other tasks.

As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

One of the most storied algorithmic problems in all of computer science is sorting, as in sorting names or numbers. Computers sort all the time. Looking for the cheapest airfare, arranging our email by most recently sent, or scrolling your contacts by last name — those all require sorting.

You might think “sorting isn’t so tough… how many algorithms can there possibly be?”. The answer is a lot! Computer Scientists have spent decades inventing algorithms for sorting, with cool names like Bubble Sort and Spaghetti Sort.
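Bubble sort is a nice first example because it’s so short. Here is a plain Python version (a standard textbook-style implementation, not anything specific to this article):

```python
def bubble_sort(items: list) -> list:
    """Repeatedly swap neighbouring out-of-order pairs until the list is sorted."""
    items = list(items)                       # sort a copy, leave the original alone
    for i in range(len(items) - 1, 0, -1):    # each pass "bubbles" the largest item to the end
        for j in range(i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([42, 7, 19, 3, 25]))   # [3, 7, 19, 25, 42]
```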

Part Five: The Man Himself, Alan Turing

Hey everyone! Today we’re doing something a little different. We are going to take a step back from the world of programming and software and discuss the person who formulated many of the theoretical concepts that underlie modern computation — the father of computer science himself: Alan Turing.

Alan Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing Machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence.

Now that you know who he is, let’s go more in-depth about his contributions to our world.

Alan Mathison Turing was born in 1912 and showed an incredible aptitude for math and science throughout his childhood. A few decades later, during the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. From September 1938, Turing worked part-time with the Government Code and Cypher School, the British codebreaking organization. He concentrated on cryptanalysis of the Enigma cipher machine used by Nazi Germany, together with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 Warsaw meeting at which the Polish Cipher Bureau gave the British and French details of the wiring of Enigma machine’s rotors and their method of decrypting Enigma machine’s messages, Turing and Knox developed a broader solution.

By using statistical techniques to optimize the trial of different possibilities in the code-breaking process, Turing made an innovative contribution to the subject. He wrote two papers discussing mathematical approaches, titled The Applications of Probability to Cryptography and Paper on Statistics of Repetitions, which were of such value to GC&CS and its successor GCHQ that they were not released to the UK National Archives until April 2012, shortly before the centenary of his birth. A GCHQ mathematician, “who identified himself only as Richard,” said at the time that the fact that the contents had been restricted for some 70 years demonstrated their importance and their relevance to post-war cryptanalysis. Turing had a reputation for eccentricity at Bletchley Park.

Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine called the bombe, which could break Enigma more effectively than the Polish bomba kryptologiczna, from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages. (A complete and working replica of a bombe is now at The National Museum of Computing at Bletchley Park.) The bombe searched for possible correct settings used for an Enigma message (i.e., rotor order, rotor settings, and plugboard settings) using a suitable crib: a fragment of probable plaintext. For each possible setting of the rotors (which had on the order of 10¹⁹ states, or 10²² states for the four-rotor U-boat variant), the bombe performed a chain of logical deductions based on the crib, implemented electromechanically.

The bombe detected when a contradiction had occurred and ruled out that setting, moving on to the next. Most of the possible settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. A contradiction would occur when an enciphered letter would be turned back into the same plaintext letter, which was impossible with the Enigma. The first bombe was installed on 18 March 1940. By late 1941, Turing and his fellow cryptanalysts Gordon Welchman, Hugh Alexander, and Stuart Milner-Barry were frustrated. Building on the work of the Poles, they had set up a good working system for decrypting Enigma signals, but their limited staff and bombes meant they could not translate all the signals. In the summer, they had considerable success, and shipping losses had fallen to under 100,000 tons a month; however, they badly needed more resources to keep abreast of German adjustments. They had tried to get more people and fund more bombes through the proper channels but had failed. On 28 October they wrote directly to Winston Churchill explaining their difficulties, with Turing as the first-named.

They emphasized how small their need was compared with the vast expenditure of men and money by the forces and compared with the level of assistance they could offer to the forces. As Andrew Hodges, biographer of Turing, later wrote, “This letter had an electric effect.” Churchill wrote a memo to General Ismay, which read: “ACTION THIS DAY. Make sure they have all they want on extreme priority and report to me that this has been done.” On 18 November, the chief of the secret service reported that every possible measure was being taken. The cryptographers at Bletchley Park did not know of the Prime Minister’s response, but as Milner-Barry recalled, “All that we did notice was that almost from that day the rough ways began miraculously to be made smooth.” More than two hundred bombes were in operation by the end of the war.

Part Six: Recap

Welcome to the last part of this section of the series. For this part, I decided to just have a recap of everything we learned in this section. Ready? Let’s start!

Part One: Programming is one of the most popular terms when computer science comes to mind. You might think that the idea of programming is relatively new, but the need for it goes way back to a time when computers as we know them didn’t even exist yet! The need to program machines existed way before the development of computers. The most famous example of this was in textile manufacturing. Let’s say you wanted to weave a big red tablecloth. You would simply feed red thread into a loom (whatever those are) and let it run. But what if you wanted to make more intricate patterns, like stripes or plaid? The workers would have to periodically reconfigure the loom by hand as indicated by the pattern.

Part Two: In the early days of computing, people had to write entire programs in machine code. More specifically, they’d first write a high-level version of a program on paper. An informal, high-level description of a program like this is called Pseudo-Code. When the program was all figured out on paper, they’d painstakingly expand and translate it into binary machine code by using things like an opcode table. After the translation was complete, the program could be fed into the computer and run. By the late 1940s and 1950s, programmers had developed slightly higher-level languages that were a lot more human-readable. Opcodes were given simple names, called mnemonics, which were followed by operands to form instructions. So instead of having to write instructions as a bunch of 1s and 0s, programmers could write something like “LOAD_A 14”.

Part Three: Computer programming is the process of designing and building an executable computer program to accomplish a specific computing result. Programming involves tasks such as analysis, generating algorithms, profiling algorithms’ accuracy and resource consumption, and the implementation of algorithms in a chosen programming language. (Programming is most commonly referred to as coding). Tasks accompanying and related to programming include testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of code.

Part Four: In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. Algorithms are always unambiguous and are used as specifications for performing calculations, data processing, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

Part Five: Alan Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing Machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Alan was born in 1912 and showed an incredible aptitude for math and science throughout his childhood. A few decades later, during the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. From September 1938, Turing worked part-time with the Government Code and Cypher School, the British codebreaking organization. He concentrated on cryptanalysis of the Enigma cipher machine used by Nazi Germany, together with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 Warsaw meeting at which the Polish Cipher Bureau gave the British and French details of the wiring of Enigma machine’s rotors and their method of decrypting Enigma machine’s messages, Turing and Knox developed a broader solution.

SECTION THREE

Introduction:

In the past sections, we talked about how computers work, but now it’s time for the rise of personal computers. Personal computers are some of the most-bought electronic devices in the world, and it’s pretty easy to see why. At this point in time, you basically need a computer for everything. You need it for work, school, and obviously for reading this… maybe? Anyway, PCs are really important, and so are the graphics on the PC. The graphics are vital to many people who play video games on their PC. In this section, we’ll be talking about all of that stuff. Enjoy!

Part One: The Cold War and Consumerism

PC Gaming

The rise of the PC era started against the backdrop of the Cold War, and as you may know, the world was pretty tense. Almost immediately after World War II concluded in 1945, tension grew between the world’s two new superpowers: the U.S. and the USSR. This was mainly due to the U.S. and much of the world opposing the USSR’s communist economic and political system. The Cold War had begun, and with it came massive government spending on science and engineering.

The massive spending fueled rapid advances that simply weren’t possible in the commercial sector alone, where projects were generally expected to recoup development costs through sales.

Computing was unlike the machines of the past, which generally amplified human physical abilities rather than mental ones. At that point in time, computing had really just begun to shape the way America functions. Now let’s get in-depth about PCs.

A personal computer (PC) is a multi-purpose computer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end-user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not shared through time-sharing by many people at the same time.

The personal computer was made possible by major advances in semiconductor technology. In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor, and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs. The MOS integrated circuit was commercialized by RCA in 1964, and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968.

Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own.

Since the early 1990s, Microsoft operating systems and Intel hardware dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft’s Windows operating systems occupy a minority share of the industry. These include Apple’s macOS and free and open-source Unix-like operating systems, such as Linux.

PCs have definitely come a long way. For example, if you search up PC on Google, you get bombarded with PC games and ads.

PC gaming is… exactly what it sounds like! Gaming on PC. PCs have come a long way and can easily play huge games that can be nearly impossible to play on dedicated gaming consoles. For example, Microsoft Flight Simulator (2020) needs 150 GB of storage to be playable on PC! That’s insane.

Gameplay footage of Microsoft Flight Simulator

Part Two: The Personal Computer Revolution

Dell XPS 13 Laptop

In the previous part, we discussed the history behind the PC revolution. In this part, we’re going to be discussing what made the PC so appealing to the general public and why it was such a massive success in America.

The history of the personal computer as a mass-market consumer electronic device effectively began in 1977 with the introduction of microcomputers, although some mainframe computers and minicomputers had been applied as single-user systems much earlier. A personal computer is one intended for interactive individual use, as opposed to a mainframe computer, where the end user’s requests are filtered through operating staff, or a time-sharing system, in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers, generally called microcomputers, were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians.

Computer terminals were used for time-sharing access to central computers. Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large corporations, universities, government agencies, and similar-sized institutions. End-users generally did not directly interact with the machine but instead would prepare tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the job had completed, users could collect the results. In some cases, it could take hours or days between submitting a job to the computing center and receiving the output.

A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple computer terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering. A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor.

In places such as Carnegie Mellon University and MIT, students with access to some of the first computers experimented with applications that would today be typical of a personal computer; for example, computer-aided drafting was foreshadowed by T-square, a program written in 1961, and an ancestor of today’s computer games was found in Spacewar! in 1962.

Some of the first computers that might be called “personal” were early minicomputers such as the LINC and PDP-8, and later on VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. By today’s standards, they were very large (about the size of a refrigerator) and cost prohibitive (typically tens of thousands of US dollars).

However, they were much smaller, less expensive, and generally simpler to operate than many of the mainframe computers of the time. Therefore, they were accessible to individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.

In addition, minicomputers were relatively interactive and soon had their own operating systems. The minicomputer Xerox Alto was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.

In 1983 Apple Computer introduced the first mass-marketed microcomputer with a graphical user interface, the Lisa. The Lisa ran on a Motorola 68000 microprocessor and came equipped with 1 megabyte of RAM, a 12-inch (300 mm) black-and-white monitor, dual 5¼-inch floppy disk drives and a 5-megabyte Profile hard drive. The Lisa’s slow operating speed and high price (US$10,000), however, led to its commercial failure. Drawing upon its experience with the Lisa, Apple launched the Macintosh in 1984, with an advertisement during the Super Bowl.

The Macintosh was the first successful mass-market mouse-driven computer with a graphical user interface, or ‘WIMP’ (Windows, Icons, Menus, and Pointers). Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa’s features at a price of US$2,495. The Macintosh was introduced with 128 KB of RAM, and later that year a 512 KB model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5" floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing.

While not a success upon its release, the Macintosh was a successful personal computer for years to come. This is particularly due to the introduction of desktop publishing in 1985 through Apple’s partnership with Adobe. This partnership introduced the LaserWriter printer and Aldus PageMaker (now Adobe PageMaker) to users of the personal computer. During Steve Jobs’ hiatus from Apple, a number of different models of Macintosh, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The entire Macintosh line of computers was IBM’s major competition up until the early 1990s.

That was 37 years ago… and PCs have evolved so much more now!

Part Three: Graphical User Interfaces

My MacBook Air (2017) Interface

In today’s part, we’re talking about G.U.I.s! You know what, we’re just calling it GUI from now on. GUI stands for Graphical User Interface.

A graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones, and smaller household, office, and industrial controls. The term GUI tends not to be applied to other lower-display resolution types of interfaces, such as video games (where the head-up display is preferred), or not including flat screens, like volumetric displays because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human-computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey).

Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent from and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will and eases the designer’s work to change the interface as user needs evolve.
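To make the model–view–controller idea a bit more concrete, here is a deliberately tiny sketch in Python. The class names are invented for illustration: the model holds the data, the view only renders it, and the controller connects user actions to both, so the view could be swapped out (a different “skin”) without touching the model.

```python
# Minimal model-view-controller sketch (all names invented for illustration).

class CounterModel:
    """Holds the application data; knows nothing about how it is displayed."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1

class TextView:
    """Renders the model as plain text; could be replaced by a GUI widget."""
    def render(self, model):
        print(f"Count is now {model.count}")

class CounterController:
    """Translates user actions into model updates, then refreshes the view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def on_button_click(self):
        self.model.increment()
        self.view.render(self.model)

controller = CounterController(CounterModel(), TextView())
controller.on_button_click()  # Count is now 1
controller.on_button_click()  # Count is now 2
```

Because the model never talks to the view directly, the interface can evolve independently of the application logic, which is exactly the flexibility described above.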

Good user interface design relates to users more, and to system architecture less. Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones usually act as a user-input tool. A GUI may be designed for the requirements of a vertical market as application-specific graphical user interfaces. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticketing and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).

Since the 1990s, cell phones and handheld game systems have also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.

From the 2000s to now, GUIs have advanced a whole lot more, to the point where we arguably rely on them for nearly everything we do on our devices, navigation included.

Part Four: 3D Graphics

Welcome to the last part of this section. Today we’re talking about 3D graphics.

3D computer graphics, or three-dimensional computer graphics (in contrast to 2D computer graphics), are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. The resulting images may be stored for viewing later or displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and, similarly, 3D may use some 2D rendering techniques. The objects in 3D computer graphics are often referred to as 3D models.

Unlike the rendered image, a model’s data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.

William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke. 3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects, written by Kazumasa Miyazawa and released in June 1978 for the Apple II.

3D computer graphics creation falls into three basic phases:

  1. 3D modeling — the process of forming a computer model of an object’s shape
  2. Layout and animation — the placement and movement of objects within a scene
  3. 3D rendering — the computer calculations that, based on light placement, surface types, and other qualities, generate the image

Modeling:

Modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define the shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
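As a rough sketch of what “vertices forming polygons” looks like as data, here is a hypothetical Python representation of a small mesh. The layout (a vertex list plus faces that index into it) is a common convention, but the exact structure here is invented for illustration; real modeling tools use much richer formats.

```python
# A tiny polygon-mesh sketch: vertices are 3D points, faces are tuples of
# vertex indices. A 3-index face is a triangle, a 4-index face is a quad.
vertices = [
    (0.0, 0.0, 0.0),  # vertex 0
    (1.0, 0.0, 0.0),  # vertex 1
    (1.0, 1.0, 0.0),  # vertex 2
    (0.0, 1.0, 0.0),  # vertex 3
]

faces = [
    (0, 1, 2),     # a triangle (3-gon)
    (0, 2, 3),     # a second triangle sharing two vertices with the first
]

for face in faces:
    corners = [vertices[i] for i in face]
    print(f"{len(face)}-gon with corners {corners}")
```

Sharing vertices between faces is what keeps a mesh watertight and well-structured, which is part of the “integrity” mentioned above.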

Layout & Animation:

Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time). Popular methods include keyframing, inverse kinematics, and motion capture, and these techniques are often used in combination. As with animation, physical simulation also specifies motion.
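Here is a minimal sketch of keyframing, assuming simple linear interpolation between two keyframes of an object’s position. The frame numbers and positions are arbitrary values chosen for illustration.

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b, with t running from 0 to 1."""
    return a + (b - a) * t

# Two keyframes for an object's (x, y, z) position: at frame 0 and frame 24.
key_start = (0.0, 0.0, 0.0)   # position at frame 0
key_end   = (10.0, 5.0, 0.0)  # position at frame 24

for frame in range(0, 25, 6):
    t = frame / 24
    position = tuple(lerp(s, e, t) for s, e in zip(key_start, key_end))
    print(f"frame {frame:2d}: {position}")
```

Real animation systems interpolate with smoother curves (and handle rotation, deformation, and more), but the idea of filling in the frames between keyframes is the same.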

Rendering:

Rendering converts a model into an image either by simulating light transport to get photo-realistic images or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender), exclusive 3D rendering software also exists.
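To illustrate just the 3D-projection step mentioned above, here is a bare-bones perspective projection of a few 3D points onto a 2D image plane, assuming a pinhole camera at the origin looking down the +z axis with a focal length parameter. This is a heavy simplification of what a real renderer or graphics API does.

```python
def project(point, focal_length=1.0):
    """Perspective-project a 3D point (x, y, z) onto a 2D image plane.

    Assumes a pinhole camera at the origin looking down the +z axis;
    points farther from the camera (larger z) land closer to the center.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# The same x/y offset projects to a smaller screen position as z grows.
for p in [(1.0, 1.0, 2.0), (1.0, 1.0, 4.0), (1.0, 1.0, 8.0)]:
    print(p, "->", project(p))
```

Everything else in rendering, such as light transport and scattering, happens on top of this basic 3D-to-2D mapping.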

Man, 52 Mins goes fast…
