Coding. Computer code is extremely important. Pretty much every electronic device you use relies on code to work. It’s the backbone behind how websites, apps, and video games are created. Given how important code is, it’s a shock that most people don’t know how to write it. Granted, not everyone has the same access to computers as they do to books, but more people are learning to code as the internet and schools’ computer science programs make it easier to learn.
Even so, some estimates say that only about 0.5% of the world’s population knows some form of programming. And many people who do code don’t necessarily understand how it works from the bottom up. That’s not ignorance; computers are just genuinely complicated.
Think about it. Talking to the computer? What does that even mean, and how do you even do it?
The First Computer
In the 19th century, Charles Babbage proposed a machine called the Difference Engine, a mechanical device that could perform mathematical calculations, representing the digits 0–9 as positions on toothed wheels.
However, many consider a different Babbage creation to be the first computer — The Analytical Engine. It consisted of four parts:
- The reader, which took in data in the form of punched cards. Pins could pass only through the holes, so different combinations of holes encoded different numbers and operations.
- The mill, which performed calculations on the data from the cards.
- The store, which kept the data before processing.
- The printer, the output device.
What does this have to do with code? This is where it all began. Computers still follow this same design, just in different forms, like our laptops. The reader is the input device, such as a keyboard. The mill is the CPU, the brain of the computer, which processes the data. The store is the memory, keeping our data in one place. The printer is the output, which appears on our screens when we type or press a button.
You might’ve heard of binary. Binary code is simply data represented by sequences of 0’s and 1’s. This is basically Babbage’s system of punched and unpunched holes, because there are two possible base values. Computers, however, are electronic devices built mostly from circuits, so modern machines simply pass a low-voltage current for 0 and a high-voltage current for 1. Anyway…
These are called binary digits, or bits. Eight bits make a byte, about a thousand bytes make a kilobyte, and about a million bytes make a megabyte. Computers operate in binary, meaning that all the calculations, actions, and processes they perform are done through just those binary digits.
A binary string of eight bits can represent any of 256 possible values and can, therefore, represent a wide variety of different items.
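To make that concrete, here is a minimal sketch in Python (an assumed choice of language; the article doesn’t tie itself to one) of why eight bits give 256 values, and how a bit string can be read back as a number:

```python
# Each of the 8 bit positions can be 0 or 1, so the number of
# distinct 8-bit patterns is 2 multiplied by itself 8 times.
values = 2 ** 8  # 256 possible patterns

# int() with base 2 interprets a string of bits as a number.
n = int("01000001", 2)  # the pattern 01000001 is the number 65
```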
For example, “BINARY” would be encoded as “01000010 01001001 01001110 01000001 01010010 01011001”. A string of bits can also be read as a number: the binary of ‘a’, 01100001, translates to 97. These strings of bits can be displayed in code tables in different notations (octal, decimal, and hexadecimal), depending on how they need to be interpreted.
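This character-to-bits mapping is easy to check in code. A small Python sketch (again, Python is just one assumed choice of language) that encodes a string the same way:

```python
def to_bits(text):
    # ord() gives each character's numeric code (e.g. 'B' is 66), and
    # format(..., "08b") writes that number as 8 binary digits.
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_bits("BINARY"))  # 01000010 01001001 01001110 01000001 01010010 01011001
print(to_bits("a"))       # 01100001, i.e. the number 97
```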
This binary code is then processed and interpreted by the output, which could be a screen, a speaker, or any other device attached to the computer.
Ah, now we get to the code that most people work with today. Obviously, when people started building and programming computers, doing it through the tedious task of typing 1’s and 0’s would’ve been a pain. So, programming languages came into play.
These languages allow us humans to code more easily, in that they are structured closer to how we communicate, with words such as object, class, run, etc. Like so:
def add(num1, num2):
    return num1 + num2
From this program, we can see that we insert two numbers into the “add” function, and the function “returns” their sum. Programming languages were an innovation that let us talk to our computers this easily, without the need to learn binary. After we write our code, the computer translates it into binary so it can read and execute it.
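As a runnable sketch of the idea (in Python, one assumed choice of language), here is the function together with a call:

```python
def add(num1, num2):
    # This readable line is what the computer translates into binary
    # instructions before executing it.
    return num1 + num2

print(add(2, 3))  # 5
```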
While translation took longer back when computers were new, in the mid-1900’s, it now happens in milliseconds, which is why pretty much everyone who codes today uses a programming language.
It all ties back to the first computers. Whether it was punched holes or a bunch of 0’s and 1’s, talking to our computers was and always will be a marvel. We went from huge machines performing small calculations to some of the most powerful objects on the planet sitting in our pockets. It’s funny, though: we’ve come so far with computers, yet at their core, nothing has changed. We’ve just found new ways to use them. Lots of them.
While my explanation of how computers work may have stretched the word “simple”, I do hope that you, the reader, have gained a better understanding of the 0’s and 1’s that run our world. It feels weird that so many aspects of life are dictated purely by these digits.
However, it’s also kind of comforting. Now you know that the tech behind medicine, games, and finance all runs on just 0’s and 1’s. Everything feels a little less complicated, doesn’t it? With everything happening in our world, from politics to the environment, we could all use a little less stress. Well, I hope this article took at least one small weight off your shoulders.
Now, when you exit this tab, try looking at the world in a new light, from the Analytical Engine to the wide range of programming languages.
Want to learn more about computers and their history? Check out some other informative articles:
- AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What's the Difference?
- History of Computers: A Brief Timeline