Information

Robert Mundinger · Published in CodeParticles · Oct 9, 2017 · 6 min read

Everything in a computer is made up of 1s and 0s. A picture of your beautiful baby is just a bunch of 1s and 0s. Your child is just a number! The same is true of any picture, video or text. How can that be?

Your baby

Text

There are many different ways to represent the letters of the alphabet: written characters, Braille, sign language, Morse code and semaphore flags, all representing the same concept.

Computer scientists came up with a way to do this with 0s and 1s by creating a system called ASCII (American Standard Code for Information Interchange). Each letter (and many symbols, including punctuation) was given a 7-bit binary representation.

So when you press ‘h’ on your keyboard, you’re really typing the number 104, stored in a byte as the binary code 01101000.
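If you want to see this for yourself, here’s a minimal sketch in Python (any language with a character-to-number function would do) that prints each letter’s ASCII code and its binary pattern:

```python
# Turn each character of a string into its ASCII code and binary pattern.
text = "hi"

for ch in text:
    code = ord(ch)              # e.g. 'h' -> 104
    bits = format(code, "08b")  # the same number written as 8 binary digits
    print(ch, code, bits)       # prints: h 104 01101000, then i 105 01101001
```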

Numbers

Binary uses the base-2 numeral system — it only uses 2 symbols (0 and 1). Our alphabet, on the other hand, uses 26 symbols. Written Chinese has thousands of characters in common use.

The number system that we use is called decimal, and it uses the base-10 numeral system. We use combinations of 10 symbols (0,1,2,3,4,5,6,7,8,9) to represent all numbers — likely because we have 10 fingers.

This is confusing because we’re used to the word ‘ten’ and the symbol 10 standing for exactly this many things:

TEN ducks

But 10 is not some magical symbol reserved for that many things. 10 is just a way of starting over. Once you run out of symbols in base-10 (after 9), you start over again, combining two symbols to keep counting upward. It’s the same in any other base, such as base-6. (The Sumerians counted in base-60, which is why we still have 60 minutes in an hour and 60 seconds in a minute.) Here’s how you count in base 6:

0, 1, 2, 3, 4, 5, 10, 11, 12

You run out of symbols and start over. It’s much like columns in Excel: once we reach Z we’ve run out of letters, so we go to AA, AB, AC. Same concept. So this is how many things 10 represents in base 6:
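A quick way to convince yourself of this is a small Python sketch that interprets the digits “10” in a few different bases:

```python
# Interpret the digit string "10" in different bases.
print(int("10", 6))   # 6  -> in base 6, "10" means six
print(int("10", 10))  # 10 -> in base 10, "10" means ten
print(int("10", 2))   # 2  -> in base 2 (binary), "10" means two
```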

Now, we count in binary:

0,1,10 — In binary, 10 is this many:

If we keep counting in binary — 0, 1, 10, 11, 100, 101, 110, 111, 1000 — we run out of symbols very quickly, so we have to start over a lot. What we normally think of as 8 is 1000 in binary.
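Here’s a short Python sketch that counts from 0 to 8 and prints the binary form of each number, so you can watch the digits roll over:

```python
# Count from 0 to 8, printing the decimal and binary form of each number.
for n in range(9):
    print(n, format(n, "b"))  # ends with: 7 111, then 8 1000
```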

Confusing, I know. But the ducks helped, right?

Pictures

Remember pointillism? It’s a painting technique developed by the Neo-Impressionist Georges Seurat, using small dots of color that, when seen from a distance, trick the human eye into seeing unified forms.

Similarly, a picture you see on your computer screen is nothing more than a bunch of dots arranged in a grid.

An HDTV is better than a standard TV because it shows more dots per square inch, which is why the picture is so much clearer. Similarly, a more expensive digital camera captures more dots than a cheap one.

Each dot in the grid is a color. On a screen, every color is a mixture of red, green and blue light; purple, for example, is roughly an equal mixture of red and blue. That means every color can be represented by numbers. The more numbers you use to represent the colors on a screen, the richer the image.

Numbers are used to indicate what color the computer should show. For example, if you see “rgb(0,0,255)”, that means blue. On most screens, each of the three channels ranges from 0 to 255, so a dot can be any one of 256 × 256 × 256 = 16,777,216 different color combinations.
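To make this concrete, here’s a minimal Python sketch (the color names are mine) that treats a tiny picture as a grid of red/green/blue numbers and prints the bits behind each pixel:

```python
# A tiny 2x2 "image": a grid of (red, green, blue) values, each 0-255.
BLUE   = (0, 0, 255)
RED    = (255, 0, 0)
PURPLE = (128, 0, 128)   # roughly equal parts red and blue
WHITE  = (255, 255, 255)

image = [
    [BLUE, RED],
    [PURPLE, WHITE],
]

# Every pixel is just three numbers, and every number is just bits.
for row in image:
    for r, g, b in row:
        print(format(r, "08b"), format(g, "08b"), format(b, "08b"))
```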

Songs

What about audio?

Audio is just a sound wave; this means it can be mathematically represented through numbers and visualized on a graph or chart. Numbers record how far above or below zero (the horizontal axis) the sound wave is at predetermined time intervals. If you sample a song this way, you can visualize the information and store the data as binary 0s and 1s.
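Here’s a minimal Python sketch (the 440 Hz tone and the sample rate are just example values) of sampling a pure tone at regular time intervals, which is exactly what digital audio recording does:

```python
import math

SAMPLE_RATE = 8000   # samples per second (CD audio uses 44,100)
FREQUENCY = 440.0    # an A note, in hertz

# The first 10 samples of the sine wave: each sample is just a number,
# and each number can be stored as 0s and 1s.
samples = [
    math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
    for n in range(10)
]
print(samples)
```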

The reason Neil Young hates MP3s is that he doesn’t think they contain enough samples per millisecond, making the quality worse than an analog recording.

The reason a horrible pop singer (take your pick) can sound good on a recording is that there is now software that can correct their pitch.

Video

Video, movies, TV — these are nothing more than a series of images quickly presented one after the other. If the transitions are fast enough, the human eye cannot perceive the shifts, like in a flipbook:

Humans perceive the world at roughly 40 frames per second. A standard movie has 24 frames per second; in other words, the motion is created by 24 still images flashing by every second. TV has 30 frames per second (for a movie to be shown on TV, the rate has to be adjusted). And for some reason, Peter Jackson shot ‘The Hobbit’ at 48 frames per second. Opinion was divided on how successful this was. One article from Gizmodo described the film as “an unexpected masterclass in why 48 fps fails”. Peter Jackson, however, argues that viewing the movie at 48 fps makes it easier for the human brain to see, comprehend and accept 3D footage.
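As a back-of-the-envelope check (a tiny Python sketch; the two-hour running time is just an example), here’s how many still images a movie really is at different frame rates:

```python
# How many still images make up a two-hour movie at different frame rates?
running_time_seconds = 2 * 60 * 60  # a two-hour film

for fps in (24, 30, 48):
    print(fps, "fps ->", fps * running_time_seconds, "frames")
# e.g. 24 fps -> 172800 frames, 48 fps -> 345600 frames
```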

Code

Computer programs are also represented in binary. A computer stores two kinds of information: programs and data.

To represent a computer program, there has to be a table of commands that can be written in binary. That’s what an ‘opcode’ is — a binary representation of a single command to the computer. Computer chips come with a fixed set of instructions (known as an instruction set — we will get into this elsewhere). For example, ‘add’ is one such instruction. If a computer sees the opcode for ‘add’ (say, 0001), it will add the two pieces of data that follow it.
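Here’s a toy sketch in Python (the opcodes and the three-part instruction format are invented for illustration, not any real chip’s instruction set) of how a machine might decode and execute binary instructions:

```python
# A made-up machine with two opcodes: 0001 means add, 0010 means multiply.
OPCODES = {
    0b0001: lambda a, b: a + b,
    0b0010: lambda a, b: a * b,
}

# Each instruction is an opcode followed by two pieces of data.
program = [
    (0b0001, 2, 3),  # add 2 and 3
    (0b0010, 4, 5),  # multiply 4 and 5
]

for opcode, a, b in program:
    print(format(opcode, "04b"), a, b, "->", OPCODES[opcode](a, b))
```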

And that’s it — it’s that simple (and that complex). The idea of storing computer programs within the same binary system that held the data was one of the most revolutionary concepts in computing. Computer programs are just ways of manipulating and changing this information.

And by the way…a computer does not have to use binary. It just happens to be the most efficient, fast and economical way to implement the principles of computing at the moment. Who knows if the future will hold something different?
