Explaining Binary

Sololearn · Jan 17, 2022 · 5 min read
It’s amazing to think today, in a world of smartphones, cloud services, and augmented reality, that the modern Internet we rely on to run the world all started with simple strings of 1’s and 0’s. But binary, as this basic system is known, is the foundation from which all of modern technology eventually evolved. From powering World War 2 code-breaking machines like Colossus, to allowing the first room-sized computers to perform the calculations needed to send men to the moon, the accomplishments built on a system as simple as binary are truly remarkable.

While many modern coding classes focus specifically on in-demand languages such as Python or Go, knowing the basics of binary and how it functions is almost like learning your letters and numbers before learning how to write an essay or solve a complex math problem. This is because “under the hood” of the fancy modern hardware and programming languages used to build software and applications, the actual work of taking a picture with a digital camera or retrieving information from a database still comes down to endless streams of binary digits (bits), such as 01100100100010110101101010 (and so on).
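
To make that concrete, here is a tiny Python sketch (purely illustrative; real cameras and databases use far more elaborate encodings) that shows how even ordinary text is stored as a stream of bits:

```python
# Every character is stored as a number, and every number as bits.
# ord() gives the character's numeric code; format(..., "08b") shows it as 8 bits.
text = "Hi"
bits = "".join(format(ord(ch), "08b") for ch in text)
print(bits)  # 0100100001101001 -- 'H' (72) then 'i' (105), 8 bits each
```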

So before you finish up your current coding classes and begin designing the world’s next big social media application or new innovation in machine learning, it’s worth pausing and taking some time to ensure you understand what binary is and how it works. Let’s take a brief spin around the basics of binary, and how it underpins the more complicated functionality that most people think of when they think of the modern Internet and technology.

How Did Binary Originate?

Back in the 1940s and 1950s, responding to the technological and intellectual challenges of World War 2 and the Space Race, early computer technicians all worked at the bit level. If a computer made a mistake, and the technician evaluated the system and determined it wasn’t the result of a burned-out vacuum tube, they would likely have simply replaced a 0 with a 1 somewhere and tried again.

In everyday life, most humans employ the decimal (also known as base-10) system. The reason is not that surprising: humans have ten fingers and ten toes, so it was only natural that this became the easiest system for us to count and perform basic arithmetic with. If we instead had 16 fingers, we might employ a base-16 system. In fact, IBM adopted just such a system in the early 1960s, employing the “numerals” 0 through 9 and A through F. This system, known as “hexadecimal,” became the foundational notation for the IBM System/360 mainframes. The reason was that a programmer could far more easily remember and work with a 16-bit instruction represented by four hex characters, such as 58F0 (a “load” instruction), than by 0101 1000 1111 0000. At their lowest levels, however, even these hexadecimal (and other) systems stored and functioned based on binary numbers.
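
You can see why hex is friendlier with a quick Python sketch (58F0 here is just the raw 16-bit pattern from the example above, not an executable instruction):

```python
# The same 16-bit pattern, three ways.
value = 0x58F0                   # four hex characters
print(format(value, "016b"))     # 0101100011110000 -- sixteen binary digits
print(format(value, "X"))        # 58F0 -- far easier to read and remember
print(value)                     # 22768 -- the same value in decimal
```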

Why Is Binary So Useful?

The basic binary concept, which follows the principle that something is either on or off, offers structural advantages that make logic (even complicated logic) much easier to build. It has its roots in the work of an English mathematician named George Boole, who published a system of logic that served as the predecessor to the logic that powers all computer hardware and software. Boole’s system included the basic operations AND, OR, and NOT, which could form simple statements with a binary property (i.e., they were either true or false). These elements could then be combined and stacked into the most complex of logical constructs.
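
As a minimal sketch, Python’s own and, or, and not operators behave just like Boole’s operations on true/false values (the variable names below are invented for this example):

```python
# Each expression is either True (1) or False (0) -- a binary property.
raining = True
cold = False

print(raining and cold)   # False: AND needs both inputs to be true
print(raining or cold)    # True:  OR needs at least one input to be true
print(not raining)        # False: NOT flips its input

# Simple statements stack into more complex logic, e.g. exclusive or:
print((raining or cold) and not (raining and cold))  # True
```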

The AND, OR and NOT operations can be easily integrated into hardware in the form of “gates.” A NOT gate (known as an inverter), for example, takes as input a 1 (or 0) and then produces the opposite output 0 (or 1). Similarly, an OR gate outputs a 1 if either or both of two inputs is 1, and it outputs a 0 if neither input is 1. These gates can then be combined in various ways to make arithmetic units that add, subtract, multiply and divide. Once you have combined enough gates, you have the basic logic behind a computer. Boolean gates were implemented in the earliest computers with electromechanical relays (physical switches that were either on or off), and later in modern machines with transistors, tiny switches that also follow a binary, on/off property.
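
Here is an illustrative Python sketch, assuming nothing beyond the gates just described, that models them as functions and combines them into a half adder, one of the simplest arithmetic units (the function and helper names are this example’s own):

```python
# Model each gate as a function on single bits (0 or 1).
def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def half_adder(a, b):
    """Add two bits: the sum bit is XOR, built here from OR, AND, and NOT."""
    sum_bit = AND(OR(a, b), NOT(AND(a, b)))
    carry = AND(a, b)
    return sum_bit, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
# 1 + 1 -> sum 0, carry 1: binary 10, i.e. decimal 2
```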

Building on the work of Boole, Claude Shannon showed in 1937 how Boolean logic, in which problems can be solved by manipulating just two symbols, 1 and 0, could be carried out automatically with electrical switching circuits. In 1948, Shannon, who would later be known as the father of information theory, demonstrated that all information could be represented by zeros and ones. This was groundbreaking at the time, and it led to the rapid innovation and expansion of the computing world that would later power the Mercury, Gemini, and Apollo programs, and eventually the world of home computing developed by Bill Gates and Steve Jobs, among others.

The Nitty Gritty Of How Binary Functions

In comparison to the decimal system, which uses 10 digits and in which each digit position represents a power of 10 (1, 10, 100, 1,000, and so on), a binary system uses 2 digits, and each digit position represents a power of 2 (1, 2, 4, 8, and so on). A binary code signal is actually a series of electrical pulses that represent the individual numbers, characters, and operations to be performed. A device known as a clock transmits regular pulses, and components known as transistors (the ones mentioned above) switch on (1) or off (0) to pass or block those pulses.
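
Here is a short Python sketch of that positional idea, using plain arithmetic (the built-in int(s, 2) performs the same conversion):

```python
# Each position in 1011 carries a power of 2: 8, 4, 2, 1 (right to left).
bits = "1011"
total = 0
for position, bit in enumerate(reversed(bits)):
    total += int(bit) * 2 ** position   # 1*1 + 1*2 + 0*4 + 1*8
print(total)          # 11
print(int(bits, 2))   # 11 -- the built-in conversion agrees
```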

In the case of binary code, every decimal digit (0–9) can be represented by a set of four binary digits, which are referred to as bits. As a result, the four foundational arithmetic operations (addition, subtraction, multiplication, and division) can all be reduced to combinations of fundamental Boolean algebraic operations on binary numbers. The table below shows what this conversion looks like.

| Decimal digit | Binary (4 bits) |
| :---: | :---: |
| 0 | 0000 |
| 1 | 0001 |
| 2 | 0010 |
| 3 | 0011 |
| 4 | 0100 |
| 5 | 0101 |
| 6 | 0110 |
| 7 | 0111 |
| 8 | 1000 |
| 9 | 1001 |

**chart credit: Encyclopedia Britannica**
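
As a quick illustration of this digit-by-digit scheme (commonly called binary-coded decimal), here is a small Python sketch with a hypothetical helper name:

```python
# Encode each decimal digit as its own 4-bit group (binary-coded decimal).
def to_bcd(number):
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_bcd(592))  # 0101 1001 0010 -- one 4-bit group per digit: 5, 9, 2
```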

Sololearn: the #1 platform to learn to code, available anytime and anywhere for free. Join our community of millions of learners worldwide, where no questions go unanswered.