Logic

Robert Mundinger · Published in CodeParticles · Oct 22, 2017

I think, therefore I am.
— René Descartes

Logic is the study of truth.

It is the question of what you can know (with certainty), based on other information:

Is x true, given a & b?

Here we are dealing with inputs (a and b) and an output (x). The answer to our equation is either TRUE or FALSE.

Unlike typical mathematical equations, whose variables can produce a value anywhere from negative infinity to infinity (for instance, with E = mc², if m = 2000 grams, then E = 179,751,035,747,363,528 joules), logic deals in only two values: true and false. We are dealing with what is and what isn’t. It’s binary. The only answer on the right side of the equation is TRUE or FALSE.
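To make the contrast concrete, here is a minimal Python sketch (the variable names are only illustrative): the physical equation can produce any number, while the logical one can only ever produce True or False.

```python
# A numeric equation can output any value on the number line.
m = 2                       # mass in kilograms (2000 grams)
c = 299_792_458             # speed of light in metres per second
energy = m * c ** 2         # E = mc²
print(energy)               # 179751035747363528 joules

# A logical equation can only ever output True or False.
a = True
b = False
x = a and not b             # "Is x true, given a and b?"
print(x)                    # True
```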

Take a syllogism, in which two statements that are assumed to be true are followed by a conclusion that is either true or false.

All men are mortal.

Socrates is a man.

Therefore Socrates is mortal.

These arguments form the basis of deductive reasoning where truth is determined using objective facts.
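This kind of deduction can even be checked mechanically. Here is a minimal Python sketch, using hypothetical sets to stand in for “all men” and “all mortals”:

```python
# Premise 1: all men are mortal (every member of `men` is also in `mortals`).
mortals = {"Socrates", "Plato", "Hypatia"}
men = {"Socrates", "Plato"}
assert men <= mortals            # `men` is a subset of `mortals`

# Premise 2: Socrates is a man.
assert "Socrates" in men

# Conclusion: therefore Socrates is mortal.
print("Socrates" in mortals)     # True
```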

Humans are not terribly logical most of the time. Read the comments section of any major publication and you’ll quickly see we are subject to horrific fallacies and believe logical paradoxes. As David Hume famously said, “reason is, and ought only to be the slave of the passions.” Our emotions are a far greater variable in our actions than reason.

Gottfried Wilhelm Leibniz

Gottfried Wilhelm Leibniz (who invented calculus independently of, and at the same time as, Sir Isaac Newton) had ideas about reducing the workings of the human mind into simple parts, like an alphabet of human thought.

It is obvious that if we could find characters or signs suited for expressing all our thoughts as clearly and as exactly as arithmetic expresses numbers or geometry expresses lines, we could do in all matters insofar as they are subject to reasoning all that we can do in arithmetic and geometry. For all investigations which depend on reasoning would be carried out by transposing these characters and by a species of calculus.
— Gottfried Wilhelm Leibniz, Preface to the General Science, 1677

He outlined his characteristica universalis, an artificial language in which grammatical and logical structure would coincide, which would allow reasoning to be reduced to calculation.

Boolean Algebra

Building on the ideas of Newton and Leibniz, in the 19th century, George Boole sought to describe the laws of logic and thought through mathematics — a way to model human reasoning.

Boole argued that just as we can model the movement of a feather falling from the Leaning Tower of Pisa, so too can we model the inner workings of the human mind. And if we can model thought through mathematical equations, we can build a machine to replicate it… a reasoning machine.

Those mathematical equations were called Boolean Algebra.

Boole published these ideas in a book called The Laws of Thought in 1854. Our output is always either true or false, and our operations are not arithmetic ones like multiplication and addition, but set operations like union and intersection.

(Diagram: the intersection and union of two sets)

“A and B” is true only if A is true and B is true. “A or B” is true if A is true, or if B is true, or if both A and B are true.

This can also be represented in truth tables.

For example: “If you want my body AND you think I’m sexy” is true only when both conditions are true; only then should you come on baby and let me know. With an OR statement, “If you want my body OR you think I’m sexy”, just one of the conditions needs to be true for the whole expression to be TRUE.
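As a minimal Python sketch (the variable names are purely illustrative), the truth tables for AND and OR over those two conditions can be printed directly:

```python
from itertools import product

# Truth tables for AND and OR, using the two conditions from the example above.
print("want_my_body   think_im_sexy   AND     OR")
for want_my_body, think_im_sexy in product([True, False], repeat=2):
    both = want_my_body and think_im_sexy    # true only if both conditions hold
    either = want_my_body or think_im_sexy   # true if at least one condition holds
    print(f"{want_my_body!s:<15}{think_im_sexy!s:<16}{both!s:<8}{either!s}")
```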

Computing

These ideas sat in the ether for the next 70 years or so, until one of the great minds of the 20th century found an application for them. In 1937, 21-year-old Claude Shannon wrote a Master’s thesis at MIT called A Symbolic Analysis of Relay and Switching Circuits. In it, he described how a system of electric relays could be set up to mechanically carry out the operations of Boolean Algebra. This was a revolutionary concept that paved the way toward the computer age and completely changed the world. Several groups would implement these ideas over the next 10 years, throughout the course of World War II.
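To get a feel for Shannon’s insight, here is a minimal sketch in Python rather than in relays: switches wired in series behave like AND, switches wired in parallel behave like OR, and composing circuits composes Boolean operations.

```python
def series(switch_a: bool, switch_b: bool) -> bool:
    """Two switches in series: current flows only if both are closed (AND)."""
    return switch_a and switch_b

def parallel(switch_a: bool, switch_b: bool) -> bool:
    """Two switches in parallel: current flows if either is closed (OR)."""
    return switch_a or switch_b

# Composing circuits composes Boolean operations.
print(series(True, False))                   # False: the series path is broken
print(parallel(True, False))                 # True: one closed branch is enough
print(parallel(series(True, True), False))   # True: (A AND B) OR C
```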

Universal Computer

One of these groups included another of the godfathers of the digital age, John von Neumann, who helped apply these principles to a machine called the ENIAC.

A computer consists of memory, input/output, and a central processing unit. This fundamental design, which we call the von Neumann architecture, still underpins all computers today.

These principles are the foundation of computing, and they haven’t changed.
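As a toy illustration of that architecture (the instruction set below is entirely made up), a von Neumann machine is a single memory holding both program and data, plus a CPU running a fetch-decode-execute loop:

```python
# A toy von Neumann machine: one memory holds both the program and the data,
# and the CPU repeatedly fetches, decodes, and executes instructions.
memory = [
    ("LOAD", 6),    # 0: load the value at address 6 into the accumulator
    ("ADD", 7),     # 1: add the value at address 7
    ("STORE", 8),   # 2: store the accumulator at address 8
    ("PRINT", 8),   # 3: output the value at address 8
    ("HALT", 0),    # 4: stop
    None,           # 5: unused
    2, 3, 0,        # 6-8: data
]

accumulator = 0
program_counter = 0
while True:
    opcode, operand = memory[program_counter]   # fetch and decode
    program_counter += 1
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator
    elif opcode == "PRINT":
        print(memory[operand])                  # input/output: prints 5
    elif opcode == "HALT":
        break
```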

As the English mathematician and computer scientist Alan Turing showed, any general-purpose computer can be programmed to act like any other computer; a machine with this property is called Turing complete. Basically, this means no computer can do anything another can’t. Some are faster, or have more memory, more storage, or more intuitive input and output, but a computer from 1945 could, in principle, do anything computers can today. It would just take far longer.

There is an important point to make here: the structure of computers has nothing to do with electronics. We currently use electronics to implement the principles of computer science, but that is not necessary. Computers don’t have to be binary, and they don’t have to be electronic. Many of those in the past weren’t, and those in the future likely will not be — instead, they might be biological or quantum. The possibilities are potentially limitless, and that is an exciting thought.
