George Boole's, Claude Shannon's and Alan Turing's Roles in the Evolution of Computing
And the fact that Logic is not Intelligence
I was trying to teach programming to my 11-year-old son and found that I had only a foggy idea of how it evolved, especially the logic part and the exact roles of people like George Boole, Claude Shannon, Alan Turing and Charles Babbage in its history. So I thought I would dig deep and write this note for myself; it may be useful to others as well. As I started writing, the story also seemed to link with the evolution of AI. AI as a field started very early, heavily influenced, we can say, by the possibility of programmable logic that Claude Shannon's paper opened up: the idea that intelligence is something that could be defined by logical rules, and if so, could be implemented by computers. Though we have advanced enormously in efficiency over the early computing machines, we are as far from an intelligent machine now as in those early days, when it looked tantalizingly near, when geniuses like Alan Turing thought and wrote about 'thinking machines' and created the Turing test as a test for intelligence. I guess this is the recurring problem of AI as a field: it captures the public imagination periodically and then lets it down. But this note is not about AI; it is just a refresher walk into the past.
The first step — Mathematical formulation of Logic
We don't need to go as far back as Charles Babbage's first mechanical computer, the Difference Engine, or his punch-card-driven, stored-operation Analytical Engine. It will suffice to start with Boolean logic, for it is with the application of Boolean logic that computers became more than automated (mechanically or electrically driven) calculating machines: more than calculators.
The story of modern computers is not one of a single discovery but of a set of events that unfolded from the 1800s (the Charles Babbage era) to around the 1940s.
What we have as modern systems evolved through many streams. There were concepts like input and output, pioneered by Charles Babbage in his mechanical computing machine (he got the idea of punch cards from the Jacquard loom); inventions like relays, electromagnet-driven switches used to automate telegraph and later telephone switchboards; the invention of Boolean logic, where logic was first presented as mathematical equations; Claude Shannon's realization that Boolean logic could be applied to simplify those same relay circuits; Alan Turing's theoretical concept of a computing machine, a symbol-manipulating Turing machine; John von Neumann's idea of a stored-program machine; the building of perhaps the first computer resembling current ones, the EDVAC; and the invention of vacuum tubes and then transistors to replace the electromagnetic switches used in relays. All the pieces finally fit together to arrive at the modern computer you are reading this on.
It's a bit hard to piece together the exact flows [ref], as the influence of one figure on another is sometimes unclear. Though Charles Babbage and Boole were contemporaries and had met, the role of each is clear. Not so with the other three: Shannon, Turing, and von Neumann lived in the same period and interacted with each other (ref + many other articles), and it is not clear who influenced whom.
If you are interested in the history, a good read to start with would be this blog: unlike others, it traces the path from the electrical relays used in railroads, telegraph and telephone switching, through Shannon's introduction of relay logic to automate and streamline the first mechanical computing engines built at IBM, and finally the evolution into the algorithmic or general-purpose computer of today.
Charles Babbage's mechanical computer was a clever gearing mechanism for computing numbers, for example for navigational charts, which had earlier been produced by error-prone 'human computers' (that seems to have been the term used).
Early relay computers had concepts like input, output, and processing but were not logic based. There are many branches in this evolution, but few ideas are more central to the evolution of modern programmable computers than the concept of logic.
The first programmable computer was the Z1 by Konrad Zuse. It was the first machine based on binary, and it was driven by punched tape (perforated film stock). Here is a video snippet of it. The term 'program' here means being able to change the output by giving variable input, not a logical program in the modern sense.
There were other computers, like the Harvard Mark I, built during this time.
What they all lacked was the Boolean Logic part.
In this article, we are more interested in this part of computer history, and in Alan Turing's design of an algorithmic, instruction-driven, state-changing machine within it.
Role of George Boole
Here is an excerpt from George Boole's first monograph, The Mathematical Analysis of Logic (1847):
“On the principle of a true classification, we ought no longer to associate Logic and Metaphysics, but Logic and Mathematics. Should anyone after what has been said, entertain a doubt upon this point, I must refer him to the evidence which will be afforded in the following Essay. He will there see Logic resting like Geometry upon axiomatic truths, and its theorems constructed upon that general doctrine of symbols, which constitutes the foundation of the recognized Analysis” George Boole https://history-computer.com/Library/boole1.pdf
In essence, he here puts logical expressions into algebraic form.
This was further refined in his second book, An Investigation of the Laws of Thought (1854), where the Boolean logic (true/false) that anyone connected with computer science can immediately appreciate takes shape:
1st. To express the Proposition, “The proposition X is true.”
x = 1
2nd. To express the Proposition, “The proposition X is false.”
x = 0.
3rd. To express the disjunctive Proposition, “Either the proposition X is true or the proposition Y is true;” it being thereby implied that the said propositions are mutually exclusive, that is to say, that one only of them is true
x(1 − y) + y(1 − x) = 1
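Boole's formula for the exclusive "either X or Y" can be checked directly, since his truth values are just the numbers 0 and 1. A minimal sketch in Python (the function name is mine, for illustration):

```python
# Boole's algebraic encoding of the exclusive disjunction:
# x(1 - y) + y(1 - x) equals 1 exactly when one, and only one, of x, y is 1.

def boole_xor(x: int, y: int) -> int:
    """Evaluate Boole's expression for 'either X or Y, but not both'."""
    return x * (1 - y) + y * (1 - x)

# Enumerate the full truth table.
for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y} -> {boole_xor(x, y)}")
```

Running this prints 0 for the agreeing cases (0,0) and (1,1) and 1 for the mixed ones, which is exactly the modern XOR operation.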
This theory was picked up by Claude Shannon.
Role of Claude Shannon
“In his thesis, Shannon, a dual degree graduate of the University of Michigan, proved that Boolean algebra could be used to simplify the arrangement of the relays that were the building blocks of the electromechanical automatic telephone exchanges of the day. Shannon went on to prove that it should also be possible to use arrangements of relays to solve Boolean algebra problems.”

“In the early 20th century, several electrical engineers intuitively recognized that Boolean algebra was analogous to the behavior of certain types of electrical circuits. Claude Shannon formally proved such behavior was logically equivalent to Boolean algebra in his 1937 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits.” (source: Wikipedia)
“After graduating in 1936, Shannon went directly to MIT to take up a work-study position he had seen advertised on a postcard tacked to a campus bulletin board. He was to spend half his time pursuing a master’s degree in electrical engineering and the other half working as a laboratory assistant for computer pioneer Vannevar Bush, MIT’s vice president and dean of engineering. Bush gave Shannon responsibility for the Differential Analyzer, an elaborate system of gears, pulleys and rods that took up most of a large room, and that was arguably the mightiest computing machine on the planet at the time…
Conceived by Bush and his students in the late 1920s, and completed in 1931, the Differential Analyzer was an analog computer. It didn’t represent mathematical variables with ones and zeroes, as digital computers do, but by a continuous range of values: the physical rotation of the rods. Shannon’s job was to help visiting scientists “program” their problems on the analyzer by rearranging the mechanical linkages between the rods so that their motions would correspond to the appropriate mathematical equations.
Shannon was especially drawn to the analyzer’s wonderfully complicated control circuit, which consisted of about a hundred “relays”, switches that could be automatically opened and closed by an electromagnet. But what particularly intrigued him was how closely the relays’ operation resembled the workings of symbolic logic, a subject he had just studied during his senior year at Michigan. Each switch was either closed or open, a choice that corresponded exactly to the binary choice in logic, where a statement was either true or false. Moreover, Shannon quickly realized that switches combined in circuits could carry out standard operations of symbolic logic. The analogy apparently had never been recognized before. So Shannon made it the subject of his master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits.”
… toward the end, for example, Shannon pointed out that the logical values true and false could equally well be denoted by the numerical digits 1 and 0. This realization meant that the relays could perform the then arcane operations of binary arithmetic. Thus, Shannon wrote, “it is possible to perform complex mathematical operations by means of relay circuits.” As an illustration, Shannon showed the design of a circuit that could add binary numbers.
Even more importantly, Shannon realized that such a circuit could also make comparisons. He saw the possibility of a device that could take alternative courses of action according to circumstances, as in, “if the number X equals the number Y, then do operation A.” Shannon gave a simple illustration of this possibility in his thesis by showing how relay switches could be arranged to produce a lock that opened if and only if a series of buttons was pressed in the proper order.
The implications were profound: a switching circuit could decide, an ability that had once seemed unique to living beings.
Shannon found that one could directly translate a system of equations of propositional logic into a physical circuit of relay switches, through a rote procedure. “In fact,” he concluded, “any operation that can be completely described in a finite number of steps using the words if, or, and, etc. can be done automatically with relays.” — https://technicshistory.wordpress.com/2017/05/10/lost-generation-the-relay-computers/#fn-6571-12
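The binary-adder illustration mentioned above can be sketched in modern terms. This is not Shannon's actual relay diagram, just a hypothetical one-bit "full adder" built only from the Boolean operations (AND, OR, XOR) that his relay circuits could realize, chained to add whole numbers:

```python
# A one-bit full adder built purely from Boolean operations -- the kind of
# circuit Shannon showed could be realized with relays. Illustrative sketch,
# not Shannon's actual design.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum_bit, carry_out) for one binary digit position."""
    s = a ^ b ^ carry_in                         # sum: XOR of the three inputs
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry if any two inputs are 1
    return s, carry_out

def add_binary(x: int, y: int, width: int = 8) -> int:
    """Add two non-negative integers by chaining full adders bit by bit,
    the way a relay (or transistor) adder circuit ripples a carry along."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(13, 29))  # 42
```

Every arithmetic unit in a modern CPU is, at bottom, an elaboration of exactly this kind of gate arrangement.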
Shannon’s work was used widely in the relay based telephone exchanges and railway track selection relay circuits.
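Shannon's combination-lock illustration, the circuit that "decides", can likewise be sketched as a tiny state machine: the lock opens if and only if the buttons are pressed in the proper order. The combination and the reset-on-error behavior here are my own choices for the example, not details from the thesis:

```python
# A sketch of Shannon's lock illustration: progress through the combination
# is the circuit's state; a wrong button resets it. (Hypothetical combination.)

COMBINATION = [3, 1, 4]

def lock_opens(presses: list[int]) -> bool:
    state = 0  # how many correct buttons have been seen so far
    for button in presses:
        if button == COMBINATION[state]:
            state += 1
            if state == len(COMBINATION):
                return True   # full sequence matched: the lock opens
        else:
            state = 0         # wrong button: progress resets entirely
    return False

print(lock_opens([3, 1, 4]))  # True
print(lock_opens([1, 3, 4]))  # False
```

The point of the illustration survives the translation: a circuit whose behavior depends on the history of its inputs is already taking "alternative courses of action according to circumstances."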
Role of Alan Turing
In contrast to Shannon's paper, Alan Turing's paper (“On Computable Numbers, with an Application to the Entscheidungsproblem”, 1936) is highly technical. Its primary historical significance lies not in its answer to the decision problem but in the template for computer design it provided along the way.
In his paper, Turing describes a simple machine that moves over a tape of symbols and is able to change those symbols, and he argues that such a machine can compute any computable number: “It is my contention that these operations include all those which are used in the computation of a number.” This is the Turing machine.
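The machine described above is simple enough to simulate in a few lines. Below is a minimal sketch: a head moves over a tape, reading, writing, and changing state according to a finite rule table. The example rules (a machine that flips every bit and halts at the first blank) are my own toy illustration, not one of the machines from the paper:

```python
# A minimal Turing machine simulator: a head reads a symbol, consults a rule
# table, writes a symbol, moves left or right, and changes state, until it
# reaches the halting state. "_" stands for a blank cell.

def run(tape: list[str], rules: dict, state: str = "start") -> list[str]:
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)   # extend the tape on demand
        pos += 1 if move == "R" else -1
    return tape

# Rule table: (state, read symbol) -> (write symbol, move, next state).
# This toy machine inverts every bit, then halts on the blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(list("1011"), flip_bits))  # ['0', '1', '0', '0', '_']
```

The striking part is how little machinery is needed: a tape, a head, and a lookup table suffice, in principle, for anything any computer can compute.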
While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic -https://www.theatlantic.com/technology/archive/2017/03/aristotle-computer/518697/
Later on, John von Neumann took up (or perhaps worked in parallel on) this universal computing machine idea and, himself a great mathematician and physicist, developed the von Neumann architecture of a computer, the stored-program computer, which resulted in the build-out of perhaps the first 'modern' computer, the EDVAC.
This, in short, is the history I could trace after consulting many articles, blogs, and Wikipedia. I am not sure all of the above is accurate, or whether I have missed certain people or events; history sometimes seems as hard to tell as the future. Usually that is due to too little information, but in this case it is due to too many sources, with the thread of events never quite clear.
A short Epilogue
Artificial Intelligence has meant different things at different times in our brief computer history. Basically, whenever computers start to do something previously thought doable only by humans, it is considered Artificial Intelligence for a while, until the understanding percolates to the masses and it is no longer thought of that way. A good working definition: AI is whatever does not yet seem possible to automate; once it becomes possible, it is no longer AI.
All that we do with computers fits into Boole's, Shannon's, and Turing's definition of a computer, and nothing has changed in this respect from the older, less efficient machines to the current ones. However deep your network is, or however good it gets at detecting objects or translating speech, it still just follows Boolean logic, and at some level seems limited by it. AI, the thinking machine that seemed so close with the advent of modern computers, is as elusive now as it was then; one has only to follow the repeated waxing and waning of AI. If you want to puzzle over this, there are a whole lot [1..n] of articles and thoughts on it. Thought seems an endless loop, and it seems almost impossible to understand the process from within.
Here is a beautiful video I re-found.