10 Things You Should Know About Computation and Artificial Intelligence

Emilia Matthews
Warwick Artificial Intelligence
9 min read · Jan 17, 2022

A talk by Felix Hovsepian PhD, FIMA

See more from Felix here: https://www.linkedin.com/in/felixhovsepian/

Felix Hovsepian is a distinguished mathematician and the founder of the consulting firm Blue Manifold. Felix studied Pure Mathematics at Warwick University before returning 10 years later to complete a PhD and two postdocs in Computer Science, primarily focused on machine reasoning. His main area of research is nature-inspired computation, especially swarm intelligence: the collective behaviour that results from the local interactions of individuals.

Warwick AI was lucky enough to host a unique talk by Felix, entitled “10 Things You Should Know About Computation and Artificial Intelligence”.

The following is a breakdown of those 10 lessons.

1. Who started all this?

In 1672, Leibniz invented the Stepped Reckoner, the world’s first machine capable of all four arithmetical operations. Its core mechanism, the Leibniz wheel, remained in use for 200 years. Building this machine prompted Leibniz to contemplate reducing reason, or thought, to calculations that could be carried out by a machine. In 1688, he wrote a short document called “Fundamenta calculi ratiocinatoris” in which he describes a machine that would reason symbolically through an abstract language. This underpins the idea of AI: a formal language (“Characteristica Universalis”) together with a reasoning mechanism (“Calculus Ratiocinator”).

Leibniz’s favourite equation: used by Turing to show why Pi is computable
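
The equation in question is presumably the Leibniz series for π (the image from the talk is not reproduced here). It matters because each term is produced by a simple mechanical rule, and Turing’s 1936 paper defines computable numbers as exactly those whose digits can be generated by such a rule:

$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$$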

What is the Entscheidungsproblem (decision problem)?

This is a challenge posed by Hilbert and Ackermann in 1928. It asks for an algorithm that, given a statement of first-order logic as input, answers “Yes” or “No” according to whether the statement is universally valid, i.e. provable from the axioms using the rules of logic.
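
In modern notation (an editorial gloss, not wording from the talk), the problem asks whether there exists a single algorithm $A$ such that, for every first-order sentence $\varphi$,

$$A(\varphi) = \text{Yes} \iff\ \models \varphi,$$

that is, $A$ answers “Yes” exactly when $\varphi$ is valid, which by Gödel’s completeness theorem coincides with being provable.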

2. Response of Gödel, Church, Turing and Post to the problem

Kurt Gödel (1931) — “On Formally Undecidable Propositions of Principia Mathematica and Related Systems” [1]

Alan Turing (1936) — “On computable numbers with an application to the Entscheidungsproblem” [2]

Alonzo Church (1936) — “An Unsolvable Problem of Elementary Number Theory” [3]

Emil Post (1936) — “Finite Combinatory Process — Formulation 1” [4]

“Gödel showed that you can’t boil maths down to logic, maths can’t be mechanised”

All four of these famous papers bear on the Entscheidungsproblem, and together they establish that it is unsolvable. Church’s paper gave us the lambda calculus, which is central to theoretical computer science. Equally important are the recursive functions introduced by Gödel and, of course, Turing’s abstract model of a human computer, the Turing Machine. (Post is not mentioned as often, but his formulation was in fact very close to Turing’s.)

3. The Turing Machine

The Turing Machine can be described as a head moving along an infinitely long strip of tape. The head can perform three operations: read the current symbol, write a symbol, and move left or right. Despite its simplicity, this machine can simulate any computer algorithm. [5]

A good way of visualising this is as a book of instructions, where each instruction is followed by a change to the machine’s “state of mind”. The combination of the state and the symbol the machine is currently “observing” determines its behaviour at any one time.
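
To make this concrete, here is a minimal Turing machine simulator in Python (an illustrative sketch, not code from the talk; the bit-flipping rules are an assumed example). The rules table maps a (state, observed symbol) pair to a write/move/next-state triple, exactly the “book of instructions” described above:

```python
# Minimal Turing machine: rules map (state, symbol) -> (write, move, next state).
# For brevity the tape only grows to the right; a full simulator would
# also extend it to the left.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):               # extend the tape as the head moves past it
            tape.append(blank)
        symbol = tape[head]                 # read
        write, move, state = rules[(state, symbol)]
        tape[head] = write                  # write
        head += 1 if move == "R" else -1    # move
    return "".join(tape)

# A machine that flips every bit, then halts on the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # -> 0100_
```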

With this machine, Turing was able to show that the Entscheidungsproblem cannot be solved, while also giving a precise form to the concept of an algorithm.

Mathematical models of Computation

Computation is a chain of steps that follows a well-defined model (an algorithm), whereas a calculation simply transforms input to output without such a model. Most people think of imperative or object-oriented languages like Python when they think of models of computation, but in fact there are many more. Here are some typical ones:

Models of computation

Note: Concurrency is not the same as parallelism. Concurrency means making progress on multiple tasks in overlapping time periods, though not necessarily executing them at the same instant. Parallelism means running tasks or subtasks truly simultaneously, for instance on multiple CPUs. A sketch of the distinction follows below.
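
Here is a minimal Python illustration of that distinction (an editorial example; the task names and workloads are assumptions). The asyncio half interleaves two waiting tasks on a single thread, while the multiprocessing half runs CPU-bound work on separate cores:

```python
import asyncio
from multiprocessing import Pool

async def fetch(name):
    await asyncio.sleep(1)       # yields control so the other task can run
    return f"{name} done"

async def main():
    # Concurrency: both waits overlap on one thread, ~1 second in total.
    return await asyncio.gather(fetch("a"), fetch("b"))

def heavy(n):
    # CPU-bound work: a candidate for true parallelism.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    print(asyncio.run(main()))
    # Parallelism: the two calls to heavy() run simultaneously
    # in separate processes, on separate CPU cores.
    with Pool(2) as pool:
        print(pool.map(heavy, [10**6, 10**6]))
```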

4. Claude Shannon — The Father of Information Theory

In 1936, Shannon graduated with degrees in both Electrical Engineering and Mathematics. In 1937, he wrote his master’s thesis, “A Symbolic Analysis of Relay and Switching Circuits” [6], which is considered one of the best of all time: it showed how Boolean algebra could be used to analyse and design the relay switching circuits Bell Labs relied on to carry phone conversations across America. From there, his mentor Vannevar Bush suggested Shannon research a mathematical formulation for Mendelian genetics, and sure enough, in 1940, Shannon produced a PhD thesis, “An Algebra for Theoretical Genetics”. [7]

The beginnings of the field called information theory lie in 1948, when Shannon published “A Mathematical Theory of Communication” [8], in which he introduced “information entropy” as a measure of the uncertainty in a message. In 1956, Shannon left the field after complaining about the hype surrounding it, hype much like that surrounding AI today. [9]
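
For reference, the quantity Shannon introduced is defined as follows (the standard formula, added here for context): a source that emits symbol $x$ with probability $p(x)$ has entropy

$$H = -\sum_{x} p(x)\,\log_2 p(x) \ \text{bits},$$

so a fair coin carries one bit per toss, while a heavily biased coin, whose outcome is nearly certain, carries far less.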

5. Turing’s child machine

“You can have a machine that changes its own code while it’s running, and if you program in Lisp, you can actually do that and watch your own code be modified… it’s absolutely beautiful”

In 1948, Turing came to the profound realisation that there is no difference between code and data. He began thinking about systems that could modify their own programs: nets of logical components that could be trained to ‘learn’ a certain function, like a child. In essence, Turing was predicting neural networks, although he was thinking in terms of topology and geometry, whereas today’s neural networks are more hierarchical and based on algebra. Turing also discovered that learning emerges when two forces act on a system, “reward” and “punishment”, and in 1952 he built a mathematical model that shows this formally. [10]
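
The “code is data” idea is easy to demonstrate in any language that exposes its own syntax trees. Below is a minimal Python translation of the Lisp idea quoted above (an editorial illustration, not code from the talk): a program is built as an ordinary data structure, executed, then modified and executed again:

```python
import ast

# The program as data: a syntax tree for the expression 2 * (3 + 4).
tree = ast.Expression(
    body=ast.BinOp(
        left=ast.Constant(2),
        op=ast.Mult(),
        right=ast.BinOp(left=ast.Constant(3), op=ast.Add(), right=ast.Constant(4)),
    )
)
ast.fix_missing_locations(tree)
print(eval(compile(tree, "<data>", "eval")))  # -> 14

# Because the tree is ordinary data, the "code" can be rewritten in place:
tree.body.op = ast.Add()                      # change * to +
ast.fix_missing_locations(tree)
print(eval(compile(tree, "<data>", "eval")))  # -> 9
```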

At this point, he was essentially predicting today’s reinforcement learning techniques (sketched below), further illustrating just how forward-thinking and influential his work was. To top it off, Turing collaborated with David Champernowne on a chess-playing program called “Turochamp”. It was written for a computer that did not yet exist, and yet when run on modern machines it is still a very good program.
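
As a hint of what Turing was anticipating, here is a minimal reward/punishment learner in Python (a generic sketch of the modern idea, not Turing’s 1952 model; the actions, probabilities and learning rate are all assumptions):

```python
import random

values = {"left": 0.0, "right": 0.0}   # the agent's estimate of each action
alpha = 0.1                            # learning rate

def environment(action):
    # "right" is rewarded 80% of the time, "left" only 20%;
    # a negative return is the "punishment".
    return 1.0 if random.random() < (0.8 if action == "right" else 0.2) else -1.0

for _ in range(1000):
    action = random.choice(list(values))    # explore both actions
    reward = environment(action)            # reward or punishment
    values[action] += alpha * (reward - values[action])

print(values)  # "right" drifts toward +0.6, "left" toward -0.6
```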

6. von Neumann: “Self-replicating machines”

Sydney Brenner pointed out that the well-known description of a self-reproducing system in Schrödinger’s book “What is Life?” (1944) is incorrect. He spotted that it was von Neumann who presented an accurate description of a self-replicating machine, at the Hixon Symposium in 1948, and that this turned out to be the right mechanism for DNA replication too. This was years before Watson and Crick worked out how DNA replicates, and yet physicists and biologists still talk about Schrödinger rather than von Neumann’s papers.

Turing’s most cited paper is “Computing Machinery and Intelligence”, published in 1950. Here he discussed how most programs will produce errors or output we cannot make sense of. This anticipates today’s papers on the problem of making ethical AI that is robust and can explain its decisions.

“That’s the nature of the beast…if you want intelligence, you have to accept that at times it’s going to go wrong”

After spending two months with Turing in 1943, Shannon became interested in machine learning. In 1952, he built “Theseus”, a mouse that solves a maze using telephone relays as a ‘brain’.

In 1955, Allen Newell, Herbert Simon and Cliff Shaw developed “The Logic Theorist”, the first program to “mimic human reasoning”, capable of proving 38 of the 52 theorems in chapter two of the famous book Principia Mathematica. [11]

It was in the 1955 Dartmouth Conference proposal [12] that the phrase “Artificial Intelligence” first appeared, where the endeavour was described as follows:

“An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”

1956 Dartmouth conference: The Founding Fathers of AI

7. Ray Solomonoff — Algorithmic Information Theory

Solomonoff founded algorithmic information theory and pioneered what is now called Kolmogorov complexity. In 1956, he also wrote the first-ever paper on probabilistic machine learning, entitled “An Inductive Inference Machine” [13], which he handed out during the Dartmouth Conference.
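
The field’s central quantity (the standard definition, added here for context) is the Kolmogorov complexity of a string $x$: the length of the shortest program that prints $x$ on a fixed universal machine $U$,

$$K_U(x) = \min\{\, |p| : U(p) = x \,\}.$$

A string is “random” in this sense precisely when no program much shorter than the string itself can generate it.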

In 1959, Arthur Samuel defined machine learning as “the field of study that gives computers the ability to learn without being explicitly programmed”. Then in 1961, Samuel’s checkers-playing program beat the 4th best-ranked checkers player in the US. His research on non-numerical computation was the first of its kind and was later drawn on by the people designing operating systems for IBM.

8. The definition of AI

In 1973, during the Lighthill debate on AI, John McCarthy defined AI as “a science, it is the study of problem-solving & goal-achieving processes in complex situations” and specified that it is distinct from cognitive science. He had previously defined it as “the science and engineering of making intelligent machines” [14] at the time of the Dartmouth Conference in 1956.

More recently, in 2019, Dr Eric Horvitz (Chief Scientist, Microsoft) defined AI as “the scientific study of the computational principles behind thought and intelligent behaviour”. This may be considered a ‘circular’ definition of AI, but it becomes more useful when paired with the four pillars of AI, which Horvitz defined as follows:

  • Perception
  • Learning
  • Reasoning
  • Natural Language

“If you want an AI system, the system must have the capacity to learn things, perceive things, reason and communicate via the use of natural languages”

9. AI and Warwick

Sir Christopher Zeeman

At Warwick, we have a building named in honour of Sir Christopher Zeeman, Founding Professor of Mathematics at the University of Warwick and both an incredible mathematician and teacher. Interestingly, he was mentored by Shaun Wylie, one of the men behind “Machiavelli”, a rival of Turing’s chess-playing program “Turochamp”.

The Ratio Club

The Ratio Club (1949–1955) was a group of young academics who met to discuss cybernetics, the science concerned with systems, whether animals or machines, and how they interact with each other. The club included Turing and Donald MacKay, father of Robert MacKay, a Professor of Mathematics at Warwick.

Prof. David M.R. Park

Prof. David M.R. Park, one of the earliest members of the computer science department at Warwick, joined McCarthy’s Lisp group in 1959, where he made significant contributions to the theory of Lisp (the second-oldest high-level programming language, after Fortran) and to its implementation.

“He was one of the meanest programmers I’ve ever met in my life…he would design languages from a theoretical perspective, he would do the domain semantics, and then implement it.”

If you are interested in event-driven or reactive programming, read the book “The Power of Events” [15] written by Prof. David Luckham, one of the people Prof. David M.R. Park worked with.

“It’s years ahead of what you’ll find in reactive libraries.”

10. Leslie Valiant (supervised by Warwick Professor Mike Paterson)

In 2012, Valiant made the point that building an intelligent machine, one with the ability to observe, communicate in natural language and behave autonomously, is extremely difficult: “You have to study AI for 40 years to understand that one, it’s very deep”.

Note: Anyone interested in deep learning should consider looking at Ian Goodfellow’s work.

From this talk, we have learnt a lot of interesting facts about the history and foundations of AI, facts that may help shape our perspective when looking to the future of the field.

A recording of his talk can be found on YouTube.

References/Resources

  1. https://monoskop.org/images/9/93/Kurt_Gödel_On_Formally_Undecidable_Propositions_of_Principia_Mathematica_and_Related_Systems_1992.pdf
  2. https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf
  3. https://www.ics.uci.edu/~lopes/teaching/inf212W12/readings/church.pdf
  4. https://www.wolframscience.com/prizes/tm23/images/Post.pdf
  5. https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/turing-machine/one.html
  6. https://www.cs.virginia.edu/~evans/greatworks/shannon38.pdf
  7. https://dspace.mit.edu/handle/1721.1/11174
  8. https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
  9. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1056774
  10. https://www.dna.caltech.edu/courses/cs191/paperscs191/turing.pdf
  11. https://lesharmoniesdelesprit.files.wordpress.com/2015/11/whiteheadrussell-principiamathematicavolumei.pdf
  12. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
  13. http://www.raysolomonoff.com/publications/An%20Inductive%20Inference%20Machine1957.pdf
  14. https://www.sciencedaily.com/terms/artificial_intelligence.htm
  15. https://sisis.rz.htw-berlin.de/inh2012/12402838.pdf
