Biology, Psychology, Math: AI Broad or AI Narrow

Geoffrey Gordon Ashbrook
Oct 21, 2023


Claude Shannon [Information Theory], John McCarthy [LISP], Ed Fredkin and Joseph Weizenbaum [ELIZA] (1966)

STEM integration, language, and non-monophyletic trees

2023.10.06,15,21 G.G.Ashbrook

Assignment One:

Brainstorm: List the top 5–10 items that you associate with Artificial Intelligence, then ask a friend or colleague to do the same and compare.

Planning AI: History, Amnesia, Priorities, & Distractions

Reality vs. Attraction: AI, STEM & Pathology

Sciences Hard and Soft: AI & Distraction

List 1:

AI:

- Biology

- Math & Logic

- Psychology Etc.

Projects:

- Value

- Function

- Meaning

- Progress

vs.

List 2:

AI:

- horror

- shock

- popularity

- winning

- clicks

- wealth

- easy money

- bling

- fame

- violence

- entertainment

Themes in Collapse & Disinformation:

- amnesia

- bad choices

- failed processes

- failed communication

AI Broad or AI Narrow

Do we see AI (Artificial Intelligence, Machine Learning, Data Science: AI-ML-DS) as a narrow offshoot of applied computer science, where chiefly one particular deep-learning technology is used by industry and ‘hyped’ by media?

Do we see AI as a broad intersection between many STEM (and perhaps non-STEM) areas that have grown together over time, including being ‘non-monophyletic,’ meaning either one thing with multiple origins or a collection of things with separate origins?

How are we using the terms ‘broad’ and ‘narrow’ in this context?

“Broad” and “narrow” can be an especially important pair of terms for how AI is described and considered before and after 2022 and the rise and spread of LLM GPT ANN deep learning models. During the status quo period from 2012 to 2022, people had gotten used to a ‘normal’ world where AI could only be conceived of as ‘narrow.’ ‘Broad’ or ‘general’ AI (not well-defined terms, more arm-wavy gestures) was considered either only possible in an alien distant future or impossible in principle.

But here broad and narrow are being used in a different context: not to describe the ability of a specific AI technology, but to look at the AI-ML field overall. Do we take a narrow view (“It’s just calculus. It has nothing to do with biology.”) or a less narrow view of where AI has come from and whether it is interdisciplinary or not (regardless of whether we apply AI to a task or project that might be ‘broad’ or ‘narrow’ in other uses of those terms)?

How Deeply Do You Look? A Chicken-and-Egg Problem of Terminology

Why were Alan Turing, Claude Shannon, and John von Neumann, founders of not only computer science but AI specifically, “mathematicians” and not, well, computer scientists?

By analogy, if you are going to study tea, and what tea is, and connections to and from tea, what do you do when the first recorded mention of the word “tea” is probably in the late 1600’s (difficult to pin down for some reason), yet obviously tea (even if referring strictly to tea made only from the Camellia sinensis leaf) is an international subject that extends through time and languages far beyond 1600’s England?

There is also a kind of chicken-and-egg problem when trying to understand something for which the vocabulary we use now arose after the events we are looking at. In a number of STEM areas, and around the same time in the 1800’s, Gregor Mendel, George Boole, Charles Darwin, and Charles Lyell were all developing ideas that we now describe using terms that did not exist when they were doing their work.

The same may be the case with trying to understand developments such as the ‘invention’ or ‘discovery’ of algebra (see Ian Stewart’s ‘Significant Figures’ https://www.amazon.com/Significant-Figures-Lives-Great-Mathematicians/dp/0465096123 ), where different parts were added by different people in different places and times, making the whole story difficult to describe clearly using the same vocabulary throughout.

AI is particularly tricky for many of these compounding reasons and more, including the secrecy and confusion generated by WWII, where parts of the story are still considered ‘national security’ top secret, as well as cultural biases such as the gender biases that placed the early women programmers, and Alan Turing himself, outside of what could be discussed. E.g. Klára Dán von Neumann, John von Neumann’s wife, was a great mathematician and directly involved in early computer and software development, yet is not mentioned at all in books attempting to focus entirely on the forgotten ENIAC programmer women…not to lay blame but to describe the deeply obscure and confused state of the history of computers. Klára Dán von Neumann even wrote her own memoir, “A Grasshopper in Very Tall Grass,” but to this day no one will publish it, even electronically. And these are the most famous contributors that we know we do not know enough about, let alone the many others about whom we know even less.

As a last note on the difficult semantics of the history of AI, there is an overwhelming amount of indirect evidence all pointing to the idea that George Bernard Shaw’s play “Pygmalion” was actually, overtly, a science fiction story about the future relationship between humans and the robot-AI servants they would create (along with parallels with social classes among people, gender, etc.).

This is so clear now that Joseph Weizenbaum named the program ‘ELIZA’ after the ‘living statue’ character Eliza Doolittle in George Bernard Shaw’s Pygmalion, [a play] which was directly named after the Greek myth about humans creating ‘artificial’ humans and then actually marrying and having children with them. (Also, Shaw’s later play ‘Back to Methuselah’ even more overtly describes humans creating AI-robots, and uses the same name “Pygmalion” for the artist-scientist who creates the ‘artificial’ people. And, according to Hodges, Shaw’s ‘Back to Methuselah’ had a lasting impact on Alan M. Turing.)

But at the time the play ‘Pygmalion’ was written, none of these descriptive terms and concepts that we use now existed, and officially we still see “Pygmalion” or “My Fair Lady” as being ‘just about people,’ and you don’t see it in the science fiction section of the bookstore. So it is an interesting example of one set of ideas caught between the lexicons of two different times (or across several times).

Roots & Trunks

Even at the first official AI event in 1956, there were a number of areas that could have been, or could be, more parallel than convergent:

https://en.wikipedia.org/wiki/Dartmouth_workshop

From the proposal, machine tasks they aimed to figure out included:

- “use language,”

- “form abstractions and concepts,”

- “solve kinds of problems now reserved for humans,”

- “improve themselves”

And from Wikipedia’s summary, research topics discussed in the proposal included:

- computers

- natural language processing

- neural networks

- theory of computation

- abstraction and creativity

Note, ‘computer vision’ was not included.

Parallel vs. Overlapping:

Both in 1955 when the proposal was made and in 2023, and perhaps even more so in 1955, it is unlikely that they were thinking of one single computer-robot architecture and tech stack of OS and software that would be used for all areas of study.

It may also be worth noting that, to some extent, some of the descriptive topics from 1955 have since been dropped from common discussion and descriptive language even if they have not actually been dropped from project tasks. And other 1950’s goals may have been dropped from both (later descriptions and later goals).

“Form abstractions and concepts” is probably a topic that people would be much more shy about discussing now, but this is partially semantics and cultural posturing. ‘Concepts’ are arguably the embedding vectors of the deep learning systems that have been successful, but this leads to much of the confusion around narrow deep learning vs. the still not well understood large language model success. Before 2023, and likely after as well, there was a concerted effort not to ‘over-promise or over-reach,’ to avoid major pull-backs from research and funding (“AI Winters,” as they are called), and to focus on narrow concrete tasks such as image classification.

“Abstraction and creativity” may be an area where researchers and companies are not keen to advertise work on stimulating creativity (though after 2023 that may have changed compared with before 2022).

This ‘walking on eggshells’ sensitivity to how we speak about AI might be entangled with a narrowing of how we think about AI as well. There are probably many factors involved in changes over time from 1923 to 2023.

Another example of AI having more than one ‘root’ is that, in histories of the perceptron, the perceptron is described as originally not being thought of as a computer but as a machine. Because most computing machines today are at their core digital computers, we may find it hard to shift our thinking back to when computers, or digital computers, were just one type of machine. This is also part of understanding the works of Norbert Wiener, and early science fiction.
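To see why the perceptron could be thought of as a machine first, here is a minimal sketch of its arithmetic (an illustrative toy, not Rosenblatt’s actual Mark I design): a weighted sum, a threshold, and an error-driven update, all simple enough to have been wired up as analog hardware rather than run as software.

```python
# A toy perceptron: weighted sum, threshold, error-driven weight update.
# The point: this is plain arithmetic that could be (and originally was)
# built as a physical machine. Illustrative sketch only.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a small linearly separable task.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, s) for s in samples])  # expected: [0, 0, 0, 1]
```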

The biography of Claude Shannon, A Mind at Play, does a rather excellent job of showing what different technologies existed for doing different tasks before the creation of digital technology, which ended up replacing many things.

Parallel Roots & Trunks in the non-monophyletic tree of AI

Assignment 2: Pick areas to map out

Here is an (incomplete and imperfect, sorry) outline of different areas and disciplines that have contributed to, and converged into, AI.

Engineering & STEM:

- Calculating Machines

Statistics vs. Maths:

- Bayesian Machine Learning in 1718

linguistics and cryptography:

- codes

- types

- information-entropy

math:

- math and computer science

- Most of the founders were, by label, Mathematicians. In this context we have seen AI grow out of ‘math.’

math and logic:

- Hilbert

- Russell & Whitehead

- Godel

- von Neumann

- Turing

perception:

- 1943: Warren McCulloch and Walter Pitts

learning:

- statistical

- tree

- heuristic

- ‘sub-symbolic’

psychology:

- James

- Freud

- Jung

- etc.

- cognitive psychology: Kahneman & Tversky

neurobiology:

- perception

- memory

- plasticity

general biology:

- von Neumann

- genetics & DNA

- Both Turing and von Neumann (and, in a less clear way, Shannon, e.g. genetics) made detours into biology on their way to AI

Economics:

- Game Theory

computer science:

- Church vs. Turing for digital


Cybernetics vs. Digital Computer Science

- Norbert Wiener

Linguistics vs. Natural Language Processing

Math vs. Computer Science vs. Linguistics: Hilbert & STEM

Biology:

- Genetics & Genetic Algorithms: Learning

- The hypothetico-deductive method: learning, testing, perception

Codes, Information-Warfare, Disinformation, and Cybersecurity

Automata:

- John von Neumann

- John Conway

- Stephen Wolfram

Robotics:

- iRobot

- https://en.wikipedia.org/wiki/Rodney_Brooks

Extended STEM

AI, and bringing computer science more deeply into STEM, may help to generally extend STEM into areas important for society, life, and productivity. While we cannot now say what the future will require, we can speculate about how to fill in a more comprehensive set of STEM-related areas for managing projects:

- Definition Behavior Studies

- Generalized STEM

- Intersecting Areas

- Project Context

- Coordinated Decisions

- Object Relationship Spaces

etc.

Math and AI

The story of AI can begin with a math problem that we are still trying to solve. This is not a ‘problem’ in the sense of ‘1+1=2’ but more a problem like trying to do algebra with Roman numerals: Hilbert’s problems, though I may be overly liberal in describing them as the ‘problem’ of math needing to be carefully formalized and integrated with (or at least able to be integrated with) STEM. We think of math, or like to think of math, or want to think of math, as rules and consistency and repeatability and clarity and translatability, but history is plagued by mathematicians and logicians being cryptic and vague and secretive and incomplete and falling much more into gang violence than, well, than math.

A wonderful portrait in miniature of the overall scenario, also with an AI connection, is one of Isaac Asimov’s I, Robot stories about the ‘telepathic robot,’ where one of the main characters is a mathematician who gets so lost in politics and intrigue and rank and career advancement and personal pride, and the authority-ness and rank-ness of who says what and who listens to whom, and worrying about his stupid white silk gloves, that he is never (by the end of the story, and the demise of the robot-AI in question) able to determine whether the AI can do a discrete math problem or not.

Numbers that Learn:

I have never seen a history of computation or AI that starts with Bayes, or even mentions Bayes.

Read: https://www.amazon.com/Theory-That-Would-Not-Die/dp/0300188226
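For what ‘numbers that learn’ can mean in the Bayesian sense, here is a minimal sketch (a toy coin example with assumed likelihoods, not anything taken from the book above) of a probability being revised as evidence arrives:

```python
# "Numbers that learn": a minimal Bayesian update. Toy example: estimating
# whether a coin is fair (P(heads)=0.5) or biased (P(heads)=0.8, an assumed
# value for illustration) from a stream of observed flips.

def bayes_update(prior_biased, flip, p_heads_biased=0.8, p_heads_fair=0.5):
    # Likelihood of this flip under each hypothesis.
    like_biased = p_heads_biased if flip == "H" else 1 - p_heads_biased
    like_fair = p_heads_fair if flip == "H" else 1 - p_heads_fair
    # Posterior via Bayes' rule, normalized over the two hypotheses.
    numerator = like_biased * prior_biased
    evidence = numerator + like_fair * (1 - prior_biased)
    return numerator / evidence

belief = 0.5  # start undecided
for flip in "HHTHHHHT":
    belief = bayes_update(belief, flip)
    print(f"after {flip}: P(biased) = {belief:.3f}")
```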

Two Vectors in AI Problem Space

A possibly useful and simple way to plan and discuss a project-task within an overall project or system or AI architecture is to look at two contexts, each of which can be viewed as a spectrum:

1. analytical vs. non-analytical

2. ascii vs. media

Even aside from project-task discussions, and sticking only with ‘traditional’ single-function AI tools, the different individual tasks may fall in interestingly different regions of this “chart,” which might help to shed light on whether the right tool is being used for the right job.
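As a minimal sketch of the idea (the coordinates below are illustrative guesses, not measurements), tasks could literally be placed on the two spectra and read off by quadrant:

```python
# Place familiar tasks on the two spectra:
# (analytical <-> non-analytical, ascii <-> media), each scored 0..1.
# The coordinates are made-up guesses for illustration only.

tasks = {
    # task: (non_analytical, media); 0 = analytical / ascii end
    "spreadsheet formula":  (0.1, 0.0),
    "regex log parsing":    (0.2, 0.0),
    "casual chat reply":    (0.9, 0.1),
    "image classification": (0.8, 0.9),
    "speech-to-text":       (0.6, 1.0),
}

for task, (non_analytical, media) in tasks.items():
    quadrant = (
        ("analytical" if non_analytical < 0.5 else "non-analytical")
        + " / "
        + ("ascii" if media < 0.5 else "media")
    )
    print(f"{task:22s} -> {quadrant}")
```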

More broadly this may be part of the debate over what exactly to include in AI, with some people wishing to use (or also wishing everyone else to use) the term in a very specific and narrower way (such as only for a subset of deep learning models), whereas other people use the term more broadly, as in AI-ML including all known types of models associated with some kind of ‘training’ or fitting. Then again, ‘models’ does not seem to fit at all with many ‘symbolic AI’ systems (e.g. neither ELIZA nor MYCIN ‘learned’ or ‘trained’), which for perhaps most of the history of AI were considered the only real approach to AI.

Swap, Cut, AI

2023.10.19 Geoffrey Gordon Ashbrook

1. Swapping

“Easy things are hard.” -> ‘Problem 12= Flip 1 and 2 in “1+2=3” in this sentence.’

The problem becomes much harder when it is made more general, such that any numbers (or characters in any character set) might be involved, and any sentence structure and presentation may be used.

e.g.

‘Problem 12= Flip 1 and 2 in 1+2=3 in this sentence.’

‘Problem 12: “Flip 1 and 2 in “1+2=3” in this sentence.”’

‘Problem 12: Flip 1 and 2 in one+two=three in this sentence.’

‘Problem 34= Flip 3 and 4 in 3+4=7 in this sentence.’

‘Problem 34= In this sentence, flip 3 and 4 in 3+4=7.’

‘Flip 3 and 4 in 3+4=7 in this sentence, Problem 34.’

‘In 3+4=7, flip 3 and 4 in this sentence as problem 34.’

etc.

Possible definition of concept-using ‘AI’ in computer science

Can we say that:

- For this class of problem, the task is not possible across a normal spectrum of cases and edge cases (e.g. only hard-wired or ‘structured input’ cases can be computed) without “structured input,” such as (see the sketch after this list):

1. specifying orders, delimiters, and forms

2. hard-coding a solution or process for a specific single case
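As a minimal illustration of point 2 (a hypothetical sketch, not a claim about any particular system), here is a hard-coded, regex-style ‘solution’ that handles the one anticipated format and simply gives up on the other phrasings of the very same request listed above:

```python
import re

def swap_if_structured(sentence):
    # Hard-wired to the one anticipated pattern: Flip X and Y in "X+Y=Z" ...
    match = re.search(r'Flip (\d) and (\d) in .?\1\+\2=\d', sentence)
    if not match:
        return None  # unanticipated wording: the 'program' simply gives up
    a, b = match.groups()
    return sentence.replace(f"{a}+{b}", f"{b}+{a}", 1)

# The anticipated structured case works:
print(swap_if_structured('Problem 12= Flip 1 and 2 in "1+2=3" in this sentence.'))
# -> ... "2+1=3" ...

# The author's own variants above already fall outside the hard-coded structure:
print(swap_if_structured('Problem 34= In this sentence, flip 3 and 4 in 3+4=7.'))
# -> None (lowercase "flip")
print(swap_if_structured('Problem 12: Flip 1 and 2 in one+two=three in this sentence.'))
# -> None (spelled-out numbers)
```

Every new phrasing demands another hand-wired pattern, which is the sense in which the general task appears to require something beyond structured input.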

2. Cutup

Let’s take this one step further and make this a cut-up problem, where various (perhaps randomly divided) parts of this problem are given to different participants. What ‘system’ will then be able to do the task?

3. Define “Understanding”

This may be a case study in ‘easy things are hard’ and in looking at ‘natural’ vs. ‘artificial’ processes in a world where that line has become substantially blurred compared with earlier times (e.g. a world without any automated math or known Boolean systems, etc.).

‘Problem 12= Flip 1 and 2 in “1+2=3” in this sentence.’ is very fast and easy for most people (possibly even birds and dogs), and LLM GPT AI can likely do this, but how about ‘non-AI’ solutions?

Can the context of object-relationships help here?

Is part of this that the item is a mix of data and instructions but not in a static format? What lexicon can help here?

How many such classes of problems, perhaps common in project-tasks, exist where definition-issues make the problems intractable in certain ‘analytical’ modes? How does our vocabulary need to improve? Could this type of problem be a basis to STEM-define what can be meant by ‘understand,’ such that the problem is not solvable without “understanding”?

An Empirical Comparison of Model Types: Understanding vs. System-1 vs. non-analytic vs. sub-symbolic

Given a dataset (presumably in one language, though perhaps not), how would different types of models perform? Would any model aside from a huge LLM-GPT be able to do reasonably well? Could any regex-alone system do well?

Model types and system types:

In terms of trying to better define what properties this (or some better) problem may have that distinguish it from more broadly defined types of problems that can be solved by a same-input-same-output function: how can we describe this?

The term ‘understanding’ may be forever too vague, perhaps reminiscent of ‘hoping’ for something about which we have no real information (a category that is a request for a defined category). Other existing concepts may be a better fit, perhaps Kahneman and Tversky’s System 1 and System 2, or analytic and non-analytic, but the range of machine-learning/AI models is not immediately categorized into two such categories. Even in the history of AI, a hand-crafted decision tree is GOFAI, yet an automated decision tree such as an XGBoost or random-forest type is…at least not described as GOFAI. But is a decision-tree not a decision tree? (Are there sub-symbolic trees? Or some other ‘branches’ of trees?)
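To make the hand-crafted vs. automated contrast concrete, here is a toy sketch (made-up numbers, plain Python rather than any real GOFAI or XGBoost system) of the same one-split ‘tree’ arrived at two ways; the curiosity is that both end up as the identical if/else form:

```python
# Classify messages as spam/ham by length (made-up illustrative data).

# 1. Hand-crafted rule: a human picks the threshold (symbolic, GOFAI style).
def handcrafted_rule(length):
    return "spam" if length < 20 else "ham"

# 2. 'Learned' one-node tree: the threshold is fit to minimize training error.
def fit_stump(lengths, labels):
    best_threshold, best_errors = None, float("inf")
    for t in sorted(set(lengths)):  # try each candidate split point
        errors = sum(
            ("spam" if x < t else "ham") != y for x, y in zip(lengths, labels)
        )
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

lengths = [5, 12, 18, 40, 55, 90]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]
t = fit_stump(lengths, labels)
print(f"learned threshold: {t}")  # same *form* as the hand-written rule
print(["spam" if x < t else "ham" for x in lengths])
```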

Disciplines:

Does this represent a case where linguistics, psychology, logic, math, computer science, engineering, etc., cross-over and cannot be excluded from an analysis of the problem?

(also see the swap case in the classic Kernighan and Ritchie The C Programming Language)

Human, Natural, Knowable, Artificial, STEM, Integrated, Isolated

If possible I would like to raise and introduce a context and topic here without getting lost in the labyrinth it presents us with.

Let’s start with just three (you can imagine a ~triangle of sets if you like):

- Human Biology

- Knowability in STEM

- “Artificial” Technology

Two counterintuitive things have kept happening over time as H.sapiens-humans have tried out different tasks in a technological way to see if they are knowable (as in STEM) or unknowable (as in something not-STEM, perhaps something supernatural) (which could also mean not-yet-knowable, perhaps interestingly inverted from Karl Popper’s not-yet-disproven).

At the same time, especially visible in AI, we see the ‘easy things are hard’ phenomenon.

This sometimes leads to a kind of ‘boom and bust’ or ‘summer and winter’ exuberance and pessimism around technology. People doubt that something can be modeled by a simple line, or as in the case with ENIAC, people doubted whether you could use vacuum tubes to do the same thing that other mechanisms did more slowly, and were flabbergasted when it worked! Why?

From 1943 (early computer and neural-network work) to, say, 1963, there was phenomenal progress (which it might be important to place against the backdrop of a very substantial tech-driven change in the nature of daily life from ~1870–1970; see ‘The Rise and Fall of American Growth’), and when the first generation of (‘symbolic AI’) systems came out, people got lost in the contrasting vortex between ‘sometimes easy things are STEM, sometimes easy things are hard’ and ‘sometimes STEM can amplify other STEM, sometimes “generalization” is so elusive as to have its existence in doubt.’

There is a quasi-paradoxical set of ideas here, where H.sapiens-humans are the measure of performance and success for AI-tasks, yet AI is (again, sometimes paradoxically) expected to do the tasks that people do in sometimes completely different ways…but sometimes not expected to be too different.

There are two ways in which we expect AI to be similar:

1. “Easy things are hard”: we tend to expect that AI will have the same relative levels of difficulty with tasks as H.sapiens-humans.

2. That being good at one thing will ‘naturally generalize’ both for people and AI.

Deep Blue is a fascinating case, and something of an outlier among the systems usually covered by a ‘what is AI’ survey, given that it is so specific it is more or less not a general-purpose computer at all, but rather a jumble of hardware and switches that occasionally works to do one very specific task.

Super-Polarized Natural vs. Artificial:

In some ways H.sapiens-humans take an extreme stance, viewing some parts of the world as being the exclusive sphere of ‘nature.’ Here, if I can, I will sit on the fence and encourage dialogue between the two sides without ‘joining a team.’ Whether one invokes culture or mythology or aesthetics (none of which I view or mean here as pejorative (‘bad’) things), ‘ear-kissing’ ideas (to use a Shakespeare phrase) such as “Science has no feelings! It’s tough!” or “The soul has powers that math can never have!” are very attractive but tend to be polarizing and not helpful for exploring the fascinating intersection of generalized STEM in a strange world.

Artificial Networks vs. Natural Networks…or just “Networks”?

If only there were ways of mapping the ebb and flow of what has sometimes been called the ‘astonishing hypothesis’ (see below): that parts of life we think of as being too difficult to understand, such as mind and many parts of biology, are compatible with STEM. It is hard to predict when and where people will assume this is either obvious or completely impossible. Perhaps it is like how what was once (before Friedrich Wöhler and Hermann Kolbe in the 1800’s; as usual no one agrees on an absolute date) a chemistry-of-non-living-things and a separate chemistry-of-living-things simply became ‘chemistry,’ and how the celestial physics beyond the moon and the terrestrial physics of earth (once defined as being impossibly different) simply became physics (generally attributed to Newton’s synthesis).

“The Astonishing Hypothesis” is a 1994 book by Francis Crick, the co-discoverer of DNA. And did you know that Francis Crick is one of the PDP group members who co-authored the authoritative neural network deep learning book along with Geoffrey Hinton et al. in 1986? In fact the section on neurons (not artificial vs. natural, just STEM neuron studies) in that foundational deep learning, artificial neural network text was written by Francis Crick, as a biologist, writing about neurons in general, which includes biological neurons.

Yet, along with saying rude things about Ray Kurzweil, a standard requirement of proper books about AI is to make it very clear that deep learning neural networks and biological neural networks have nothing whatsoever to do with each other. It is as though, if any connection were allowed, this would somehow give permission for anyone to claim that AI-machines are full of human brains in jars. Over-reacting to sloppy science journalism runs the risk of contradicting the historical record and making discussion of important and difficult topics even more difficult. (In my experience, it has made any conversation, or even raising the topic, literally impossible. The central dogma of AI seems to have become that there is an inviolable wall preventing even a discussion of both biology and AI. This is not consistent with STEM, or much of anything.)

Frontier: General STEM Integration

An unexpected, if not counterintuitive, implication of AI being part of computer science (CS) and part of STEM is that feedback from research into AI will, quite naturally, help us to learn more about various STEM areas. The unexpected part may come if research into AI tells us about larger-than-expected gaps in our understanding of various areas.

Areas we do not yet know how to frame:

- Statistics

- Linguistics

- ‘mind’ and consciousness

- dynamical, nonlinear, chaotic & stochastic, fractal, etc.

- game theory

- automata

- general ecology

- networks

- math-notation

- “symbol manipulation”

- psychology

AI not only re-raises questions about the relationship between ‘hard’ sciences and ‘soft’ sciences, but also about different areas within STEM that are not simply the same thing. Areas of STEM work together, but that does not mean that math simply is logic, which is physics, which is statistics, which is chemistry, etc.

Another possible area, if not openly discussed, is that before the later rise of AI the status quo consensus was that so-called ‘soft’ sciences would just go away, fading into nothing. Psychology had become pharmacology. Ecology had become a marketing term for ‘organic’ foods (an ironic shadow-term from before ‘organic’ compounds were generally accepted to even be made of normal physical matter). Sociology and communication studies were given up as post-modern ‘arts and crafts’ utter nonsense, less practical than golf course design as an academic subject. But with the rise of the internet, social networks, mass disinformation campaigns, problems with hate speech, trolling, and bewilderingly successful recruitment of the general public into extremist cults (religious, political, or just strange), and then the rise of LLM GPT AI in 2023, and with the re-discovery of the “cognitive psychology” and “behavioral economics” of Kahneman and Tversky, it was not so obvious that these subjects could and should all be ignored and written off.

Linguistics, still not on most people’s radar, is probably ripe for a renaissance. What happens when people react to advances in Natural Language Processing, aka AI, by asking: So what about the current science of language? What does science tell us about language and the mind? Who wants to say: “A science of language and the mind? You have to be joking. No one’s even tried to look at either of those scientifically since the 1800’s. Important people have been ignoring that question for a long time, and now so should you!”

And yet who has the naive bravery to openly say (perhaps reminiscent of some old story about an emperor and his wonderful new clothing) that we have no idea what language is? We don’t know what mind is. We don’t know what consciousness is. We had basically decided in the late 1900’s that mind, consciousness, culture, and even meaning don’t exist at all; that they are just illusions for fools and marketing campaigns. And we don’t know what statistics is. And there is no conception yet of a generalized STEM. And whole parts of STEM are completely ignored, such as computer science (as it relates to math and logic (null values?)) and project management.

Note: Project-Management and Agile are still not broadly seen as relating to STEM (as with computer science itself) so there are still more changes to come as future integrations build on, and ratchet up, the forms and functions of AI.

Science Fiction

Both for fiction and for non-fiction we would do well to read what people wrote before 1970. While it is amazing if not puzzling that the way people lived between 1870 and 1970 changed so dramatically, it is perhaps more puzzling than amazing that, intellectually and culturally, the world after 1970 contracted in such a reactionary way.

In the books written before 1970, many of them out of print and not online, we have an amazing window into the lost continent of pre-1970 thinking. One area of this is early science fiction.

Isaac Asimov’s stories about “robots” (AI) are a wonderful discussion of social and design issues that anyone involved with AI (so…everyone?) should read.

In the biography of John von Neumann, in a nice section on self-reproducing machines and automata, the author mentions that a co-inventor of basic concepts for machine self-reproduction was actually Philip K. Dick.

A good starting point may be the more than twenty volumes of collected stories that Asimov edited and published (not his own, but ones he recommended) by various authors, too many of whom are now not remembered:

  • Theodore Sturgeon
  • James Blish
  • Fredric Brown
  • Frederik Pohl
  • A. E. Van Vogt
  • Alfred Bester
  • Poul Anderson

and many, many, more.

History and Timelines

Assignment 3:

Make a timeline of items you think are important.

Example:

Historical Roots

- PDP Vol. 1 & Vol. 2

- Turing & Biology

- von Neumann & Biology

- Rosenblatt & Biology

- System 1, System 2 & ‘cognitive psychology’

Pre-”AI”

1956–1971: First Explicit Era of AI

- 1966: ELIZA

- 1968: SHRDLU, “blocks world”

- The First Internet ~1961–1969

1971–2011: AI Underground

- “Perception”

- PDP, Hinton, LeCun

- Hofstadter, Mitchell

- 1976 https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason

- 1980: Searle & The Chinese Room

2012–2022: Biology Reborn: PDP & Hinton

- Narrow Deep Learning

2023: Object-Handling General AI

- OpenAI: “They’re not laughing anymore.”

- Open Models

Pre-AI and Shrouds of Secrecy

- WWII national secrets

- Western blindspot for women

- Western blindspot for software

Looking at who was alive when and how that shaped the narrative.

Death-Day Parties

A. N. Whitehead: Died 30 December 1947 (aged 86)

A. M. Turing: Died 7 June 1954 (aged 41)

John von Neumann: Died February 8, 1957 (aged 53)

Ronald Fisher: Died 29 July 1962 (aged 72)

Klara von Neumann: Died November 10, 1963 (aged 52)

Norbert Wiener: Died March 18, 1964 (aged 69)

Walter Pitts: Died 14 May 1969 (aged 46)

Warren McCulloch: Died 24 September 1969 (aged 70)

Bertrand Russell: Died 2 February 1970 (aged 97)

Frank Rosenblatt: Died July 11, 1971 (aged 43)

[1971]

Vannevar Bush: Died June 28, 1974 (aged 84)

Kurt Godel: Died January 14, 1978 (aged 71)

John Mauchly: Died January 8, 1980 (aged 72)

Philip K. Dick: Died March 2, 1982 (aged 53)

Isaac Asimov: Died April 6, 1992 (aged 72)

Karl Popper: Died 17 September 1994 (aged 92)

J. Presper Eckert: Died June 3, 1995 (aged 76)

Alonzo Church: Died August 11, 1995 (aged 92)

Konrad Zuse: Died 18 December 1995 (aged 85)

Claude Shannon: Died February 24, 2001 (aged 84)

David Rumelhart: Died March 13, 2011 (aged 68)

[2012]

[2023]

Martin Davis: Died January 1, 2023 (aged 94)

Can’t join the party…because they are still alive in 2023:

- Ken Thompson

- Brian Kernighan

- Terry Winograd

What should your brain do when you are reading an old physical book that other people suggest should not exist?

In attempting to find (cheap) old copies of history books, I obtained a copy of PDP Vol. 2 by McClelland, Rumelhart, et al.

A. Paging through this book really made it sink in, for me, that biology and psychology ARE part of the history of AI. The book IS biology and psychology AND it is the oft-cited foundational work in deep learning.

This is not 1940’s early speculation; this comes after the rise and fall of symbolic AI, during an AI winter (which is perhaps why it isn’t calling itself AI).

B. I ordered this (as a used old book) because it was referenced as foundational in newer books, but I’m not sure how I ordered it, because it doesn’t appear to exist in web-searches as a past or present book.

Volume 1 appears to be in print, https://www.amazon.com/Parallel-Distributed-Processing-Vol-Foundations/dp/026268053X but there is no clear reference on Amazon to Vol. 2. There are some vague offers for “both” volumes, but the complete title of Volume 2 does not appear anywhere on Amazon.

If you do a Google search on the title in Firefox…

https://www.google.com/search?client=firefox-b-1-d&q=mit+%22Parallel+Distributed+Processing%2C+Volume+2%3A+Psychological+and+Biological+Models%22#ip=1

the results do not suggest that this is even a book.

If you do the same search in Chrome:

https://www.google.com/search?q=mit+%22Parallel+Distributed+Processing%2C+Volume+2%3A+Psychological+and+Biological+Models

you get the same results, BUT you also may see a sales-ad bar on the side for the book, on Amazon!

https://www.amazon.com/Parallel-Distributed-Processing-Vol-Psychological/dp/0262631105/ref=asc_df_0262631105/

So Amazon does sell this book!!…but it does not come up in ANY searches. E.g. if I click on the author, James L. McClelland, the page lists only Vol. 1 and his other books.

Why is finding history so difficult? How many other books are lurking hidden in the semantics of searches?

Perceptrons, Biology and AI

Along with perfunctory attacks on Ray Kurzweil, which seem to be an obligatory social ritual for people writing books on AI, another common dance routine is saying that artificial neural networks, and perceptrons, have nothing at all whatsoever to do with biology. Zero. Nothing. Well, maybe they were inspired by biology in the 1940’s but nothing more. Nope. Not a single thing. No connection. No overlap. Absolutely no.

I had heard and read this so many times that it became ingrained in my head; I was also dramatically shamed by very credentialed co-workers for asking about comparative network-behavior studies comparing biological network dynamics, such as neuroplasticity, with other networks, including artificial neural networks.

Please take a minute and read the wiki on Frank Rosenblatt, even just a skim.

https://en.wikipedia.org/wiki/Frank_Rosenblatt

Just like the PDP books, this work is not just distantly influenced by biology, it is biology. To quote one sentence: “In 1970 he became field representative for the Graduate Field of Neurobiology and Behavior, and in 1971 he shared the acting chairmanship of the Section of Neurobiology and Behavior.” He was a leader in mainstream neurobiology and behavior studies. Not ‘artificial robot behavior’: biology.

But something has happened (perhaps starting around 1971) that changed many fields of research so deeply that it is difficult to claw our way back to history.

Everyone I have met and read today is completely convinced that AI has zero connection at all whatsoever to biology, or psychology, or neurology, etc. So what is the best way to juxtapose that with this written history?

The problematic state of AI and computer science history:

1. many untold stories

2. many stories told only many years after the fact

3. widespread sloppiness and apathy:

  • out of print books
  • books of which there is no clear record that they ever existed
  • factual inaccuracies:
  • naming errors (sounds like bad code!)
  • wildly erratic dating of events (without sources), mixing up events before, during, and after WWII (including in books otherwise well researched, such as A Mind at Play: How Claude Shannon Invented the Information Age, by Jimmy Soni, 2018)

4. disinformation:

Doubtless when Sara Turing and Kathy Kleiman were researching their books, they met lots of disinformation and discouragement (some troubling accounts are included in Kleiman’s excellent book).

Many thanks to those who recorded the history for us to have!

To Joseph Weizenbaum’s role in helping the story of the ENIAC-6 women programmers to be tracked down and told!

Without his help, Kathy Kleiman might not have had the resources or support to play her role in both researching the story and organizing events and recognition while the ENIAC six were still alive.

Proving Ground: The Untold Story of the Six Women Who Programmed the World’s First Modern Computer (2022), by Kathy Kleiman

To Alan Turing’s mother, Sara Turing, who put enormous effort into writing his first biography.

To Andrew Hodges for putting together a fabulous history, and to Dermot Turing for continuing to dig into the story while it is still within ~living memory.

Integrated STEM:

Hard-Science, Soft-Science, Not-Science

? Math-logic:

Hard Science:

- physics

Soft Science

- “creativity”

Unknown:

- language

- statistics…

- categories of types of systems

Thorny Questions: Astronomy, Astrology, Divination, Prediction & Quantizing Measures of Change

In Dermot Turing’s book on the history of AI there is a small picture of a Sumerian clay tablet used for making astronomical and astrological predictions.

This is probably off the far end of what can be clearly included in the discussion, but how do systems and languages and codes for historical, often not entirely STEM, systems fit into history?

For example, while discoveries of ‘the oldest’ are bound to keep changing, some of the oldest written symbols include the binary system of the I Ching in the east (said to have impressed Leibniz), which was part of an older technology for measuring change (much as Leibniz contributed to what became an ultimately binary, digital, calculus-based system for computing change).

How do millennia-old technologies such as the I Ching, or astronomy charts, or the Antikythera device, relate to the types of questions that we ask machine-learning systems today in pursuit of technologically assisted answers?

Did people ask ELIZA the same questions they asked tarot decks, runes, geomancy frameworks, dream-interpreters, tea leaves, and horoscopes, and that people now ask chatbots?

How much do we know about what alchemists were studying in analyzing the ‘union of opposites’?

For example people still (somehow) speak of digital computers as ‘manipulating symbols’ as though performing divination.

Another way to approach this set of questions might be, how much has feedback from using our tools taught us (H.sapiens-sapiens) about ourselves? Can we see more clearly how we use language and what we mean by ‘symbol’ than 5, 10, 15 thousand years ago?

Do we view language the same way in 2023 as we did in 23 or -2023?

Do we view STEM the same way as we did? (There the answer should be a more clear change over time…even if we cannot see the future trajectory.)

Cognition, Learning, and Non-Monophyletic Trees

As a final twist, let’s look at how hominids do not like thinking about non-monophyletic trees. A classic example from biology, or botany in particular, of non-monophyletics is the lily. The term ‘lily’ refers to a number of very different plants that happen to have similar-looking flowers. In terms of genetic evolution these plants are not related to each other. So in such cases, ‘science’ says that ‘the lily’ does not really exist, because it does not have one root; it is not monophyletic (one historical tree). So it is a kind of mistake, a cultural illusion.

In the case of biological structures such as eyes or wings or limbs, this is less simplistic, as it is not so easy to say that ‘eyes’ or ‘leaves’ do not exist at all because they evolved in parallel from multiple different origins. And yet many people are inclined to say just that.

Or another branch of this science-language-psychology is perhaps illustrated by the fascinating ‘a tomato is a fruit’ phenomenon, where in a kind of dogmatic literalism people try to make a cargo-cult collage of cultural language terms against jargon from randomly selected disciplines. And tragically people will often conclude that anything not covered by a jargon term of measurement (not intended for general language use in all situations) does not exist at all.

And so with STEM and with AI, we have a convergent or inverted tree structure where different areas grow together into a natural form. The H.sapiens-human mind very much does not like non-monophyletic trees, and will go to great lengths to avoid dealing with them, so what will AI minds (some of which are images of H.sapiens-human population language minds) do with similar concepts? How will learning from learning about learning learn about what is difficult to learn?

Assignment 4:

Research Something, what do you find?

Example:

No Volume of Founders’ Letters

At least for John Adams and Thomas Jefferson (or, as Princeton’s computer science department may have called them, John Anderson and Thornton Jennings) we have volumes of letters in which they express their thoughts. And even with these and more letters and biographical materials there are still massive gaps and unknowns in our understanding of who these people were and what they thought and did.

In the case of the history of computer science, it is almost as if it is a practical joke: what if the most obscure people, under national secrecy, had perhaps everything or nothing to do with some of history’s most important, transformative, and integrating, technologies?

The Founders:

  • Kurt Godel (logic, math, and computer science)
  • Alonzo Church (Lambda Calculus, functional programming)
  • John von Neumann (misc computer science, hardware, software, math, etc.)
  • Klára Dán von Neumann (programming and computer science)
  • Claude Shannon (Boolean logic, communication theory, AI theory, robotics)
  • Betty Shannon (math & misc)
  • Alan Matheson Turing (a bit of everything)

Depending on who you ask, on what date, any of these people had either everything or nothing to do with anything or everything, with complete or zero communication with anyone or everyone (about everything or nothing). Although there are, in 2023, probably still people alive who have direct (or slightly indirect) knowledge about what happened between these people, we literally have no idea what happened between the founders.

Maybe Claude Shannon was deeply involved with AI and even named the field; maybe he had nothing to do with it. Maybe Claude Shannon and Alan Turing were friends who worked together on information theory (given that Alan Turing invented and used information entropy in the UK before Shannon wrote his synthesis), or maybe it was a total coincidence and they never discussed it or anything of any technical nature. When they had tea every day at Bell Labs, maybe they just talked about the wallpaper, or stared at each other in creepy silence; we have no reliable information.

Maybe the von Neumanns knew and worked with Kurt Godel (they were both at Princeton IAS), or maybe not. The official ‘witness’ story is that Kurt Godel would regularly, mysteriously, simply walk into John and Klara von Neumann’s house, sit and read a book silently for a few hours, and then leave. (Again, this is a classic perception-riddle: if there is a witness, then obviously someone other than von Neumann or Godel was also there. Godel was famously shy and paranoid, and von Neumann was working on top secret projects, so obviously nothing could be expected to be said while a third-party witness happened to be there. Did Godel ever say anything when no one was there to hear? We don’t know, because no one would have been there to hear. AH!)

Maybe von Neumann helped Alan Turing get to Princeton to do his doctorate because he noticed what Turing was studying, or maybe it was a random fluke: he accidentally helped Turing but otherwise didn’t know Turing existed. This might sound ridiculous, but this is the line of investigation laid out in great detail by the fabulous Andrew Hodges (in his biography ‘Alan Turing: The Enigma’). Hodges is thankfully frank about what he actually has evidence for vs. what might seem reasonable to assume (unlike other biographies and histories where weaving a compelling narrative is the game). As with computer science in general: nothing is safe to assume.

Maybe Turing, while at the nexus of Princeton, Bell Labs, and the intelligence-cryptography center, worked with Godel, Shannon, Church, and von Neumann, or maybe they had no conversations about anything ever.

Maybe John von Neumann met and consulted with Alan Turing while ‘creating’ the ‘von Neumann architecture’ computer design that Alan Turing was already implementing in the UK, or maybe they never even met after the war, and it was a total coincidence. Maybe they both turned to studying biology together in the 50’s, or maybe it was another complete coincidence.

Perhaps the only written evidence of any interaction between these figures (after the 30’s) around computer science is also one of the strangest and most difficult to interpret. On his deathbed, in the hospital, with the brilliant Klara von Neumann answering his letters, finishing his writings, and managing his affairs while he was dying and incoherent (note: this is late 1956 or early 1957), John von Neumann gets a letter from Kurt Godel (a person notoriously paranoid and non-communicative, not known to be working with von Neumann or Turing or on anything computer-science related) asking von Neumann to share his thoughts on Alan Turing’s research and NP-completeness in computer science. Time-line alert: NP-completeness is not supposed to have been discovered until 1971 (with no attribution to Godel, or Turing, or von Neumann): https://en.wikipedia.org/wiki/NP-completeness How are we supposed to interpret this letter? Are we supposed to believe that the completely non-communicative, “paranoid” and loony Kurt Godel just randomly wrote a letter to someone (who was dying) he didn’t really know, about someone else neither of them knew, about work that didn’t exist yet, in a field he wasn’t working in? That sounds like an unlikely hypothesis. But we also don’t have any information to go on to hypothesize anything else. We don’t even know if he was really writing to John; he could have been writing to Klara, who (if they had been working together) he would have known to be an excellent person to ask about computer science and math; that was her speciality.

Note: The story of this letter comes from the Biography of John von Neumann

The Man from the Future: The Visionary Life of John von Neumann

by Ananyo Bhattacharya

https://www.amazon.com/Man-Future-Visionary-Life-Neumann/dp/1324003995

Rebutting the automatic war-is-cause thesis

A common causal assertion is that computers, the internet, AI (or any given part of STEM) came “from the military” or “out of the war” or “because of the war effort.”

Nuance aside, this tangled case of the founders of CS and AI may be a counter-example to the common broad-brush argument that WWII ‘caused’ computers to arise (with the presumption that more “war” would “cause” more “progress”).

WWII started after the founders did their foundational work and started collaborating. The war is a key factor not only in the lack of continued communication between the founders but also, if indirectly and not conclusively, in their premature deaths (and, perhaps in the cases of Godel, Shannon, and Church, in making them too terrified to ever talk about anything with anyone).

While we cannot say what would have happened if there were no WWII, the walls of secrecy and isolation that separated the founders are clearly not something that helped progress in CS and AI.

Am I saying that von Neumann and Turing should not have helped in the war effort, essentially sacrificing their own lives for public service? No, not at all; read Timothy Snyder’s “Black Earth” https://www.amazon.com/Black-Earth-Timothy-Snyder-audiobook/dp/B014X6Q80M if you have any doubts about the clear need to mobilize against the Axis forces. I am trying to focus on the specific question of whether war created collaboration between the CS-AI founders leading to progress. In this case, as we have almost improbably no evidence of any collaboration, and so no claim that there was a community of founders, the answer appears to be a very obvious: no. There is also a lot of incorrect recounting of computer developments that actually happened after WWII and/or were not part of WWII projects (such as ENIAC-related projects), which, when incorrectly stated, makes it sound like all computer history happened during, and as part of, WWII; but that is not what happened.

For example, Vannevar Bush, sometimes portrayed as a military man who then shaped computing on behalf of the military, built the first useful differential analyser with Harold Locke Hazen at MIT between 1928 and 1931 (not during WWII, and not connected with the military). Most people involved in the history of computer science have a long life of involvement and interest completely outside wars and the military.

Counter-examples might be people like Grace Hopper, whose career with computer software was directly shaped by her posts during WWII.

WWII did happen, and the pair of WWI and WWII shaped and reshaped the world probably more deeply than we can comprehend. But it is incorrect to characterize computer science, the internet, and AI, as simply appearing, like Athena from the head of Zeus, out of war, military, and violence as though otherwise there would have been no cause for anyone to imagine them.

Appendix 1: Data Dump

Andrew W. Appel, the “Eugene Higgins Professor and Chairman of the Department of Computer Science at Princeton University” (according to the back cover), thankfully edited and published a facsimile of Turing’s 1938 thesis, along with a few commentary essays and miscellaneous remarks. This is a bit of an appendix of (more) enigmas to the previous enigmas.

There is a mostly parallel collection of materials online here, all about Princeton’s connections to Turing, a few other figures, and ENIAC,

https://www.princeton.edu/turing//alan/. A routine, affirmative, self-advertising anniversary celebration; a great opportunity to post archival material. Everyone wins.

Returning to the book, this is one of the strangest history books I have ever seen.

The suggestion is that Veblen, Turing, Church, Godel, and von Neumann all worked together to build computers in the US after WWII, in some kind of US-UK computer project (that excluded Bell Labs), and bringing Konrad Zuse into the story adds mind-bending levels of confusion and suggestion.

As far as any normal history:

1. There was, sadly, no connection between the US and UK post-war computer building projects. Saying that Turing ‘did not contribute much’ to the US computer building movement is an understatement: there was no US-UK computer movement, and no one has yet suggested Turing was involved at all in the US movement. So cattily saying that ‘other people were more involved than Turing’ is simply bizarre. Everyone knows Turing had, tragically, zero involvement.

2. There was, sadly, no UK/US knowledge or recognition of Zuse’s work until after the 1940’s. He was not (that I know of) part of Operation Paperclip or brought into NASA or Bell or IAS.

3. I have never even heard it postulated as being possible that the people we retrospectively call ‘the founders’ actually were working together to build computers in the US and UK.

Is the author talking about building the computer industry? Is the author talking about academic pure research? What is the author talking about? If, confusing language aside, the author/editor is talking about software-oriented pure research and not a computer-machine building industry, then A: why does he mix hardware builders into his discussion, and B: if there are so many people who laid out the foundations of computer science so much more than Alan M. Turing…who are they? What did they say and do, and where is this alternate history?

Turing is mostly known for his ‘On Computable Numbers’ paper of 1936, where he essentially invented computers (the Turing Machine) as part of addressing a Hilbert problem about the nature of math and logic, but this is his 1938 thesis on overcoming “Godelian incompleteness.”

While the Princeton website is a routine archive, the book edited by Appel is unclear in basic intent. The book is entirely a somewhat random dump of files about Turing, but the introduction by the editor is both sour and contradictory. Appel writes: “But as significant as Turing is for the foundation of computer science, he is not the only scholar whose work led to the birth of this field.” But this is a book about Turing, and not even about his main computing paper. So what is this book about?

And what does Appel mean by “this field”? By lumping non-digital, non-electronic, non-program-running, non-software machines together with Boolean digital electronic software-running machines, he throws the whole context into complete confusion.

The editor also quotes Andrew Hodges, but in an apparently obscure attempt to contradict Hodges without actually providing any details that contradict Hodges. Hodges wrote that Turing (who had famously terrible handwriting and messy notes) hired a professional typist to type up his thesis after getting feedback from Alonzo Church about what to do in a next draft. Appel says: No. Hodges is completely wrong; people often left spaces in documents to write formulas by hand. That non sequitur bit of trivia may be true, but it does not contradict Hodges’s claim that Turing hired a typist, or got feedback from Church. What exactly is Hodges supposed to have gotten wrong, according to Appel?

The main mystery-item however is this line: “The great engineers who built the first computers are well known: Konrad Zuse (Z3, Berlin, 1941); Tommy Flowers (Colossus, Bletchley Park, 1943); Howard Aiken (Mark I, Harvard, 1944); Prosper Eckert and John Mauchley (ENIAC, University of Pennsylvania, 1946).”

Enigma: How is it that a book written by the head of computer science at Princeton, with no other purpose than to associate Princeton with (or celebrate Princeton’s role with) ENIAC and Alan Turing, not only dismisses and denigrates Turing (and his biographer) but misspells the names of BOTH ENIAC creators: J. Presper Eckert, “Pres,” and John Mauchly? Every history book I have found repeats the same passage, over and over: “J. Presper Eckert. Pres, everyone just called him Pres. Pres Eckert.”

John Adam Presper Eckert Jr., or J. Presper Eckert, or Pres Eckert. “Prosper?”

“Prosper” is a rare alternate name for a Shakespeare character: Prospero in The Tempest, as in:

The Tempest — Act 3, scene 3

ALONSO O, it is monstrous, monstrous!

Methought the billows spoke and told me of it;

The winds did sing it to me, and the thunder,

That deep and dreadful organ pipe, pronounced

The name of Prosper. It did bass my trespass.

https://www.folger.edu/explore/shakespeares-works/the-tempest/read/3/3/

How does the head of computer science at Princeton not know the names of the creators of the ENIAC, and the EDVAC, and the BINAC, and the first US computer company, UNIVAC (and champions of the women programmers who gave software a solid start)?

So, it is a bit unclear what is going on with this book. Why did they publish telegrams to Turing and his return postcard, and a paper about logic, Alonzo Church, and Kurt Godel, along with misspelled and inaccurate names of the ENIAC creators and a random line about there being more important people than Turing in the history of the field of computing?

Yet another twist, to make perhaps an Enigma pretzel: The beginning meets the end, with some odd connection in the middle.

1. Type Theory:

According to Andrew Hodges, the story of John von Neumann and Alan Turing begins with von Neumann’s interest in the Theory of Types.

2. Transfinite Type Theory:

According to https://www.sciencedirect.com/topics/mathematics/turing-thesis :

“ Turing considered several natural ways in which ordinal logics could be constructed: (i) ∧P, obtained by successively adjoining statements directly overcoming Gödel incompleteness at each stage a; (ii) ∧H, a form of transfinite type theory; and (iii) ∧G (after Gentzen), obtained by adjoining principles of transfinite induction and recursion up to a at each level ∧G(a).”

And, according to the wiki on Turing’s logic/types paper:

“Martin Davis states that although Turing’s use of a computing oracle is not a major focus of the dissertation, it has proven to be highly influential in theoretical computer science, e.g. in the polynomial time hierarchy.[4]”

https://en.wikipedia.org/wiki/Systems_of_Logic_Based_on_Ordinals

Remember, what did Godel write to John and/or Klara von Neumann about? He asked about NP-completeness and Turing’s research.

(Note: Martin Davis sadly passed away in 2023.)

3. While it is more a tangle of questions than answers, we are starting to have at least the formation of an historian’s question. While it is catty and disorganized, this very odd book represents a nexus of connection between nearly all the parts that previously we had no way to connect. And while it is not clear, the ‘claim’ of this book by Appel is that at Princeton in the 1930’s there was an active hub of computer science planning and formulation involving Turing, Church, Godel, von Neumann, Veblen, Lefschetz, Newman (and 11,000 other numbered but not named people), with insinuated connections to the rest of the computer science world.

If more from Martin Davis than from Appel, we have one more puzzle piece in the link between Turing, Gödel, NP-completeness, and the von Neumanns. We can hypothesize a kind of shadow computer science consortium including the known founders and others, with a specific thread running through types, NP-completeness, lambda calculus (Church), and Gödel’s system.

Note: The ever-elusive Claude Shannon was also at Princeton’s IAS (the Institute for Advanced Study), and given that Appel went so far as to hyperbolically invoke more than 11,000 (unnamed) experts, it is curious that he left out Claude Shannon.

Appendix 2:

For another interdisciplinary survey through AI, I very highly recommend “Possible Minds”:

https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

Appendix 3: An Apple a Day

The Founders:

  • Kurt Gödel (logic, math, and computer science)
  • Alonzo Church (lambda calculus, functional programming)
  • John von Neumann (misc. computer science, hardware, software, math, etc.)
  • Klára Dán von Neumann (programming and computer science)
  • Claude Shannon (Boolean logic, communication theory, AI theory, robotics)
  • Betty Shannon (math & misc.)
  • Alan Mathison Turing (a bit of everything)

While it is a popular sport to take potshots at Gödel for being “needlessly” afraid that something might happen to him, let’s look at the fates of our founders.

Turing: Died under mysterious circumstances and, inexplicably, without official investigation. Now known to have been involved in UK and US military intelligence after the war. Now widely considered to have been assassinated.

John von Neumann: Died, possibly of cancer, in his 50s, under 24-hour military guard in complete isolation because of his intelligence and military status. (Note: While von Neumann’s biography puts a positive spin on this care and attention from the US government and military, Shannon’s biographer mentions this military isolation as something that would have terrified Claude Shannon and which, unlike the gadfly von Neumann, Shannon would have shied away from and avoided at all costs.)

Klára Dán von Neumann: Official cause of death: suicide by drowning.

Kurt Gödel: Died horribly of starvation (while working at Princeton’s IAS), fearing assassination by poisoning.

Alonzo Church and Claude Shannon lived to die of natural causes in old age, living ‘normal’ lives.

Claude Shannon, who was eventually pulled by von Neumann into intelligence work, was reclusive and afraid to say anything to anyone about anything he did.

Alonzo Church lived until 1995, telling the world nothing about the amazing story he had a front-row seat to. Why is that?

Considering the momentous topic, amazingly little has been written about the founders. Turing, though it now seems popular to ‘backlash’ against any attention given to him, is no longer completely obscure. But it is extremely difficult to get information about the founders, and even harder to guess at their collaboration (with wildly contradictory anecdotal accounts, e.g. whether von Neumann traveled to meet with Turing in the UK or not). In the afterword to his rather long biography, Hodges is open about how only indirect scraps of information were available, attributing this to mathematicians mostly being obscure figures, with no effort at the time put into preparing records for a future interest they never predicted.

Appendix 4:

Operations Research vs. Agile & CS

Computer Science & Operations Research

Another lead in the ironically cryptic “Alan Turing’s Systems of Logic” book, edited and introduced by Andrew W. Appel, is Appel’s reference to ‘the new fields of computer science and operations research’ as being, rather unclearly, either pre-founded or post-founded by one group of named people, another group of unnamed people, or perhaps yet another group of unnamed people (which does make you wonder what exactly he is trying to say, and why he isn’t just saying it). Aside from the bafflement of his chronology, it is fascinating to bring “Operations Research” into the narrative.

As three dates for a rough timeframe, the first two from the UK’s Operations Research Society (https://www.youtube.com/@Theorsocietypage):

  • 1916: UK organizing of shipping defense, creating procedures and methods
  • 1938: “Operations Research” coined
  • 1948: RAND Corp. founded in the U.S.A. (after WWII)

At least at first glance, and I may be completely wrong of course, there appears to be a parallel: the relationship between CS and Operations Research resembles the relationship between math and statistics.

Statistics vs. Math

While most people probably associate statistics with math related to probability, the term (if only as a historical note) literally comes from the word ‘State,’ as in government or administration. Statistics is, of course, math, but more specifically it is math used for management and administration, for governments, for States: hence, statistics.

Operations Research likewise appears to be a very STEM-related mix of administration, management, government, and military-defense-specific methods and procedures (it is not really clear what to call it). In the US, where the term “Operations Research” is profoundly uncommon (especially in books about computer science or computer science history), as in the UK, the roots appear to be military-defense focused, and at least after WWII in the US the RAND Corporation is considered an example of the Operations Research field (which again has a distinctly military-planning-forecasting theme).

In 2023 Operations Research apparently still exists, but perhaps with a less military-specific theme, just as ‘statistics’ is hardly only used by governments.
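For a concrete taste of what civilian Operations Research work looks like, here is a minimal linear-programming sketch in Python; the shipping numbers are invented for illustration, and SciPy’s linprog solver is assumed to be available:

```python
# A toy Operations Research problem: choose quantities of two goods to
# ship, maximizing the value carried, subject to limited crane-hours and
# deck space. (Invented numbers, in the spirit of the WWII-era shipping
# logistics mentioned above.)

from scipy.optimize import linprog

# Maximize 30*x1 + 50*x2; linprog minimizes, so negate the objective.
objective = [-30, -50]
# 2*x1 + 4*x2 <= 100 (crane-hours); 3*x1 + 2*x2 <= 90 (deck space)
constraint_matrix = [[2, 4], [3, 2]]
constraint_limits = [100, 90]

result = linprog(objective,
                 A_ub=constraint_matrix, b_ub=constraint_limits,
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal shipment (20, 15), value 1350
```

Problems of exactly this shape, allocating scarce resources under constraints, are the thread running from WWII logistics to modern OR textbooks such as the one linked in the references below.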

Operations Research (OR) is a fascinating part of the overall STEM puzzle, partly because it appears to make connections between STEM and project and production management (whereas STEM is only maybe now embracing Agile, literally more than a century after early OR focused on project management), and partly because OR has somehow not connected back to either computer science or STEM.

There also appears to be a possible connection to ‘systems’ thinking in OR, which for whatever reason has been slow to become acceptable in non-new-age circles (generally, STEM and ‘systems’ in the US are considered incompatible).

And there appears to be the classic WWII and postwar military-secrecy theme. As with other question marks in the history of computer science, RAND seems to be a stereotypical von Neumann mystery.

https://www.rand.org/pubs/research_memoranda/RM1019.html

So maybe, through von Neumann, there was a close early computer science community involving:

  • Princeton IAS CS & Operations Research
  • RAND and Operations Research
  • Alan Turing in the UK
  • Los Alamos (?)
  • MIT
  • Carnegie Mellon
  • Penn
  • etc. (not meaning to exclude anyone here)

But we do not know, aside from very scattered clues and very unclear suggestions (or just non sequitur editorial clutter not meant to suggest anything) from Princeton.

Appendix 5: Hinton on the Biology-AI Connection Question

In this speaking event at the University of Toronto, Geoffrey Hinton and ImageNet creator Fei-Fei Li speak:

https://www.youtube.com/watch?v=E14IsFbAbpI

1hr:48min:12sec

Geoffrey Hinton and Fei-Fei Li in conversation

Premiered Oct 7, 2023

“This week marked the inaugural session of the Radical AI Founders Masterclass, featuring a dialogue between AI luminaries, Geoffrey Hinton and Fei-Fei Li. Held at the MaRS Discovery District auditorium in Toronto, the conversation was hosted by Jordan Jacobs, Managing Partner and Co-Founder of Radical Ventures, to delve into the profound ethical landscapes, societal shifts, and the transformative potential of AI.”

Around ~20 minutes in, there is a section where Geoffrey Hinton very explicitly describes his approach and work as “building a bridge” between biology and abstract technologies, making technologies that work closer to biology, and explains that his background is in psychology, the mind, and the brain.

It could of course be claimed that Hinton is wrong and does not know what he is talking about, but his consistent remarks about the facts of his well-published background cannot be denied: in the professional disciplines and perspectives of Geoffrey Hinton, his deep learning technologies come out of, and from an integration with and closeness to, biology.

Appendix 6:

See:

https://www.sciencedirect.com/topics/mathematics/turing-thesis

https://www.folger.edu/explore/shakespeares-works/the-tempest/read/3/3/

https://en.wikipedia.org/wiki/John_Mauchly

https://en.wikipedia.org/wiki/J._Presper_Eckert

https://en.wikipedia.org/wiki/Martin_Davis_(mathematician)

https://en.wikipedia.org/wiki/NP-completeness

https://www.amazon.com/Deep-Thinking-audiobook/dp/B06XWLY5XS/

https://web.archive.org/web/20120901152639/http://www.math.ucla.edu/~hbe/church.pdf

https://www.amazon.com/Possible-Minds-Twenty-Five-Ways-Looking/dp/0525557997

https://www.amazon.com/Parallel-Distributed-Processing-Vol-Psychological/dp/0262631105/ref=asc_df_0262631105/

https://www.amazon.com/Man-Future-Visionary-Life-Neumann/dp/1324003995

https://en.wikipedia.org/wiki/David_Rumelhart

https://en.wikipedia.org/wiki/Pygmalion_(mythology)

https://en.wikipedia.org/wiki/Pygmalion_(play)

https://www.amazon.com/Mind-Play-Shannon-Invented-Information/dp/147676669X/

https://www.amazon.com/Proving-Ground-Untold-Programmed-Computer/dp/1538718286

https://www.amazon.com/Man-Future-Visionary-Life-Neumann/dp/B09M2LTKSH

https://www.amazon.com/Parallel-Distributed-Processing-Vol-Foundations/dp/026268053X/

https://en.wikipedia.org/wiki/Rodney_Brooks

https://www.amazon.com/Learning-Internal-Representations-Error-Propagation/dp/B00CC2EWC6/

https://en.wikipedia.org/wiki/Differential_analyser

https://www.amazon.com/Black-Earth-Timothy-Snyder-audiobook/dp/B014X6Q80M

https://en.wikipedia.org/wiki/Isaac_Asimov

https://www.amazon.com/Rise-Fall-American-Growth-Princeton-ebook/dp/B071W7JCKW

https://en.wikipedia.org/wiki/Philip_K._Dick

https://en.wikipedia.org/wiki/Alonzo_Church

https://en.wikipedia.org/wiki/Terry_Winograd

http://ghn.ieee.org/Oral-History:Claude_E._Shannon#National_Research_Fellowship_at_Princeton.3B_switching_publications

https://en.wikipedia.org/wiki/Joseph_Weizenbaum

https://en.wikipedia.org/wiki/Kurt_G%C3%B6del

https://en.wikipedia.org/wiki/Vannevar_Bush

https://en.wikipedia.org/wiki/Rob_Pike

https://en.wikipedia.org/wiki/Ballistic_Research_Laboratory

https://en.wikipedia.org/wiki/Robert_Griesemer

https://en.wikipedia.org/wiki/Ken_Thompson

https://en.wikipedia.org/wiki/Back_to_Methuselah

https://en.wikipedia.org/wiki/Dartmouth_workshop

https://www.audible.com/pd/A-Grasshopper-in-Very-Tall-Grass-Trailer-Podcast/B09VT8HV34

https://www.amazon.com/Significant-Figures-Lives-Great-Mathematicians/dp/0465096123

https://www.youtube.com/@Theorsocietypage

https://www.youtube.com/watch?v=8bAKJufDWso

https://www.youtube.com/watch?v=ILWbaWrjgU4

https://www.amazon.com/Linear-and-Nonlinear-Programming-_International-Series-in-Operations-Research-_-Management-Science_-228_/dp/3030854493/

https://www.rand.org/pubs/research_memoranda/RM1019.html

https://en.wikipedia.org/wiki/Kl%C3%A1ra_D%C3%A1n_von_Neumann

https://en.wikipedia.org/wiki/Charles_Lyell

https://en.wikipedia.org/wiki/Konrad_Zuse

https://en.wikipedia.org/wiki/George_Boole

https://en.wikipedia.org/wiki/Artificial_neuron

https://en.wikipedia.org/wiki/Charles_Darwin

https://en.wikipedia.org/wiki/Walter_Pitts

https://en.wikipedia.org/wiki/Alfred_North_Whitehead

https://en.wikipedia.org/wiki/John_von_Neumann

https://en.wikipedia.org/wiki/Dennis_Ritchie

https://en.wikipedia.org/wiki/Grace_Hopper

https://en.wikipedia.org/wiki/Ronald_Fisher

https://en.wikipedia.org/wiki/Claude_Shannon

https://en.wikipedia.org/wiki/ELIZA

https://en.wikipedia.org/wiki/Frank_Rosenblatt

https://en.wikipedia.org/wiki/SHRDLU

https://en.wikipedia.org/wiki/Norbert_Wiener

https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch

https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason

https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

https://en.wikipedia.org/wiki/Gerald_Jay_Sussman

https://en.wikipedia.org/wiki/Patrick_Winston

https://www.chessprogramming.org/Edward_Fredkin

https://www.chessprogramming.org/File:ShannonMcCarthyFredkinWeizenbaum.jpg

https://www.amazon.com/Broad-Band-Untold-Story-Internet/dp/0593329449/

https://www.amazon.com/Theory-That-Would-Not-Die/dp/0300188226

https://www.amazon.com/Pioneer-Programmer-Jennings-Computer-Changed/dp/1612480861/

https://github.com/lineality/Online_Voting_Using_One_Time_Pads

https://github.com/lineality/definition_behavior_studies

https://github.com/lineality/object_relationship_spaces_ai_ml

https://en.wikipedia.org/wiki/Mycin

https://en.wikipedia.org/wiki/W%C3%B6hler_synthesis

Authors you should read:

Douglas Hofstadter

Raymond Kurzweil

Dermot Turing

Melanie Mitchell

Michael Wooldridge

R.G. Mulgan

Andrew Hadfield

Herman H. Goldstine

Joel Shurkin

Daniel Kahneman

McClelland, Rumelhart, and the PDP Research Group

Sara Turing

Andrew Hodges

B. Jack Copeland

Sinclair McKay

John Ashbery

Sir Eric Ashby

Brian W. Kernighan

Dennis M. Ritchie

Ian Stewart

Robert J. Gordon

Kathy Kleiman

Claire L. Evans

John Brockman

Jean Jennings Bartik

Sharon Bertsch McGrayne

Gordon Welchman

Hobson Lane

Ian Goodfellow

Francois Chollet

Shakespeare

Garry Kasparov

How broad or narrow is AI, or the field of computer science?

If AI and computer science more broadly are part of a larger synthesis of pure and applied STEM fields, one that will redefine and refocus fields such as linguistics, statistics, biology, psychology, sociology, and even modeling and the scientific method itself, then AI may be broader than we can imagine. And while it is periodically predicted that everything that will be discovered has been discovered, we may be lounging at the foot of transformations that significantly reshape daily life, just as life was transformed between 1870 and 1970 (see “The Rise and Fall of American Growth”).

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
