Want to excel in AI? Cast a wide net.

The Kigumi Group · Published in Kigumi Group · 7 min read · Oct 2, 2023

AI is a field born of the most vital, essential part of the scientific method: the desire to enquire. Enquire far and wide, high and low, beyond your own discipline and scope of work.

If you want to understand the field of AI and its associated or internal subjects (machine learning, neural networks, etc.), you have to unlearn what you think about science.

What I mean is that, if you are the average person living in the globalised world, you must unlearn what you think of as the practice of science.

We commonly think of science as bringing us greater certainty about our world. Once we can scientifically observe something, identify it, measure it and in some way quantify it, we tend to believe we have a greater amount of control and predictability over it. This presumption applies to science about ourselves, about our surroundings, and more generally about how things work.

This belief in certainty is — literally — the opposite of the core of the scientific method. I would venture to guess that major proponents of the scientific method from centuries past would gag if they saw the bland enormity of the cultural myths we’ve created to sell ‘science’-based products like Fitbits, Apple Watches, or AI assistants for tutoring, education or mental health.

Moreover, if any of the founders of the field of AI had believed that AI was a highly specialised, scientific field composed of measuring bits of discrete data that led to greater levels of certainty and predictability, they would NEVER have achieved the things that have led to our current-day generative AI. These people were scrappy, kooky, curious, often mistaken, and — most importantly — comfortable in their own uncertainty and broad in their interests, often drawing inspiration from ‘non-scientific’ fields like philosophy, history and the social sciences.

In this article we’ll take a look at how AI as a field could never have come to its current fruition without people who thrived on uncertainty, ambiguity, and the exchange of new ideas across disciplines. All of these examples are also inflection points in the (condensed) history of how machine learning and AI emerged as a field of study after World War II.

Most of this next part is the result of conversations with one of our Academic Advisory Panel members, Professor John Goldsmith of the University of Chicago. If you’re interested in deep dives into any parts of the following history and understanding the nuances and ideological or technological aspects of particular events, we highly recommend John’s book Battle in the Mind Fields (2019), which he co-wrote with fellow linguist Bernard Laks.

For clarification, in this article we understand the condensed history of neural networks as organised into the following generations:

An overview of the generations of neural network development.

We won’t talk about each of these generations in depth. (For explanations of each of the generations — which may differ slightly from our view, according to interpretation — we recommend “Talking Nets: An Oral History of Neural Networks” or David Goudet’s snappy piece here.)

Instead, we’ll pull out a few examples from the above generations that show how AI as a field requires an appreciation of uncertainty, and use them to illustrate the point that, if you want to excel in today’s profession of AI, you must seek out ambiguity and constantly enquire beyond your specialisation.

Let’s take a look.

The cover of “Cybernetics: The Macy Conferences 1946–1953,” the published summary of the events sponsored by the Josiah Macy, Jr. Foundation.

Exhibit A: Enquiring minds at the Macy Conferences

It’s the mid-1940s in the US. Americans are not only reeling from the suffering and cultural and economic ruptures of World War II, but are also continuing to nurse the scars and intergenerational traumas of the Great Depression. There is enormous uncertainty about whether the economy will get back on its feet. This is, as John notes, the time when “computers came into their own” and when a lot of smart people in academia began applying their intellects to the question of what should count as human skills versus machine skills and the relationship between human and machine potentials.

One of the major thinkers in this area of human-machine potential was MIT professor Norbert Wiener, who penned a small book called “The Human Use of Human Beings” in 1950 that asked questions related to human skills and employment in a world that would be increasingly supported by machine production. This book, combined with Wiener’s preceding book (“Cybernetics”, 1948), launched a new transdisciplinary field called cybernetics that laid the groundwork for much of what happened in the AI field later.

The cover of Wiener’s seminal 1950 book “The Human Use of Human Beings.”

While there is a lot to say about the field of cybernetics, both conceptually and historically, we’re going to highlight one major aspect of it here that relates back to our idea that “AI at its core is a field of ENQUIRY, not certainty”. Here is the important point: Cybernetics was (and is, for its present-day proponents) an incredibly diverse field composed of academics from disciplines that did not usually talk to each other.

It turns out that talking (a lot) to people who are (significantly) different from you can lead to great ideas. This is what happened with cybernetics, which acted as the catalyst for a series of ten meetings of academics held between 1946 and 1953 called The Macy Conferences (named for their philanthropic sponsor). A lot of useful stuff happened at the Macy Conferences that — as noted above — directly laid the foundation for what we now consider to be achievements in the AI field, including the codification of a new concept: information.

Let’s recap that. The original concept of information was first introduced by C.E. Shannon in his 1948 paper “A Mathematical Theory of Communication” and further elaborated on in the discussions at the Macy Conferences he attended in the years that followed.

That’s a pretty big deal.

What we see here is one example of how an intellectual breakthrough (that would shake the foundations of how we speak, act and behave; think about trying to go one day without using the word “information”) was the direct result of deep, enquiring exchanges between people from scientific fields (like neuroscience, computer science, mathematics, and medicine) and people from non-scientific fields (like psychiatry, sociology, anthropology, psychology and child development). Without this atmosphere of enquiry and interest in moving beyond the ‘technological’ field (or what would have been the equivalent at the time), the concept of information would not have entered our cultural lexicon (or would have taken longer, or entered in a different form).
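If it helps to see what that codification amounts to in practice, here is a minimal sketch in Python (a modern restatement, not Shannon’s original 1948 presentation) of information measured as entropy: the average number of bits needed to describe the outcome of an uncertain source.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin is maximally uncertain: one bit of information per toss.
print(entropy_bits([0.5, 0.5]))    # 1.0

# A heavily biased coin carries far less information per toss.
print(entropy_bits([0.99, 0.01]))  # ~0.08
```

The point of the sketch is simply that, after Shannon, “information” stopped being a loose everyday word and became something you could count.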

A diagram of a Hebbian cell assembly, one of the theories introduced by Donald Hebb in the 1940s that now acts as one of the core principles of neural networks. (Image source: The Use of Hebbian Cell Assemblies for Nonlinear Computation, 2015)

Exhibit B: The first generation of neural networks

The second story is from the first generation of thinking and publications that resulted in what we now know as neural networks: specifically, the work of Warren McCulloch and Walter Pitts at the University of Chicago and Donald Hebb at McGill University.

Here, we see in spades how diverse the origins of neural networks are. McCulloch hailed from the field of neurophysiology and Hebb from psychology. Both studied the way human brains worked from their particular perspectives.

Pitts never went to college and never obtained a degree from any university. He was a brilliant, homeless teenager when he met McCulloch, and he was probably the inspiration for Matt Damon’s character in Good Will Hunting. An immensely talented mathematician, Pitts was discovered and mentored by McCulloch during that period of homelessness. This intellectual partnership — during which, to be clear, a university professor opened his home and mind to the input of a homeless teenager who had no degree and little proof of formal training — resulted in their publishing a seminal paper that combined contemporary neurophysiology (how the brain is structured) with the philosophy of Leibniz in an attempt to discover the architecture of a machine that would simulate the brain.
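To give a feel for what these early ideas look like in today’s terms, here is a small sketch in Python: a McCulloch-Pitts-style threshold unit (a neuron that fires only when its weighted inputs reach a threshold) alongside a simple Hebbian weight update (“cells that fire together wire together”). This is an illustrative restatement in modern notation, not the authors’ original formulations.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style neuron: fires (1) when the weighted sum
    of its inputs reaches the threshold, otherwise stays silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

def hebbian_update(weights, inputs, output, learning_rate=0.1):
    """Simple Hebbian rule: strengthen each weight in proportion to the
    coincidence of its input and the neuron's output."""
    return [w + learning_rate * x * output for w, x in zip(weights, inputs)]

# Example: a two-input unit behaving like a logical AND gate.
weights, threshold = [1.0, 1.0], 2.0
print(threshold_unit([1, 1], weights, threshold))  # 1 (fires)
print(threshold_unit([1, 0], weights, threshold))  # 0 (silent)

# One Hebbian step after the unit fires on (1, 1) nudges both weights up.
print(hebbian_update(weights, [1, 1], 1))          # [1.1, 1.1]
```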

A way forward: unlearning the idea of specialisation

If any of these thinkers had stayed within their specialisations, they would not have arrived at the conceptual syntheses we now benefit from in the area of AI. If any of them had closed their minds to the broad field of conceptual possibility, or had refused to interact with people who didn’t look, talk and think like them, they would never (literally) be in the history books.

So unlearn this: AI is a highly specialised, techno-scientific field.

And replace it with this: AI is a field born of the most vital, essential part of the scientific method, the desire to enquire. Enquire far and wide, high and low, beyond your comfort zone.

That is the vow of a true scientist, and if you are someone studying or working in the field of AI, this should be your north star, too.

About Us

The Kigumi Group is a social enterprise based in Hong Kong working to build the next generation of ethical tech companies. For more of our articles take a look at our publication (Kigumi Group) or visit our website.
