Interpreting Hofstadter’s Gap: AI, Music, Math, Language

Geoffrey Gordon Ashbrook
Aug 16, 2023

--

2023.07.21,08.10 gg.ashbrook

I. Music, Math, Text

A rhyming dance of quandaries through the ages, perhaps exemplified by GEB [and perhaps Finnegans Wake, …maybe Tristram Shandy?], is now manifesting in a learning gap that may be an invisible stumbling block for AI. In short, the often literally pictorial figures used to show non-text symbols for math, music, and other areas in most published texts are, being ‘non-text’, invisible both in discussion with AI such as GPT-4 and possibly even as training data (though exact details of training data are unknown). For example, many wonder why the math skills of AI are especially poor; part of this large and interesting story may be that much printed math material falls outside of possible training data (pictures, not ‘text’).

While it may sound surprising that, for all reasonable intents and purposes, we have not finished knitting together math and language and computers and all the other STEM areas, we have at the same time, in various ways, not yet committed ourselves to recognizing this as a goal.

“Terrified”

In the fabulous introduction to Melanie Mitchell’s great book on machine learning, she recounts a poignant story (which I will not be able to do justice to here; please buy her book!)

https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/

about this set of topics coming to a head and leading to some unresolved confusion when Douglas Hofstadter was invited to speak to researchers at Google. In very short summary, Hofstadter was concerned about directions that might be taken or revealed at the intersection of math, music, and AI, but in a way that confused the researchers who had invited him to come and share his thoughts. He was worried about what we might discover about the nature of, for example, musical patterns in the human world, whereas the researchers were entirely focused on studying the AI side of things (not worrying about what that might reveal about biology). (The real story being of course much more interesting than my procrustean abridgement…)

In what in other contexts may seem merely an oddity, there was a time during medieval music studies when the study of ‘pure music’ was so compartmentalized from performance that it became a kind of sub-set of math-logic-computation. (Note: if you have not, take a read through Kepler’s original writings on the structure of the solar system and you may be in for a surprise or two.) On the one hand, this extreme stance on pure vs. applied music studies, requiring ‘serious’ music to exist only in formal logic and math and never to be played, may seem a kind of decadence, neutral or pejorative (at least from the point of view of someone looking to buy tickets to the orchestra); on the other hand, this history seems at odds with our present-day pickle of not being able to combine music-math and math-math with computer-math. What kind of a gap could there be here?

(Prepare yourself for another extreme summarization…) We train AI models such as LLM-GPT (at least in 2023) on texts, and the AI builds a thought-intelligence-concept-map by looking at the relationships between all the terms, ideas, concepts, etc., in those texts. But this only works (as of 2023) for text…not pictures or sounds, or hand gestures, or anything not printed as characters in linear sequences in a document. And while music and math have evolved symbol-systems, those are very usually not expressed in documents as linear sets of language-characters, not ‘serialized,’ which means that AI cannot see them any more than it can see the facsimile plate of a Cézanne bowl of fruit on page 100 of an art-history book, though it digests any description and discussion there may be of what is on that plate.
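To make the serialization point concrete, here is a minimal sketch in Python (the crude tokenizer and the sample strings are my own illustrative assumptions, not any particular model’s pipeline): math written out as LaTeX, or music written out as ABC notation, is a linear character sequence a text-only pipeline can ingest, while the same content embedded as an image is just opaque bytes.

```python
# Minimal sketch: why serialization matters for text-only training pipelines.
# (Hypothetical example; not any particular model's tokenizer.)

def naive_tokenize(text: str) -> list[str]:
    """Split a linear character sequence into tokens (crude whitespace split)."""
    return text.split()

# Math serialized as LaTeX: visible to a text model as ordinary tokens.
latex_equation = r"\int_0^1 x^2 \, dx = \frac{1}{3}"

# Music serialized as ABC notation: also just a linear string of characters.
abc_melody = "X:1 T:Sketch K:C C D E F | G A B c |"

# The same equation embedded as a picture: opaque bytes to a text-only
# pipeline, however meaningful the image is to a human reader.
fake_image_bytes = b"\x89PNG\r\n\x1a\n..."  # placeholder bytes, not a real file

print(naive_tokenize(latex_equation))
print(naive_tokenize(abc_melody))
print(len(fake_image_bytes), "bytes of pixels, zero usable text tokens")
```

The gap, then, is not that the symbols lack structure, but that the structure is often never written down in a form the text pipeline ever sees.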

The history of the evolution of math notation, and the movement to make expressions of math more standardized and formal and rigorous, is long, and it is a story we are still in the middle of. Both Alan Turing and John von Neumann (two figures at the head of conceiving of, and designing, and building, and programming, the first generation of digital computers) began as mathematicians assigned to work on “Hilbert’s Problems”

https://en.wikipedia.org/wiki/Hilbert%27s_problems which, like Russell and Whitehead’s project (intersecting with Gödel, Church, and Turing), amounted to, if I am permitted to contextualize this way, a move toward a generalized or unified STEM in which math, logic, the sciences, engineering, technology, medicine, etc. (and later computer science) would all be fundamentally compatible, speaking the same language as it were. At the time (around 1900) the stated goals of Hilbert’s challenges were probably phrased and paraphrased more modestly, as seeking some technical overlap between logic and math and tidying up loose ends not formally defined, because a grand generalized STEM was too mind-blowing to publicly admit to thinking about (see Sir Eric Ashby https://www.amazon.com/Technology-Academics-Universities-Scientific-Revolution/dp/1015312659/ & C.P. Snow et al). (In the 1950s, in both the UK and US, this inability to conceive of connections between STEM areas was a significant obstacle to actually building the first computers, a project which required people from many STEM areas to actually talk to each other and work together, and which at the time proved nearly beyond comprehension in principle; see Andrew Hodges’ “Alan Turing: The Enigma”.)

https://www.amazon.com/Alan-Turing-Enigma-Andrew-Hodges/dp/0671492071/

Part of what makes GEB such a rich spring to return to is the bold way in which Hofstadter weaves these themes of music and math and the richness of language together fruitfully, as a wellspring of AI and of humanity, not posing the intersections as a stumbling block precluding any future progress.

Is it not strange that the very topic, the very intersection, that GEB focused on to electrify minds all around the world and resurrect the AI movement after multiple proverbial planet-sterilizing impacts of funding cuts and…boating accidents, mysteriously poisoned apples, and all manner of misfortunes, the very intersection that birthed the ‘music of the spheres’ science of astronomy, the very intersection that set the 1900s on its Hilbert course with destiny, and also the same intersection that ‘terrified’ Hofstadter (and utterly baffled Google), is now the same intersection that confounds the cutting edge of AI intelligence…is all of this not, very oddly, continually orbiting around and echoing off the same intersection-gap?

II. Hofstadter’s Gap

Is there a ‘gap’ of some sort here, or is this a simple translation process that happens to not have been solved and automated yet?

Is there anything more here than just doing the work to create processes to connect these dots, basically making a text-serialized system to contain the less linear picture-diagrams of “symbols”?

Representation in a context of externalized representation:

In music performance one of the main factors is how many people are performing together and how they are coordinating, and depending on the music tradition there are various “systems” of procedures and rules by which different types of instruments (including the human voice and percussion, which may also be just the human body) are able to coordinate, often (though this has largely been forgotten in western classical music) with a balance between improvisation and a shared framework.

What does the representation contain?

In the case of computer code, the representation must be complete (or call specifically for other functions that are needed). In the case of music improvisation the code is usually deliberately incomplete: sufficient for use but not the entirety of the process.
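As a small illustration of that contrast, here is a minimal Python sketch (the function names and the ‘lead sheet’ framing are my own illustrative assumptions, not a real music library): a fully specified score versus a deliberately incomplete framework that each performance completes differently.

```python
# Minimal sketch: a 'complete' representation vs. a deliberately incomplete one.
# (Illustrative names; not a real music library.)
import random

def play_fixed_score() -> list[str]:
    """Complete representation: every note is specified in advance."""
    return ["C4", "E4", "G4", "C5"]

def play_lead_sheet(chords: list[str], improvise) -> list[str]:
    """Incomplete representation: the shared framework is the chord list;
    each performance fills in the rest through the `improvise` callable."""
    return [improvise(chord) for chord in chords]

chord_tones = {"C": ["C4", "E4", "G4"], "F": ["F4", "A4", "C5"], "G": ["G4", "B4", "D5"]}

print(play_fixed_score())  # identical on every run
print(play_lead_sheet(["C", "F", "G", "C"],
                      improvise=lambda ch: random.choice(chord_tones[ch])))  # varies, yet stays compatible
```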

Math, interestingly (likely a controversial subject I am naively wandering into here, sorry about that), appears historically to be somewhere in between. The reputation of math in the west is often the super masculine hyper-serious high-brow noble strict posturing, with affectations of being as complete as assembly language. But historically and across various traditions (e.g. traditional Japanese geometry puzzle-exchange as part of some Shinto-shrine traditions, or Renaissance Europe’s public testosterone math-competition battles judged as sports before a crowd), much of math seems to be more like music, more a recreational social dance than a maker-STEM engineering-project. E.g. for most of history integrating math with STEM was something many mathematicians fought and despised, taking pride in their purity and separation from the worldly parts of the world, even up through WWII, with a leading German mathematician taking pride and reassurance in the fact that his life’s work was of no use to anyone (specifically meaning not useful to the military, in this case).

So we still have questions about Hilbert’s questions. Hilbert, I conjecture (though I may be wrong), wanted (or would have wanted) math to take its place as part of STEM, to be testable and clear and not only an arcane branch of pure poetry where leaders in the field neither did, nor were expected to, explain and show what they claimed and did. But is this STEM-math all of math or just part of math, all of the math-mind-imagination or just a useful edge of a larger, less confined space?

If there is (as surely there is) improvisational music, and if music is hip-together close with math, is there not in some way improvisational math (can the two be completely separated)? Yet does it make sense for improvisational math to be required to be in the same category as STEM machine-code type math?

As with music, you can create a rigid, fully-defined system of exact and unyielding frequencies and volumes and tempos and official terms and techniques, but the stricter that system, the fewer actual world music systems, arts, skills, and traditions will be compatible with such a ‘complete framework.’ And there are a lot of formal traditional music traditions out there.

I am very much not arguing, as some people of a more post-modern, or social-constructivist, or this or that school of thought, may argue, that STEM is somehow ‘just’ another soft social free-for-all merely pretending to have engineering precision. I am asking more about the overall space of definitions and projects.

While there does not seem to be, or need to be, a one size fits all mega-STEM framework that defines everything in existence, that is not the goal or meaning of a generalized-STEM. Connecting the various areas of STEM is just this, connecting and translating between expressions in each area of STEM. Being able to express and translate between and across expressions in math, logic, music, philosophy, physics, engineering, medicine, statistics, chemistry, biology, computer science, does not pre-determine what you are doing with those expressions (or even if you have any idea what you are doing). Perhaps going back to a music-analogy, sometimes improvisation is the goal.

Sometimes we are not trying to define a method (to find an answer to a question). Sometimes we are not trying to define an answer (to a question). Sometimes we are not dealing with a question or problem at all.

There is a strong attraction to familiar analogies. In the 1800s it was clocks and steam engines. From the 1900s through 2020 it was digital computers.

While it is important for us to understand assembly language running on a CPU, the world is a large place full of ecosystems and signals and generation and equilibria and many forms of system dynamics that we are still struggling to begin to map. And while writing a bare-metal system to run on known architectures may knit together known STEM, it does not bring us closer per se to the larger world we need to understand, the world in which we are living and in which we may, or may not, be planning to survive. And this old teeming canyon of Hofstadter’s Gap may be a compass heading for what our eyes cannot yet focus on.

III. Frontier

We may be tempted by circumstance to orient ourselves toward Hofstadter’s Gap in such a way that this feature of the world is something unsightly to be resolved and fixed and solved and hushed up and put behind us and smoothed over, but is this gap a potential?

Could Hofstadter’s Gap be not a human error of procrastination, of not yet finding tidy closure, but, more in the spirit of GEB, an endless frontier of convoluted quasi-intersections? Is this rather a potential that fuels not only wonder but the mysteries of creativity and the progress through life’s largely imponderable quandaries, through all the unknown fissures of T. S. Eliot’s ‘heap of broken images’?

Is this not a mistake or obstacle but some kind of device in and of itself that we do not yet understand? Or some rich riparian zone like the “Canals” of Mars that were once imaginatively if naively thought to be waterways and the site of life…and then later, in a very different and altogether more proper and scientific way…thought to be waterways and the site of life! Is this a kind of door into the hollow-earth of the fabric of Charles Dodgson’s curiosity-patterns…of how STEM, like four-valent carbon, is somehow so creative a set of linkages?

For example, James Gleick opens his “The Information” with a discussion of traditional African talking-drums in a context of internet-like signal exchange protocols and information theory. The intersection of music and signals and networks and encryption and decryption is not meant to be a finite ‘solved’ system, but a working and open-ended platform. Above we discussed how the equal-tempered scale is not a signal protocol that works for many traditional forms of music. So what then is the science of working protocols? The world is full of working platforms, platforms that allow for improvisation and experimentation.

IV. Generalized STEM, Generalized Projects, Generalized Collapse

There are both more intersecting connections and more incongruous gaps than we often suppose at a given time. While in 2023 many more people than in 1923 are likely open to the idea that a mathematician, an engineer, and a scientist may appropriately be permitted to speak with one another (and beyond permission that they obviously should and must in order to get anything done), there are still many vague, empty, and undiscussed areas of a larger problem-project space of unified, generalized, STEM that remain unpopular.

Those focused on in the report that this article is intended to supplement include:

  • Projects
  • System Collapse
  • Ethics
  • Participation
  • Learning

And there are no doubt other areas that will in retrospect seem obvious yet at present do not come to mind. For example, people looking back on year 2023 hundreds of years in the future may find it inscrutable and inexcusable that civil and criminal law, and education, are not considered compatible with logic and testability. Or on the other hand, perhaps they will have evidence to the contrary.

V: Equations as Numbers

Even the question of ‘symbol manipulation’ in the history of computer science, which may go back to the laudable and foundational writings and foresight of Ada Lovelace, should lead one to…have a pause. Still, rather awkwardly, in the 2020s we trot out the canon and dutifully recite: “computers manipulate symbols and anything that can be represented by a symbol can be manipulated by a computer, in heaven’s name we pray,” despite the fact that this forces us into ridiculous twists of language and logic, for example requiring ‘symbols’ to refer only to relative electrical currents as “physical symbols.” …What?

But while it is ridiculous to perpetuate the narrative that all of the world’s mostly magical symbols can simply be fed into a computer and computed with, the perhaps similar-ish topic of handling functions and equations as numbers (not as sigils) is more to the point.

What does it mean to have a discrete and unit-test-able function that returns not a value but, in a sense, another function, or an equation?

In a way this brings us, as usual, back to the 1930s, when Kurt Gödel, Alonzo Church, and Alan Turing were all working on Hilbert’s challenges and consequently discovering and inventing aspects of computer science. E.g. Kurt Gödel’s “Gödel numbers” are numbers that refer to functions (roughly). Alonzo Church created what is now known as ‘functional programming’ (and the “Lambda Calculus” of functions), where functions are the unit of computation and one can speak of a function that inputs and outputs other functions. And Alan Turing created the paradigm (often attributed to John von Neumann, who helped Turing to get his start in academia, though the communications (if any) between the two are mysteriously few and lost to WWII secrecy) of having computers mix code-as-numbers with data-as-numbers together in the same “von Neumann” computer architecture (roughly stated).
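Here is a minimal Python sketch of the two ideas just named, offered as illustration rather than as a faithful reconstruction of Gödel’s or Church’s constructions: a function that returns another function (the functional-programming move), and a crude ‘program packed into one integer’ encoding in the spirit of Gödel numbering and the stored-program idea.

```python
# Minimal sketch: functions as values, and code encoded as a number.
# (Illustrative only; not Gödel's actual numbering scheme.)

def derivative_of_power(n: int):
    """Return a *function*: the derivative of x**n, i.e. x -> n * x**(n-1)."""
    def d(x: float) -> float:
        return n * x ** (n - 1)
    return d

d_x_cubed = derivative_of_power(3)   # an equation handed back as a value
print(d_x_cubed(2.0))                # 12.0

def encode(program: list[int], base: int = 256) -> int:
    """Pack a list of small 'opcodes' into a single integer (code as data)."""
    number = 0
    for op in program:
        number = number * base + op
    return number

def decode(number: int, base: int = 256) -> list[int]:
    """Unpack the integer back into the original opcode list."""
    ops = []
    while number > 0:
        number, op = divmod(number, base)
        ops.append(op)
    return list(reversed(ops))

tiny_program = [1, 7, 42]            # pretend opcodes (nonzero first entry, so the round trip is exact)
print(encode(tiny_program))          # one integer standing for the whole program
print(decode(encode(tiny_program)))  # [1, 7, 42]
```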

VI: Categories of Types of Signals and Responses

As a general place to start we might assume that Categories of Types of Signals and Responses follow categories of types of systems, from system- and definition-behavior studies; roughly:

- logic-math

- statistical/probabilistic interface

- one-tree of core linear descriptions of physical phenomena using statistically-bridged math-logic

- dynamical systems connected to core linear systems

Perhaps with the Turing-Church twist that logic-math can be either equation/function or value.

And while it is rare, we may need to add quantum and relativistic edge cases if they are too irregular.

There is the core CS history-mystery of Hilbert-completeness and intermingled instructions and data:

- Kurt Gödel (often not mentioned as a direct contributor to computer science)

- Alonzo Church

- Alan Turing

- John von Neumann

Instruction vs. Data (or not a dichotomy?)

The intermingling and interaction (sometimes unintended) of low-level instructions for the computer and higher-level data to be handled by programs is most usually characterized as a dichotomy. But perhaps this should be broken into more categories that cohabit in information substrates (see the sketch after this list)?

1. machine/operation instructions

2. functions/equations

3. variable values (strings, numbers, arrays, etc)

4. bridge-translation-representations (e.g. Gödel numbers?)

(a type of machine instruction? …not an operation…? )
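As a minimal sketch of how these categories can cohabit in one information substrate, here is a hypothetical mini-machine in Python (my own toy example, not a real instruction set): opcodes, a table of operations, plain values, and the bare integers that stand for all of them share a single flat list.

```python
# Minimal sketch: instructions and data intermingled in one memory.
# (Hypothetical mini-machine; not a real ISA.)

def run(memory: list[int]) -> list[int]:
    """Interpret `memory` as a flat program acting on a value stack."""
    PUSH, ADD, MUL, HALT = 0, 1, 2, 3      # category 1: machine/operation instructions
    operations = {                          # category 2: functions bound to opcodes
        ADD: lambda a, b: a + b,
        MUL: lambda a, b: a * b,
    }
    stack: list[int] = []                   # category 3: variable values
    pc = 0                                  # program counter into the shared memory
    while pc < len(memory):
        opcode = memory[pc]                 # category 4: the same integers serve as
        if opcode == HALT:                  # instruction-codes or data, depending on position
            break
        if opcode == PUSH:
            stack.append(memory[pc + 1])    # the next cell is read as data, not as an instruction
            pc += 2
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(operations[opcode](a, b))
            pc += 1
    return stack

# (2 + 3) * 4 == 20, with code and data living side by side in one flat list.
print(run([0, 2, 0, 3, 1, 0, 4, 2, 3]))   # -> [20]
```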

VII: Training and Learning

While there is a fair quantity of commentary about the inability of AI to answer math questions, there is suspiciously little attention paid to at least two points.

1. There is largely no way for LLM-GPT AI to see the printed math literature to train on, since so much of it is pictures of symbols rather than text.

2. Even though there are probably millions or billions (or maybe more) of datasets available, or at least mentioned, online, as of 2023.07 the entire topic of a math question-answer dataset is utterly nonexistent.

The entire process of interfacing AI with math has not even begun…in late 2023…and the often very credentialed people who are critiquing AI-math ability have somehow not noticed this, carrying on as if there were some phantom math-AI industry.

So much commentary about why “other people” have not somehow trained AI to do math, and at the same time a complete blank on even the most basic initial steps: a ponderous gap that keeps on gapping.
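To show how small that most basic initial step actually is, here is a minimal sketch of what one text-serialized math question-answer record might look like (a hypothetical JSON-lines layout of my own, not a description of any existing dataset).

```python
# Minimal sketch: one serialized math Q&A record as a line of JSON text.
# (Hypothetical format; not an existing dataset specification.)
import json

record = {
    "question": "What is the derivative of x^3 with respect to x?",
    "question_latex": r"\frac{d}{dx} x^{3}",
    "worked_steps": [
        r"\frac{d}{dx} x^{n} = n x^{n-1}",
        r"\frac{d}{dx} x^{3} = 3 x^{2}",
    ],
    "answer_latex": r"3 x^{2}",
    "numeric_check": {"x": 2, "expected": 12},  # a machine-checkable probe
}

line = json.dumps(record)                  # one record per line: plain linear text
print(line)
print(json.loads(line)["answer_latex"])    # round-trips as ordinary characters
```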

VIII: Neighbors and Connections

Several areas that bear a striking resemblance, yet which are nevertheless separated by gaps, are:

1. cellular automata & neighbors (with discrete rules)

2. matrices (without neighbors)

3. subsymbolic artificial neural networks (entirely made of neighbors, but with importantly nuanced rules (neural networks in 2020 vs. 1950))

4. digital memory storage (usually without neighbors)

5. ‘Tapes’ of instructions (as in the linear tape of a Turing automaton)

In some ways these systems are so similar and parallel and analogous that they appear to be obviously part of one archetype or mode of system, but at the same time a single implementation that combines all the properties of all of the above is nonsensical. Here again we have a working platform with fruitful gaps, and recurring themes.
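To make the ‘neighbors with discrete rules’ end of that spectrum concrete, here is a minimal Python sketch of item 1 above, an elementary cellular automaton (Rule 110, a standard example of my choosing): each cell is updated only from its immediate neighbors, in contrast with a matrix entry, which carries no built-in notion of neighborhood at all.

```python
# Minimal sketch: an elementary cellular automaton (Rule 110),
# i.e. neighbors plus a discrete local rule.

RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells: list[int]) -> list[int]:
    """One synchronous update: each cell looks only at (left, self, right)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15            # a single live cell on a ring of 31
for _ in range(8):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```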

See:

https://www.amazon.com/Artificial-Intelligence-Guide-Thinking-Humans/dp/0374257833/

https://en.wikipedia.org/wiki/Hilbert%27s_problems

https://www.amazon.com/Alan-Turing-Enigma-Andrew-Hodges/dp/0671492071/

https://github.com/lineality/object_relationship_spaces_ai_ml

https://www.amazon.com/Man-Future-Visionary-Life-Neumann/dp/1324003995/

https://www.amazon.com/Fancy-Bear-Goes-Phishing-Extraordinary/dp/0374601178/

For more discussion of theory and practice of the mixture of machine instructions and information values, or instructions vs. data, see:

1. “Fancy Bear Goes Phishing”, by Scott J. Shapiro

https://www.amazon.com/Fancy-Bear-Goes-Phishing-Extraordinary/dp/0374601178/

2. “The Man from the Future, The Visionary Life of John von Neumann”, by Ananyo Bhattacharya

https://www.amazon.com/Man-Future-Visionary-Life-Neumann/dp/1324003995/

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
