Feedback on Language Meaning: AI, Overfitting, Niches, Potemkin Villages, Procrustes, Fraud & Disinformation

Geoffrey Gordon Ashbrook
Nov 20, 2023

--

Potemkin Villages, Procrustes, Fraud & Disinformation

AI & People Pushing Boundaries: Information & System Health, Hygiene, & Epidemiology

2023.11.18 Geoffrey Gordon Ashbrook

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” ― Upton Sinclair ~1934

How should we differentiate between, on the one hand, jargon that simply differs between professional disciplines and, on the other hand, Potemkin Villages of language that deviate from known usage until they either no longer correlate with generally understood meanings or, in more serious cases, diverge from reality itself into institutionalized belief in the fictional?

The example of Orwell’s fictional-dystopian portrayal of institutional language-drift, and of dangerous human tendencies to follow arbitrary language into beliefs that diverge from reality, is probably too extreme an example and may even have the opposite effect on psychology: ‘Well, if it’s not as bad as Orwell’s worst-case scenario, then it’s fine!’

1. Definitions, tests, and measures are difficult, and a lot of real-world data is messy and variable, which is a major part of this topic: it requires time, effort, and resources to do a good job at something.

2. Slippery slopes: Where does the difficulty of rigor and the tediousness of best-project-practice and coordination intersect badly with H.sapiens-human tendencies to form cliques, haze, enforce pecking orders, be obscure, and use private codes, so that the people involved start literally diverging from reality into their own fantasies?

3. Meritocracy is not a random number generator, at least not in the case of fraudulent claims.

4. “Drift” and First Reactions

A classic example that might actually fit well here is gradually slurring speech, which is important to be able to detect for medical analysis and for diagnosing and monitoring the progression of diseases and maladies.

The first reaction to the detection of any malady, in the norms and customs of cargo-cult-humanity, is to become offended, to deny it, to require other people to deny it, to cover it up, and if you can’t cover it up, change language and perceived history to effectively cover it up: if you can’t fake that your behavior fits the norm, redefine the norm so that it fits your diseased behavior.

This also relates in general to how errors and mistakes are handled or mishandled by people and institutions. H.sapiens-humans so love piling on hazing and bullying for any mistake or infraction that they will resort to inventing infractions just to be able to enjoy the experience of causing harm and suffering in others.

5. Pre-participants as human shields: (a recipe for societal and planetary suicide that I do NOT recommend, just in case that is in any way unclear)

One of the most (deliberately) invisible and harmful-to-society cases of ‘insulating’ a system so that it has no feedback for coordinating and checking and peer-reviewing is the case of institutions that claim to be in the business of “education,” especially in anti-intellectual and child-hating cultures like the United States, where you don’t even have to pretend to be anything more than a dropbox to put the future of humanity into so that it can be deliberately neglected. And that’s the brilliance of the business model: no one wants to talk about deliberately abusing children, but if you arrange it in just the right way, no real-people have any information about what is happening. And you can always throw up smoke screens of disinformation, classic cynicism: “These times are just so politically polarized that…let’s change the subject.” No real-people raise any objections. You don’t have to force someone to ignore something that they don’t want to look at in the first place.

6. Objective Feedback With Technology: Don’t Shoot The Messenger

As any pseudo-Machiavellian portrayal of sycophantic advisors to a monarch might suggest, good feedback when you are surrounded by ‘yes-men’ can be in such short supply that significant management problems have likely happened in nearly all institutions, at every scale. This is not a small or isolated problem. Distortion of communication feedback is so pervasive that most people probably fall into one of two camps: defeatism, ‘it’s all the fog of war but you do the best you can,’ or complete insanity: ‘there is no truth, embrace the nihilism, will-to-power, attack everyone, and believe whatever you want; not only does everyone lie, there is no such thing as a lie.’ (Again, this is a recipe for societal and planetary suicide that I do NOT recommend, just in case that is in any way unclear.)

As with physical therapy and some speech therapy (‘SLP’ is not the clearest acronym), you can get basic ‘feedback’ about the state of a process without much technology. Raise your arm against a measuring stick the same way every morning and you get yes-man-proof feedback about whether your range of motion is changing. And contracting ranges of motion are actually a huge and important general-system area, simple though the measurement may seem. You might be humiliated when (not if) your range of motion starts to contract, and you will likely feel the pressure to engage in a coverup of some kind. But first steps first: you can get accurate feedback, and that’s important.
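
For illustration, here is a minimal sketch (in Python, with hypothetical numbers and an arbitrary threshold, not any clinical protocol) of the kind of yes-man-proof log this paragraph describes: record one range-of-motion measurement each morning against the same measuring stick, and flag a sustained contraction by comparing the most recent week against an earlier baseline week.

```python
# Minimal sketch: hypothetical daily range-of-motion log (measurements in cm).
from statistics import mean

def flag_contraction(daily_cm, baseline_days=7, recent_days=7, tolerance_cm=2.0):
    """Return True if the average of the most recent window has dropped
    more than tolerance_cm below the average of the baseline window."""
    if len(daily_cm) < baseline_days + recent_days:
        return False  # not enough data yet; keep measuring
    baseline = mean(daily_cm[:baseline_days])
    recent = mean(daily_cm[-recent_days:])
    return (baseline - recent) > tolerance_cm

measurements = [62, 63, 62, 61, 62, 63, 62,   # baseline week
                61, 60, 59, 59, 58, 58, 57]   # most recent week
print(flag_contraction(measurements))  # True: the range has contracted
```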

A measuring stick, though it took hominids a very long, hard climb to be able to think of and make one, is not super-high technology. A clock, which can be interestingly tricky to make more and more accurate and precise (whether or not those are considered to mean different things), is also not super-high technology but is instrumental in measuring your fitness: How long does it take you to make a movement, walk a distance, say something, or respond to something?

For spoken language, a physical measuring stick is…not usually very useful. A clock can be used to see if your speech is slowed or, as is an important symptom to monitor in some cases, becoming involuntarily too rapid. With a bit of fancier technology you could measure the volume of speech to see if a person’s speech is too quiet (or too loud, if that’s a symptom to look for). But when you are looking for problems such as mumbling or specific sound formation, you will need something more computerized, and something more like machine learning. (Note: some ‘calculations’ can be done in clever analogue ways; the Claude Shannon biography by Jimmy Soni and Rob Goodman gives a better-than-usual account of the history of these devices, from tide prediction to ballistics and up to Vannevar Bush’s analogue-computing machines.)
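
As one hedged sketch of what that “bit of fancier technology” could look like: assuming the speech has already been recorded and loaded as a mono array of samples in the range [-1.0, 1.0] (for example with the soundfile or librosa libraries, which are my assumptions here, not something named in the text), an overall loudness estimate can be computed as an RMS level in decibels and compared against a person’s own baseline.

```python
import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """Root-mean-square level of a mono signal, in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(float(rms), 1e-12))  # floor avoids log(0) on silence

# Hypothetical usage: a random stand-in for one second of 16 kHz audio.
samples = np.random.uniform(-0.1, 0.1, 16000)
print(f"speech level: {rms_dbfs(samples):.1f} dBFS")
```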

Even in the era before large or foundation models (before ~2023) you could have used a variety of non-deep-learning data-science and statistical methods either to look for specific disease symptoms (in physical movement or speech) or to monitor progress toward the targets of routine physical and speech therapy.

During the years when my father was in declining health, I had ever more frequent discussions with doctors, physical and speech therapists, and occupational therapists, and any efforts I made to systematize, add rigor to, and create tools for measurement were resisted and rejected by everyone, in every way, on every level, in every case. This is not to say the therapists were unhelpful; they were, especially compared to the doctors, both helpful and instrumental. But when it came to giving therapists and patients (when patients have limited access to therapists) better tools to use, the gravity of the status quo always won, and for whatever reason there was absolutely zero follow-through, follow-up, or interest. I may of course have been a uniquely poor advocate, but resisting rigor is the story of global human history, in which my personal experience (however anecdotal) is not an outlier.

In 2022 it was interesting to think of how AI-ML could be pushed to extend the range of speech- and language-related problems that could be detected or measured (or what normal patterns could be measured). For example: if one person is speaking less loudly than others, or, perhaps more pertinently, if one person’s volume drops over time. While it may still seem simple, monitoring the volume levels of different speakers over time is not a task you could do automatically with, say, a ruler, a clock, a tape recorder, and some wires in a box. How about mumbling? How about slurred speech? How about specific articulated sounds? How about the sound of swallowing? How about other sounds made when eating or drinking? Etc.
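
To make the “not something you could do with a ruler and a clock” point concrete, here is a minimal sketch with hypothetical data: given one average loudness reading (in dBFS, as in the earlier sketch) per recording session for each speaker, fit a straight line across sessions and flag any speaker whose level is trending downward. The threshold is an arbitrary illustrative assumption, not a clinical cutoff.

```python
import numpy as np

def flag_declining_speakers(sessions_db, slope_threshold=-0.5):
    """Return speakers whose per-session loudness (dBFS) trends downward
    faster than slope_threshold dB per session."""
    flagged = []
    for speaker, levels in sessions_db.items():
        if len(levels) < 3:
            continue  # too few sessions to estimate a trend
        x = np.arange(len(levels))
        slope, _intercept = np.polyfit(x, levels, deg=1)
        if slope < slope_threshold:
            flagged.append(speaker)
    return flagged

# Example: speaker "B" grows steadily quieter across five sessions.
readings = {"A": [-22.0, -21.5, -22.3, -21.8, -22.1],
            "B": [-22.0, -23.1, -24.4, -25.2, -26.3]}
print(flag_declining_speakers(readings))  # ['B']
```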

But in the new age after the breakthroughs of 2023 we have the ability to measure and get feedback on much more than slurred speech or basic muscular language pathologies (that’s the ‘P’ in SLP). We can look for Alzheimer’s-style forgetting. We can look at whether what a person says makes sense, or whether their speech has become disjointed and incoherent (or more so than usual), not just by looking at physical proxy indicators (which are still very important to look at) but also by looking at the language and language-concepts themselves.
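
As one hedged sketch of a possible (and decidedly imperfect) proxy for “does this speech hang together,” not a diagnostic method from the text: embed consecutive utterances with a sentence-embedding model (the sentence-transformers library and the 'all-MiniLM-L6-v2' model name are my assumptions here) and track the average cosine similarity between each utterance and the next. A falling score over time would be one weak signal of increasingly disjointed speech, to be read alongside the physical proxy indicators, never on its own.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

def neighbor_coherence(utterances, model_name="all-MiniLM-L6-v2"):
    """Average cosine similarity between each utterance and the next one."""
    model = SentenceTransformer(model_name)
    vecs = model.encode(utterances, normalize_embeddings=True)  # unit-length vectors
    sims = [float(np.dot(vecs[i], vecs[i + 1])) for i in range(len(vecs) - 1)]
    return float(np.mean(sims))

# Hypothetical usage: track this score per conversation and watch the trend,
# rather than treating any single number as a diagnosis.
transcript = ["I went to the store this morning.",
              "They were out of the bread I usually buy.",
              "So I picked up some rolls instead."]
print(round(neighbor_coherence(transcript), 3))
```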

(Note: At the end of 2023, combining large language models with non-language data is still a future possibility, but no doubt medical diagnostic tools that can utilize and combine multiple types of data will in some cases be able to further extend their usefulness to patients, therapists, and doctors.)

While it may be too early to elaborate too expansively on this, the ability to get feedback on language-meaning is potentially one of the most important possible breakthroughs in the history of language, in the history of life on Earth, in the history of all processes of all types (organic and inorganic) on Earth, or in the history of the solar system, given the severity of the bottlenecks in the vast gauntlet of problems with feedback in systems, the roles that feedback plays in systems, and the need for feedback in systems.

While actually changing history is usually a terrible idea that you don’t literally want to do (due to unintended consequences and who knows what else; not that we actually have time machines, so far as I know), as a thought experiment imagine how human history might have been different if people (all participants in projects) had always had tools to help with language feedback. Imagine if leaders (at all levels) could get objective feedback and detect biased feedback. Imagine if meetings and coordinations had some indication of whether they were drifting off into left field. Imagine if families had tools to prevent some bad decisions. Even if these tools were very limited, the cumulative effects on coordinated decisions made, and actions taken, might (might, this is still very speculative) be analogous to the effects of STEM-based health, hygiene, and epidemiology on the quality of life for people for whom the formalities are otherwise invisible (e.g. how many novels or history books mention soap? Its presence or absence fades away into the social-cultural narrative, but it has a profound shaping force on what happens). To clarify, the ‘1870–1970’ public health improvements are broadly considered to have had more of a transformative positive impact on human life than everything else in history combined, or, less extremely, a massive positive impact.

How many terrible decisions have been made, and then continued to be carried out, because the people in charge did not know that the plan was not working? This is the horrifyingly common Potemkin Village extreme of the yes-man pattern, where people will go to quite spectacular lengths to distort the information that other people get (including creating mobile fake towns to distort the data measured and reported by inspectors, from which the term ‘Potemkin Village’ gets its name: a literal mobile fake world for inspectors to be surrounded by wherever they travel).

And there is a cognitive slippery slope that enters when people start deliberately changing language and changing data: the effects of disinformation, as in the opening quote attributed to Upton Sinclair:

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

There is a perhaps elusive pivot within this quote that is very important.

A more moderate statement would be: “It is difficult to get a person to say something, when their salary depends on them not saying it.”

or in machine-learning terms:

“It is difficult to get a participant to say something, when positive reinforcement depends on them not saying it and negative-reinforcement triggers the participant’s deletion.”

But Sinclair interestingly extends this effect to perception and belief.

So let’s put this back into our time-travel experiment: What if people in the past had a tool that could help protect against distortions in perception and belief caused by distortions and mis-regulations of language?

Perhaps this is like the opposite of Giordano Bruno being literally burned alive in public for saying that other solar systems with planets might exist, or Galileo being held under house arrest for life for accurately describing what he saw when he looked through a telescope.

(Note: Simply being a famous quote does not make it true; it is up to you to check whether you think it is valid, and to decide what you do after that.)

People Pushing Boundaries:

On the other hand, imagine an academic institution, either a school or an institution in charge of exams, creating and using AI for deliberate distortion of language, in a Nietzschean power grab magnifying the harmful effects of one incompetent or malicious teacher in one classroom lacking the checks and balances of feedback, to systematically reward and punish larger numbers of students based on deliberately illogical and arbitrary distortions of language and disinformation. Again, Orwell’s example may be so extreme that it does not seem to apply to lesser examples of deliberate language distortion (where language distortion and thought, belief, and perception distortion can be closely connected).

People will push boundaries, and as history is our guide I am highly confident that this has already been attempted on some level in the same year 2023 (hopefully not successfully). Whether it is based on zealotry, malice, financial greed, catastrophic stupidity and short-sightedness, or whatever other shortcoming, the results of deploying disinformation are destructive, and an act of deliberately deploying a force of destruction against children is just that: an act of deliberately deploying a force of destruction against children, where children are literally the future of the world.

To be pragmatic, in a marginal and unobtrusive way a divergence of language and jargon between and across groups is completely unavoidable and harmless (even within disciplines terms too often have multiple definitions). And over-homogenizing can certainly be inappropriate: remember Procrustes?

In fact, this organic messiness of ever-evolving language (and concepts) is part of why GOFAI (Good Old Fashioned AI) found real-life tasks ultimately insurmountable, and why statistical Machine Learning was able to do many more narrow tasks than GOFAI but was not able to extend deeply into other areas and uses (such as language meaning).

The existence and value of diversity is one of the very important things that feedback can protect. Imagine requiring everyone in the world to wear one-size-fits-all clothing. Imagine a race where all the runners have to wear size-42 shoes. Imagine a market where only one product is permitted: everyone in the world has to sell peanuts. From ancient times to shockingly recently in Chinese history, how many people have starved to death because a strong-man (and it usually is a man…why could that be…) forced all farmers to follow a uniform, sanitized dogma that he thinks sounds very nice? And how long does that continue, and keep being covered up, after people know it is happening? And yet, heartbreakingly, many H.sapiens-humans, like moths to a flame, are irresistibly attracted to hyperbolic monism and will devote their lives to enforcing ‘the one true solution.’

I very highly recommend Timothy Snyder’s book about Eastern European history, “Black Earth,” the name being a reference to the rich, black, fertile soil (of Ukraine), while, ironically, deliberate Soviet political mismanagement resulted in millions of deaths by starvation. And yes, ‘deliberate mismanagement’ was not a typo. Systems without feedback can become very tragically broken.

As Wikipedia summarizes it, in Neal Stephenson’s “The Diamond Age” (which, to be honest, I did not finish reading, and I am put off by his ill treatment of women characters) there is a science-fiction future where society is compartmentalized into “phyles,” or ~tribes, which can be based on shared language and perceptions of reality, not only on where a person happens to be located geographically and temporally (back to measuring sticks and clocks), as in historical ‘tribes.’ How people form communities influenced by print media and the internet, and the potential formation of ‘bubbles’ and ‘echo chambers’ that lack feedback, are likely important social processes that we have much to learn about.

Whether or not we can find good tools for monitoring and navigating these interlocking levels and sets of language, perception, and belief, feedback is feedback, and the tools of STEM will be of great value.

Note on the term ‘disinformation’

The begrudgingly adopted contemporary English term ‘disinformation’ (which American optimists did not want to admit was real or possible) effectively comes from the deliberate Russian “dezinformatsiya.” A disproportionate amount of the literature about this somewhat recently named concept comes from “information war” and “cyber war” topics and literature, as well as Soviet-era events in which the Soviet state deliberately weaponized “disinformation.” After the Soviet era ended, public disinformation events came to a head, such as reported disinformation attacks by Russia’s state government against the population of the US from 2016 to 2020 to generate and exacerbate election-outcome-related discord, tension, confusion, extremism, etc. So books using that specific term are often Russia-related. Interestingly, when Americans do admit the phenomenon is real, the term is even more rarely, if ever, used to refer to actions, programs, and policies carried out by the US executive branch between 2016 and 2020. While the term itself may not have been used historically or internationally, similar deliberate methods may have been used in history, outside the ‘cold war’ setting, and have been described in different languages with different terms. And as the effectiveness of disinformation attacks has been publicly demonstrated, their use by states, regimes, institutions, and individuals is now global and more general.

I would argue that disinformation as a set of system-processes is more general than suspected.

For detail and nuance I recommend reading not only Richard Stengel’s well-polished “Information Wars,”

https://www.amazon.com/Information-Wars-Global-Against-Disinformation-ebook/dp/B07R6TSX9Z/ which is also delightful to read for perspective after reading, say, Alexis de Tocqueville’s “Democracy in America” and “The Federalist Papers”

https://www.amazon.com/Democracy-in-America-audiobook/dp/B0044KQ0SI/

https://www.amazon.com/The-Federalist-Papers-audiobook/dp/B004HFK14E/

but also books about Eastern Europe and the history of the internet, such as:

The brilliant Fiona Hill, who is a great writer and speaker:

https://www.amazon.com/There-Nothing-You-Here-Twenty-First/dp/B08XY9782K/

and her authoritative book on information in politics:

https://www.amazon.com/Mr-Putin-Operative-Kremlin/dp/B084L1179W/

And Matt Potter’s super-fabulous book with a not-great title (very well researched, with great insights into the post-1990 history of Eastern Europe).

https://www.amazon.com/We-Are-All-Targets-Unleashed/dp/B0B831PN81/

Scott J. Shapiro’s book on information ne’er-do-wells is very well written with, nicely, the opposite of a fear-inducing agenda:

https://www.amazon.com/Fancy-Bear-Goes-Phishing-Extraordinary/dp/B0BG5WJS57/

“Sandworm” has important parts of the puzzle but is a bit unnecessarily gosh-wow in style.

https://www.amazon.com/Sandworm-Andy-Greenberg-audiobook/dp/B07RGRTZM6/

And of course Joseph Menn’s history of cybersecurity…

https://www.amazon.com/Cult-of-Dead-Cow-Joseph-Menn-audiobook/dp/B07RX456JM/

The topic of cybersecurity may fit very well as a final tie-in. The security of the computer tools we use, underlying the whole narrative above, is very relevant to this topic. Feedback about how software is working or not working, and the social, political, legal, psychological, cultural, total mess that results from a researcher trying to explain a problem he found so people can fix it, is very much a part of this whole quagmire. It is perhaps one of the better-documented areas where very concrete STEM feedback, about very concrete STEM processes that affect people in very concrete ways, is hushed up, persecuted, made illegal, hidden, and subjected to everything else people can think of doing to deliberately obscure reality, distort and destroy the feedback message, and torture, blame, slander, and harm the messenger. This story is not over.

See:

https://www.amazon.com/Black-Earth-Holocaust-History-Warning/dp/1101903473

https://en.wikipedia.org/wiki/Clique

https://www.barnesandnoble.com/w/i-candidate-for-governor-upton-sinclair/1131072458

https://www.amazon.com/A-Mind-at-Play-audiobook/dp/B073KVK1K6/

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
