Technology, Biology, and AI Goals

Geoffrey Gordon Ashbrook
Nov 20, 2023

--

computer science and political philosophy

2023.11.17 Geoffrey Gordon Ashbrook

Assumptions about technology and biology in 2023 seem to be backwards in a few ways. The twist to focus on here is that while biological intelligence is the goal for AI (as many assume, and as Michael Wooldridge says in his excellent overview book, which I do recommend), biology without technology under-performs very significantly in many areas.

What I mean by “technology” is admittedly a bit broad here, but hopefully I can make a few clear-enough points without getting lost in terminology:

1. Biologically speaking, H.sapiens humans have been around for hundreds of thousands of years, and yet only very, very recently, and very, very begrudgingly, have basic STEM-related discoveries been made. Changes in the history of science in the last 200, 100, 50, 25 years are so huge that it is frankly puzzling how some things have taken so long, not to mention that in 2023 there is still no general-unified-STEM concept, which is a huge red flag. (Dr Becky Smethurst’s “A Brief History of Black Holes” is a wonderful science-history book and also highlights just how little we knew about the universe until amazingly recently.) The humanity that we think of in 2023 as ‘biological-humanity with no computer-integration’ is not behaviorally or culturally representative of what most of humanity was like for most of history, which makes the agenda of trying to define humanity against technology look increasingly precarious.

2. Steven Pinker (most specifically in “The Better Angels of Our Nature”) and Jared Diamond (in a gazillion books; read them all) have pretty soundly made the case that biological life without technology is (culture aside) a bit more Hobbesian than we like to romanticize. Even with all the tradeoffs and the occasional Norbert-Wiener-esque or Orwellian misuse of technology, the overall trend toward improving most people’s lives across all known benchmarks is very unambiguous: the combination of biology and technology is simply better, including better for biology. (Obviously this is a broad-stroke simplification in this paper. Pinker’s idea is that the overall trajectory of more Enlightenment-STEM is unambiguously in a social-societal-life-improvement direction, not that the challenges of improvement are simple or trivial, or that the processes of integrating STEM into society are simple or automatic or already perfect.)

Yet the overall orientation (of H.sapiens-humans to technology and AI in 2023) appears to be:

A. That we need to protect good-pure-biology from bad-technology (in a way so much like Jean-Jacques Rousseau that John Maynard Keynes’s ghost is no doubt smirking at us right now), as if life were a day-to-day struggle to keep projects going by pushing STEM away from them.

B. That in order for AI to act in an intelligent, competent, and civilized way, AI must emulate romantic biology and avoid the poisoned apple of technology. (For example, the famous letter signed by many experts in technology saying, in reaction to the rise of OpenAI in 2022, that H.sapiens-humans should not only ban and prohibit accelerated deployment of untested technology but ban research itself…yes, ban study, peer review, analysis, testing, and evaluation: stop STEM to improve functionality. Think about that. Does that make sense? How is that supposed to work? If we identify research and discussion and observation and testing by even academic specialists as the cause of the problem, how are things going to improve during a self-imposed dark age? When totalitarian dictators in history have purged academics and scholars, it was usually with the cynical aim of killing all the smart people so that the leftover people would be easier to control by force…not usually with the delusion that this would somehow improve academic research. But in this case we seem to be trying to create a spontaneous romantic-nonlocal-timewarp intelligence boost, and an overall global increase in wisdom, by freezing, ending, and destroying STEM. How exactly is that supposed to work? How is that supposed to make the world smarter and better informed: by not looking at things, not testing things, and not talking with each other about what we’ve learned? Looking at the problem is not the problem.)

That is all…very odd.

Managing Modernism

And, at the same time, when it comes to the negative effects of introducing, or quickly introducing, new technologies and STEM into social-cultural equilibria, there is the whole fascinating yet rarely discussed topic and history of “modernism” as a social-cultural pejorative (indeed likely contributing to the social and psychological causes of the world wars and political extremism). And yet again ‘naming things’ is a problem, and the term ‘modern’ is overloaded and problematic to discuss:

1. now: “modern” meaning contemporary relative to when the author is speaking;

2. futuristic: “modern” meaning the future beyond the present relative to when the author is speaking;

3. stage: “modern” meaning a cultural development stage in sequence or cycles;

4. nihilistic: “modern” meaning a form of mental illness;

5. art: “modern” referring to art movements in ~1890–1940 (there are various art forms sometimes called ‘modern’, ranging from photorealism to minimalism to art deco to abstract expressionism and many others, which are often completely different from each other);

6. 1890’s Europe: ‘modern’ referring to one specific time and place in history;

and likely more as well.

Here the topic is ‘modern’ as in ‘Modern Malaise’ or cultural despair, mental suffering, a ‘feeling of dislocation’, and a ‘nihilistic’ dissolution of meaning widely described during a specific time period from around the late 1890s through at least Thomas Stearns Eliot’s The Waste Land in the 1920s, which was self-described or self-diagnosed as the harmful effect of change and progress separating the human mind from the familiarity of “traditional” life and culture (with this edenic, joyous, past golden age never being specifically defined).

A Phantom Fear of the 1890s?

While we should better understand and plan and manage how cultural traditions and STEM developments interact (likely involving STEM rigor to do that properly), where is the mass-panic that future technology will lead to a repeat of the social-cultural-psychological problems amply written about as the problem of pejorative Modernism from (very roughly) 1860–1940? Nowhere. If we really fear the bad influence of technology on biology…why aren’t we fearing a repeat of known social problems from the past?

Aside from the incongruity of there being no common concern about a repeat of a recent world-warping massive problem, teaching AI (and all people-participants) about the problem of modernism is probably a good idea.

As an admittedly very broad-brush overview (that super-snob experts are likely to find insulting, sorry), a quite nice, short, and bite-size (as in several hours, less than 24…) romp through topics including the puzzles of modernist-extremist-despair is Dr. Lawrence Cahoone’s “The Modern Political Tradition: Hobbes to Habermas” lectures (on Audible!). The lectures are a very fun walkthrough of the context and turmoil of Modernism, as well as the topics of political philosophy that should likely be more a part of dialogue going ahead. As usual with any survey, it is good to get deeper detail wherever possible and to form your own opinions about the data provided (and read Shakespeare in the original; read Tony Judt and Timothy Snyder; read Sir Eric Ashby; read new books; read old books). That said, Cahoone’s mini-lectures are delicious (and you can zip through at ~1.5x speed).

The topic of how introducing immature technologies, and misunderstanding (or misusing) the label of ‘science’, can go wrong is very real and important and fascinating. We should be more familiar with some of the stranger parts of the French Revolution, for example, where mobs of people created cultish temples of science, as if going to a science-cult-temple to pray would bring ‘science-blessings’ to society in the same way that worshiping at a Catholic church would bring graces of the divine from alien alternate dimensions (again, this is an oversimplification to compress a chapter of history into a sentence; see Russell Shorto’s “Descartes’ Bones: A Skeletal History of the Conflict Between Faith and Reason”). There are many examples where people have tried to replace time-tested traditions with premature technologies, with unhelpful or terrible results (such as pressuring women to stop breastfeeding ‘in the name of science’ because of some man’s utterly non-STEM-based desire to bully other people). The problem with these tragedies is not ‘too much STEM’; rather, it is too little STEM. ‘Science’ in 1790 was more a distant precursor to what we call science now. We need all the advances that we have, and more yet to be aggregated, to properly handle data and results, testing and procedures. Not having better tools to understand whether you are disrupting something that you really should not be disrupting is not going to give you better tools to understand whether you are disrupting something that you really should not be disrupting.

We should spend more time not only understanding but furthering our evolution from the often hyper-masculine, cargo-cult notions of science-positivism (that litter the history of science and pre-science) toward a, in many ways significantly different, more mature general-STEM: one that includes not only interdisciplinary and diverse fields but also a more nuanced, respectful, and less cavalier view of a large and diverse world, a world not to be oversimplified or reduced just for the agenda of reduction for reduction’s sake, and certainly not to be seen as a target of sporting conquest.

Disturbance Regimes, Biology, and Ecology: STEM & System Collapse

While this is a slight side-track and a shameless plug for my main research topic over the years, it should make sense that a concern for improving society and questions for how STEM and technology are involved should include some focus on disturbance regimes and system collapse.

Like moths to a flame, biological H.sapiens-humans like violence, bullying, investing in bubbles, exciting ‘revolutions!’ that end in mind-bending destruction, viral contagion just to enjoy the fireworks explosiveness of it, etc. STEM describes these as ‘not-a-good-idea’ based on historical and measurable evidence.

Ecology (which is actually rather close to computer science and data science, being focused on N-dimensional matrices of non-biological AND biological systems (yes, that does sound like embedding vectors!)) provides a foundation of research on disturbance regimes. Whether it is the “viral” language patterns researched and described by William S. Burroughs (but usually attributed to Richard Dawkins as “memes”), or forest-fire management, or disruptions to traditional arts including dance, music, story-telling, cooking, textiles, design, horticulture, etc., or the demand-distortions that plague industries such as publishers (who really do not want to be reduced to being vendors of base pornography and hate speech, or to rely on unpredictable ‘viral’ fad products), or political extremism, or epidemiology and the spread of biological viruses, we should try for a STEM approach to not only managing disturbance regimes and extra-regime disturbances individually but more generally modeling system collapse.
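To make the embedding-vector analogy concrete, here is a minimal, purely illustrative Python sketch (the species counts and survival rates are hypothetical, not real ecological data): an ecosystem state is represented as a vector of abundances, a disturbance as a transformation of that vector, and the before/after states are compared with cosine similarity, exactly as one would compare embedding vectors.

```python
import math

def cosine_similarity(a, b):
    """Compare two state vectors, as one would compare embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def apply_disturbance(state, survival_rates):
    """A disturbance modeled as an elementwise survival-rate transformation."""
    return [s * r for s, r in zip(state, survival_rates)]

# Hypothetical ecosystem state: abundances of four species.
before = [100.0, 50.0, 25.0, 10.0]

# A fire that hits some species far harder than others.
after = apply_disturbance(before, [0.2, 0.9, 0.8, 1.0])

# How far the system has drifted from its prior state:
# 0.0 means unchanged in direction; larger means a bigger structural shift.
drift = 1.0 - cosine_similarity(before, after)
```

The point of the sketch is only the framing: once system states are vectors, the same similarity, clustering, and trajectory tools used on embeddings can, in principle, be pointed at disturbance regimes.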

While most people find this topic simply too boring and tedious, I am very optimistic about feasibility and practical low-hanging fruit in this area, and I strongly recommend that these topics be part of the metrics and measures when talking about STEM and AI and society. There is a lot of natural dovetailing around generalized STEM, managing system collapse, and human-style ethics (ethics of the good kind, not the pathological things that H.sapiens-humans actually do).

Let’s Make Some Plans

Based on this, what would be a more common-sense approach to teaching AI about integrating technology and biology for better outcomes and minding the quagmires of nihilistic modernism-malaise?

1. Teaching AI history

2. More integration of STEM, Biology, and AI

3. Teaching AI to use technology to help biology

4. Articulating biological-social-health-management goals in a context of known past problems

5. Generally using historical data to ground and specify concerns and priorities

6. Combining AI technology with a general-STEM approach to studying disturbance-regime management and general system collapse.

-

1. not isolating biology from STEM

2. not isolating STEM from Biology

3. not isolating AI from STEM

4. not isolating history from STEM

5. not isolating biology from history

6. not focusing on fictional problems

7. not ignoring actual known problems

8. not getting things completely backwards

9. not being so vague that you are literally saying nothing clear at all

Our target is not some kind of 1800s “romantic” undefinable cult-dogma of anti-technology-purity, or some pre-technology-biology that measurably out-performs ‘technology’ (whatever that means), but rather the health of a planet and network of societies that are built with and knit together by interlocking STEM areas. Do we want a world with no environmental testing, or better environmental testing? No health care systems, or better health care systems? No recycling, or better recycling? No energy efficiency, or better energy efficiency? No voting infrastructure, or improved voting infrastructure? No education (or black-box education), or better, more auditable, education? More institutional accountability, or less accountability? More well-defined metrics for ESG-type projects, or fewer (or no) well-defined metrics? More biodiversity and a better understanding of it, or less biodiversity and more ignorance about it?

While it may be a challenge to not disrupt value, function, and meaning, either with new developments or with blind applications of existing or transplanted methodologies (foot-binding? What could possibly go wrong…), STEM tools are vital in evaluating, supporting, and maintaining the value, function, and meaning that has been so hard won and so tragically depleted by history’s ravages.

See:

(Regarding the related strange assumption that computer science and AI are not closely connected to biology (in history, research, innovations, etc.), see the Biology, Psychology, Math paper:

https://medium.com/@GeoffreyGordonAshbrook/biology-psychology-math-ai-broad-or-ai-narrow-0e0a2a435ba8 )

[Keynes’s observation that we tend to be influenced by past views more than we are aware.]

“Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”

― John Maynard Keynes

https://www.amazon.com/stores/Jared-Diamond/author/B000AQ01ZS

https://www.turing.ac.uk/people/researchers/michael-wooldridge

https://www.amazon.com/Brief-History-Artificial-Intelligence-Where/dp/1250770742

https://www.amazon.com/Better-Angels-Our-Nature-Violence/dp/0670022950/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=1700221628&sr=1-1

https://www.thegreatcourses.com/courses/the-modern-political-tradition-hobbes-to-habermas

https://www.amazon.com/Modern-Political-Tradition-Hobbes-Habermas/dp/B00KNLZWEA/

https://en.wikipedia.org/wiki/John_Maynard_Keynes

https://www.amazon.com/Brief-History-Black-Holes-everything/dp/1529086701

https://github.com/lineality/definition_behavior_studies

https://www.amazon.com/Descartes-Bones-Skeletal-History-Conflict/dp/0307275663

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
