Ants vs AI

Geoffrey Gordon Ashbrook
Nov 9, 2023


“Are we really the operators of our brains?” (Deborah M. Gordon)

standard agent benchmarks

2023.11.08,09 Geoffrey Gordon Ashbrook

This is being written a few days after several events in 2023:

1. The first “OpenAI dev day” event, which (I think) did an impressive job (amid much ambiguity) of finding a coherent, non-controversial, and progressive voice to frame AI in a pragmatic and optimistic way.

2. A much less viewed (and less dramatic) interview that Yoshua Bengio gave on TWIML (This Week in Machine Learning), which was peculiarly bereft of insights or accurate details.

3. The annual Wolfram Technology Conference, with a keynote by pioneer Stephen Wolfram.

4. As one more media note, there is a wonderful talk, also from 2023, featuring both Geoffrey Hinton and Fei-Fei Li (creator of ImageNet), and recollections of the history of AI are most likely important. Before the vague and reactionary acceptance of deep learning and artificial neural networks there was a vague and reactionary rejection of deep learning and artificial neural networks (with an interesting history, including Minsky and Papert’s 1969 book against neural networks: “Perceptrons”). In how many places do we substitute gang-allegiance, disinformation, and flailing violence for understanding?

That all of these events exist is fantastic, and the daylight between them shows, both in terms of future potential and present befuddlement, how tentative and fragmentary discussions of ‘this’ topic (really several undefined topics) are.

As we struggle as a species to focus our powers of consciousness on AI technologies, there are still, in 2023, massive sections of the overall surface area that are not being covered, along with disinformation and popular obsessions that are not constructively connected to the topic at all. For example, a disproportionately large share of the time and discussion devoted to AI declares a cynical victory over AI if a single out-of-context ‘failure’ can be induced or imagined. E.g. “I asked increasingly obscure questions and eventually got an answer I did not like. Victory through Violence! I broke the AI! This proves AI does not possess true intelligence and consciousness! I won!” represents a bewilderingly large portion of the overall discussion of AI. There is also the tragedy of how much this reflects the bullying and trolling of toxic ‘human to human’ interactions.

As usual, I would like to advocate for a broader, deeper, clearer discussion of AI that is directly practical. There are many topics and project-type performance-benchmarks that seem to sit forever outside the echo-chamber of familiar rhetoric that overwhelms discussions of AI (see the link to the full paper/series below).

While Stephen Wolfram’s talk is less polished as a carefully curated PR event (compared with OpenAI’s and Apple’s usual skillfully-created metropolitan productions), I think it represents an honest window into the simultaneous ambiguity and great potential of how STEM technologies are, or are not, fitting together in a primate society that is either ambivalent or hostile to a nascent network of STEM fields, a society in which neither a notion of generalized STEM nor a self-awareness of the psychology around STEM (see Sir Eric Ashby) has matured.

OpenAI’s ‘agents’

The jargon terms ‘agent’ and ‘agent-based’ have been around for decades in AI research, struggling for clear and useful definitions, but a new chapter may be opening where OpenAI is concretely introducing the term ‘agent’ as a useful product for any non-technical H.sapiens-human to put to good use. My goal here is not to poke postmodern holes in ideas or semantics around “agents,” but to encourage a practical and organized aggregation of tasks that we would like “agents” to be able to do, framed concretely enough that we can see empirically which tasks agents can do and which they cannot yet do, so that we can better understand what is what, how things work, and figure out how to do more things. Talks such as OpenAI’s dev-day make this task seem trivial: of course we know what we want to do, and of course it all works! The Wolfram event shows a bit more ambiguity, where different roles (researcher, engineer, different professions, etc.) have often-divergent needs we are still mapping out, with the tools to fit those needs still a work in progress.

Compared with the pessimism about progress before ~2022, large or “foundation” models, especially in language, have continued to make significant progress.

It is good that there are more and more benchmark-tests coming out by which we can at least try to measure and compare the performance of different technologies (e.g. the paper on Hugging Face’s Zephyr, comparing different approaches to similar ends: https://arxiv.org/abs/2310.16944, also a late-2023 release).
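For concreteness, here is a minimal sketch, assuming the Hugging Face transformers library, PyTorch, and enough GPU memory for a 7B model, of querying the cited zephyr-7b-beta model directly; the prompt and generation settings are illustrative only, not a prescribed benchmark.

```python
# Minimal sketch: query zephyr-7b-beta via the transformers pipeline.
# Assumes: pip install transformers torch; enough memory for a 7B model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr is chat-tuned, so prompts are built from role-tagged messages.
messages = [
    {"role": "system", "content": "You are a concise, helpful assistant."},
    {"role": "user", "content": "How might many simple agents coordinate "
                                "a search without a central controller?"},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```

The same small harness can then be pointed at any agreed-upon list of tasks, which is the spirit of the comparison below.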

Into this colloidal substrate of insightful progress and blind confusion I would like to plop a few non-rhetorical comparisons: agents and ants. With apologies to the ghost of E. O. Wilson, who would probably prefer that I knew more about the biology of ants, I would like to propose a juxtaposition of November 2023’s best AI “agents” with the daily coordination and tasks done by ants: can humanity’s most expensive AI do what a few humble, tiny, economically-thrifty ants do?

This is not meant to be a final, ideal comparison: ants and AI. Anyone with domain-knowledge can likely find many such examples; the illustrative comparison could be birds, or neurons, or fungi, or bees, or jellyfish, or aberrant cells. The goal here is to use common empirical comparisons from biology, or from any projects, to get a concrete picture of what embodies and manifests what we call ‘AI,’ and which project-tasks those AI are or are not capable of doing, either at all or at varying levels of difficulty.

Ants and AI: Can ‘agents’ do what ants do?

Possible standard task and behavior measures (a code sketch follows these lists):

  • coordination
  • signals, data, and instructions
  • actions
  • outcomes

at the

  • cell level
  • nervous system level
  • body level
  • population level

e.g.

  • search
  • climbing
  • aversion
  • maintenance
  • “perception” vs. “data processing”
  • information handling

Notes:

  • fast and slow processes
  • centralization & decentralization
  • distribution and coordination
  • cost
  • resources
  • defined connections
  • variation and diversity
  • pathogens and pathologies
  • secondary chemistry & plant-linguistics
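To make these measures empirical rather than rhetorical, each claim can be written down as a checkable record. Below is a minimal Python sketch; the class, the vocabulary lists, and the two example rows are all hypothetical illustrations, not data:

```python
# Hypothetical sketch of a task/behavior scorecard for comparing systems
# (an ant colony vs. an AI "agent") on the measures listed above.
from dataclasses import dataclass

MEASURES = ["coordination", "signals_data_instructions", "actions", "outcomes"]
LEVELS = ["cell", "nervous_system", "body", "population"]
TASKS = ["search", "climbing", "aversion", "maintenance",
         "perception_vs_data_processing", "information_handling"]

@dataclass
class TaskResult:
    system: str      # e.g. "harvester_ant_colony" or "llm_agent_2023" (made-up labels)
    task: str        # one of TASKS
    level: str       # one of LEVELS
    measure: str     # one of MEASURES
    can_do: bool     # to be filled in from observation, not assumption
    notes: str = ""  # cost, speed, resources, failure modes, etc.

# Illustrative rows only; the point is that each claim becomes checkable.
results = [
    TaskResult("harvester_ant_colony", "search", "population",
               "coordination", True, "decentralized foraging"),
    TaskResult("llm_agent_2023", "search", "population",
               "coordination", False, "no persistent shared state"),
]

for r in results:
    print(f"{r.system}: {r.task}/{r.level}/{r.measure} -> {r.can_do} ({r.notes})")
```

The structure itself is trivial; the value is that “agents can do X” stops being rhetoric and becomes a row that can be tested, costed, and compared.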

By chance there were, also in October of 2023, both a nice podcast (put out by Stanford Neuroscience) and a spectacular-looking book (I just ordered a copy) released about ant behavior!

ant populations and individuals: https://podcasts.apple.com/us/podcast/where-ant-colonies-keep-their-brains/id1664298141?i=1000633460476

or

https://neuroscience.stanford.edu/news/where-ant-colonies-keep-their-brains

Book! “The Ecology of Collective Behavior” by Deborah M. Gordon, Oct 24, 2023

- https://www.amazon.com/Ecology-Collective-Behavior-Deborah-Gordon/dp/0691232156/

To anyone interested in AI, this podcast on ants should light up your brain.

Zooming Out & Zooming In

Given the progress we are making with “large” artificial neural networks, how close has science gotten to mapping out even the smallest and simplest of biological neural networks?

And…where are we with modeling or re-creating ant brains or other small organism nervous systems with hardware or software? Also late 2023:

https://research.princeton.edu/news/unraveling-mysteries-brain-help-worm

https://neurosciencenews.com/brain-mapping-worm-behavior-23787/

“a so far insurmountable challenge”

As far as we have come, it is almost as though we have not yet started to discover what is, and what is possible, with information and minds.
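For a sense of what “mapping” means concretely: the C. elegans worm in the links above has a nervous system of just 302 neurons, and a connectome of that size is naturally represented as a weighted directed graph. Here is a minimal sketch using the networkx library; the neuron labels are real C. elegans cell names, but the particular edges and synapse counts shown are illustrative stand-ins, not measured data:

```python
# Minimal sketch: a tiny connectome fragment as a weighted directed graph.
# Assumes: pip install networkx. Edge weights stand in for synapse counts.
import networkx as nx

connectome = nx.DiGraph()
# add_edge(presynaptic_cell, postsynaptic_cell, weight=synapse_count)
connectome.add_edge("ASEL", "AIYL", weight=13)  # chemosensory -> interneuron
connectome.add_edge("AIYL", "AVAL", weight=2)   # interneuron -> command interneuron
connectome.add_edge("AVAL", "VA08", weight=7)   # command interneuron -> motor neuron

# Simple queries: who does a sensory neuron talk to, and is there a
# sensory-to-motor path through the graph?
print(list(connectome.successors("ASEL")))           # ['AIYL']
print(nx.shortest_path(connectome, "ASEL", "VA08"))  # ['ASEL', 'AIYL', 'AVAL', 'VA08']
```

Holding a 302-node graph in memory is easy; the “so far insurmountable challenge” quoted above is relating such a static map to the worm’s actual behavior.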

Also See:

OpenAI DevDay, Opening Keynote

https://www.youtube.com/watch?v=U9mJuUkhUzk

Geoffrey Hinton and Fei-Fei Li in conversation

https://www.youtube.com/watch?v=E14IsFbAbpI

Wolfram Technology Conference 2023: Stephen Wolfram’s Keynote Address

https://www.youtube.com/watch?v=XLStlH8h5-w

TWIML Yoshua Bengio — ep 654

https://www.youtube.com/watch?v=ojZB6fzpXGQ

E. O. Wilson

https://www.amazon.com/stores/Edward-O.-Wilson/author/B000AQ4776

Zephyr 7B

https://arxiv.org/abs/2310.16944

https://huggingface.co/HuggingFaceH4/zephyr-7b-beta

Small Brains

https://research.princeton.edu/news/unraveling-mysteries-brain-help-worm

https://neurosciencenews.com/brain-mapping-worm-behavior-23787/

“The Ecology of Collective Behavior” by Deborah M. Gordon

https://www.amazon.com/Ecology-Collective-Behavior-Deborah-Gordon/dp/0691232156/

https://podcasts.apple.com/us/podcast/where-ant-colonies-keep-their-brains/id1664298141?i=1000633460476

https://neuroscience.stanford.edu/news/where-ant-colonies-keep-their-brains

About The Series

This mini-article is part of a series to support clear discussions about Artificial Intelligence (AI-ML). A more in-depth discussion and framework proposal is available in this github repo:

https://github.com/lineality/object_relationship_spaces_ai_ml
