When is AI good AI?

Andrew Zolnai
Zolnai.ca
3 min read · Nov 28, 2023

Following When is AI not AI? and Stop AI scraping your Internet data, my blog post here showed some "AI for good": simple uses of AI for the re-purposed non-profit Cambridgeshire.ai. Here follows a cogent use, and here is a follow-up post, When is IT bad IT.

Update: here is Part II on a personal yet telling level.

The banner's synoptic view of mass extinctions by the numbers shows not only the stark variations in extinction events (including our impending one) but also the sparsity of the information collected by none other than the National Science Museum (more on my other Medium channel here).

All this followed the hallowed principles of scientific investigation: painstakingly collect evidence on the ground, publish ensuing theories that evolve as data and science evolve, and argue various hypotheses according to all of the above plus ideas from other fields. The realm of mass extinctions has two main contending causes: mass volcanic events or an asteroid impact, either of which halted most sustainable life at the time.

This is what is called forward reasoning based on the information available at the time: think of it as a video with only a forward button, perhaps a fast-forward when summarising it all. But what if one could iteratively move the video back and forth to refine the concepts? Just as previously unseen snapshots are captured by stepping through a video frame by frame, backwards and forwards, wouldn't it be neat to enhance scientific investigations and capture missteps, as in the video metaphor?

Along comes AI that can scour masses of information at a scale not only beyond what the human brain can comprehend, but beyond what time allows: we would run out of it before we were done, even if we could grasp it all. That of course entails a mental adjustment, not only admitting that we may never grasp it all, even as teamwork over time, but also that we may take ourselves out of the equation altogether once we trust the algorithms to work properly.

On a tiny scale, I have recently appreciated how important it is in prompt engineering to phrase the question properly. PE is the way you enhance internet searches by iteratively querying datasets until you home in on the desired info. The key is that it allows forward and backward reasoning to develop outcomes as well as to check their sources (see the end discussion in this previous post).

On a far grander scale, that is exactly what a Dartmouth College grad student and his supervisor did to review the ample evidence around mass-extinction events in the geologic record. In another abstract-in-a-title paper, Volcanoes or Asteroid? AI Ends Debate Over Dinosaur Extinction Event (complete with a sneaky Google advert that's fiendishly difficult to avoid, WTF), they quip that "Free-thinking computers reverse-engineered the fossil record to identify the causes of a cataclysm" and furthermore that "Dartmouth scientists used an innovative computer model to suggest that volcanic activity, rather than an asteroid impact, was the primary cause of the mass extinction that ended the age of the dinosaurs. This groundbreaking approach opens new avenues for investigating other geological events." In the authors' words:

“Part of our motivation was to evaluate this question without a predetermined hypothesis or bias,” said Alex Cox, first author of the study and a graduate student in Dartmouth’s Department of Earth Sciences. “Most models move in a forward direction. We adapted a carbon-cycle model to run the other way, using the effect to find the cause through statistics, giving it only the bare minimum of prior. In the end, it doesn’t matter what we think or what we previously thought — the model shows us how we got to what we see in the geological record.”
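Running a model "the other way, using the effect to find the cause through statistics" is, in essence, Bayesian inversion. Here is a minimal sketch of the idea, assuming a toy linear stand-in for a carbon-cycle model and a simple Metropolis sampler; every function name and number below is illustrative, not taken from the study:

```python
import math
import random

# Toy forward model standing in for a carbon-cycle model (hypothetical):
# one "cause" parameter (say, a volcanic outgassing rate) maps to an
# observed "effect" (say, a carbon-isotope excursion in the rock record).
def forward(cause):
    return 2.0 * cause + 1.0  # assumed linear response, for illustration only

observed_effect = 9.0  # pretend measurement from the geologic record
noise_sd = 0.5         # assumed measurement uncertainty

def log_likelihood(cause):
    # Gaussian misfit between the model's prediction and the observation.
    r = (forward(cause) - observed_effect) / noise_sd
    return -0.5 * r * r

def invert(n_steps=20000, seed=42):
    # Metropolis sampler: start from an arbitrary cause and let the chain
    # wander toward causes whose predicted effects match the observation,
    # with only a flat (minimal) prior on the cause.
    random.seed(seed)
    cause, samples = 0.0, []
    for _ in range(n_steps):
        proposal = cause + random.gauss(0.0, 0.5)
        accept_p = math.exp(min(0.0, log_likelihood(proposal) - log_likelihood(cause)))
        if random.random() < accept_p:
            cause = proposal
        samples.append(cause)
    return samples

samples = invert()
burn = samples[5000:]                 # discard warm-up steps
estimate = sum(burn) / len(burn)      # posterior mean of the inferred cause
# The cause consistent with forward(c) = 9.0 is c = 4.0, so the
# posterior mean should land near 4.0.
```

The point of the sketch is the direction of inference: the forward model is never inverted algebraically; the statistics run it many times and keep the causes that best reproduce the observed effect, which is what makes the approach agnostic to any "predetermined hypothesis".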

The article details what I see as a sea change in modes of thinking and in the opportunities AI will bring about. Isn't this the best example of AI for good, combining human intelligence with computational power? It also reiterates my closing quip in my blog post (here) on this topic:

… to echo Mark Twain, “the reports of [human intelligence’s] death [vs. AI] are greatly exaggerated”.
