The Generator

The Generator covers the emerging field of generative AI, with generative AI news, critical analysis, real-world tests and experiments, expert interviews, tool reviews, culture, and more

AIPaL: AI as Part of Life

8 min read · Apr 2, 2024


AI-generated image (craiyon)

Artificial Intelligence is here to stay.

I doubt anybody doubts that statement.

One of the Big Questions is whether AI will ultimately be our pal or… not.

Let’s try to think up an analogy from our past: Electricity. At the beginning of the 20th century this form of power was rare and treated as some kind of magic, perhaps even dark magic. But slowly it began to spread until it became part of our everyday life, a given, without which we tend to be quite lost.

Unless you’re a physicist or an electrical engineer, you probably don’t know how your electricity is generated and conveyed to your toaster. But we’ve all learned a few basic facts about electricity: what kinds of devices you can use, how and where to hook them up, how much electricity costs, and perhaps some precautions, such as don’t stick your fingers into the socket, and be careful with the toaster near the bathtub (well, maybe not the toaster, but the hair dryer).

A more recent technology we might consider is the Internet, which started to gain a foothold in the late 1990s, achieving ubiquity very rapidly. Again, most of us are not computing experts, and yet all of us have had to learn a few basic facts of Internet life, related both to its boons and to its banes. There are many things we’ve learned to do with ease online, and we’ve also picked up a few precautionary attitudes (123456 is not a great password, nor is your birthday).

AI is becoming the “new electricity”, or “new Internet”, rapidly gaining a foothold in many parts of our lives (whether you know it’s there — or not).

The first order of the day is to become educated about AI, as we did with previous technologies. All of us, techies and non-techies alike, need to possess some basic facts about AI.

Consider the unfortunate attorney who merely wanted to get ahead of the curve and embrace the awesome new tech.

The lawyer for a man suing an airline in a routine personal injury suit used ChatGPT to prepare a filing, but the artificial intelligence bot delivered fake cases that the attorney then presented to the court, prompting a judge to weigh sanctions as the legal community grapples with one of the first cases of AI “hallucinations” making it to court. (Forbes, Jun 8, 2023)

Hallucinations can be nice if you’re daydreaming in your backyard with a glass of wine in hand. Not so much if the hallucinating is done by an AI and you haven’t fact-checked its output.

That’s a small bit of knowledge that everyone should possess: be wary of an AI’s output, especially a generative AI like ChatGPT.

And while a “CheckGPT” might come along and fix hallucinations, or at least alert us when they occur, that’s not the main point. The point is that we must stay vigilant and keep up to date with basic knowledge of what current AI can and cannot do.
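
To make the “CheckGPT” idea a bit more concrete, here is a purely hypothetical sketch (the tiny database and every name in it are invented for illustration; this is not a real tool or API): a guard that flags any citation a model produces that cannot be confirmed against a trusted source.

```python
# Hypothetical "CheckGPT"-style guard; KNOWN_CASES stands in for a real,
# trusted legal database. Everything here is invented for illustration.
KNOWN_CASES = {
    "Marbury v. Madison",
    "Brown v. Board of Education",
}

def unverified_citations(citations):
    """Return the citations that cannot be confirmed against the trusted source."""
    return [c for c in citations if c not in KNOWN_CASES]

# An LLM's output might mix real cases with hallucinated ones.
llm_citations = ["Marbury v. Madison", "Smith v. Imaginary Airlines"]
flagged = unverified_citations(llm_citations)
if flagged:
    print("Fact-check before filing! Unverified:", flagged)
```

No such guard existed for our unfortunate attorney; until one does, the fact-checking burden stays on us.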

Public Education about AI is a Must

The opportunities presented by AI seem boundless these days. Just scanning the titles of articles in serious (read: less prone to hype) academic journals is instructive. Here are some random clippings I’ve collected:

  • “Artificial intelligence is predicting the weather better than are standard models” (Nature, 17 November 2023)
  • “Food: use artificial intelligence to create new proteins” (Nature, 22 November 2022)
  • “Artificial intelligence (AI) applications in medical robots are bringing a new era to medicine” (Science, 13 Jul 2023)
  • “Leveraging artificial intelligence in the fight against infectious diseases” (Science, 13 Jul 2023)
  • “Machine learning predicts which rivers, streams, and wetlands the Clean Water Act regulates” (Science, 25 Jan 2024)

No doubt opportunities abound in almost any field you can imagine. Indeed, it seems you cannot be a scientist nowadays and not use AI.

Scrolling through those selfsame journals also points to risks. One editorial piece in Science (13 Jul 2023) makes some interesting points relevant to this discourse:

What does it mean to make AI systems safe, and what values and approaches must be applied to do so? Is it about “alignment,” ensuring that deployment of AI complies with some designers’ intent? Or is it solely about preventing the destruction of humanity by advanced AI?

The article concludes with the following statement:

We are making a familiar error. Faced with disorienting technological change, people instinctively turn to technologists for solutions. But the impacts of advanced AI cannot be mitigated through technical means alone; solutions that do not include broader societal insight will only compound AI’s dangers. To really be safe, society needs a sociotechnical approach to AI safety.

The risks have breached the academic dam quite forcefully.

Much to Gain, Much to Lose

(That last sample, by the way, inspired a short story of mine.)

Hey, pal

I’ve written elsewhere about a number of issues relevant to this discussion.

My main question here is: How do we make sure AI is a pal and not a mal?

First off, I’m not sure it’s possible to be 100% sure — in fact, I’m positive it’s not.

What we need is A Plan.

Photo by Brett Jordan on Unsplash

In the proverbial “best laid plan” manner, the plan will need to change. And often. AI at present feels almost like the famous Red Queen’s race, where “it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” I say almost because AI seems to require a bit more than “twice as fast”…

Indeed, maybe instead of a straightforward plan — obsolescent as it’s written, obsolete when ready to ship out — we should think of pathways to explore — and the kinds of professionals that are best suited to explore them. This manner of thinking underscores specialties we humans have attained — and some we haven’t, which we’ll need to establish anew.

Extant specialties I believe will come in handy include:

  • Computer scientists got us into this whole mess in the first place, and will need to continue exploring major issues regarding AI in general and deep learning in particular. The recent surge of generative AI poses many threats (as well as opportunities). The area of adversarial attacks on deep networks has been expanding, and we need to keep studying this facet of AI (see the sketch after this list). And plenty more.
  • Mathematicians to dig into the formal underpinnings of these newly created artificial minds.
  • Linguists will be needed since AIs now “know” language. But not quite like us.
  • Literary scholars to complement the linguists, bringing their knowledge of human literary accomplishments.
  • Psychologists will find fresh meadows at the boundaries between humans and this new kind of intelligence.
  • Physicians will, of course, use AI (just like everyone else). But, more fundamentally, when robots and humans meet (or clash), I’m sure they’ll be needed.
  • Philosophers can finally field-test many of the age-old theories, can they not? Think, for example, of epistemology — the branch concerned with knowledge, which is now in the hot seat, given the emergence of new knowledge makers.
  • Educators to construct the bridge between humans and AI.
  • Historians might find analogies with how we have adapted in the past to new technologies. Maybe we’ll be able to take advantage of past lessons — and learn from past mistakes.
  • Artists represent one of humanity’s frontiers — now being eroded by AI. They’re sure to present interesting perspectives on pathways taken by AI researchers.
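
To give one of those research directions a concrete feel, here is a minimal sketch of the classic fast gradient sign method (FGSM) adversarial attack: a tiny, deliberately crafted perturbation to an input that can flip a classifier’s prediction. This is an illustration only; the untrained model and random “image” below are stand-ins, not any particular system.

```python
# A minimal FGSM adversarial-attack sketch (Goodfellow et al., 2015).
# The model and data are random stand-ins, purely for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(x, label, epsilon=0.1):
    """Nudge x by epsilon in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), label).backward()
    x_adv = x + epsilon * x.grad.sign()  # sign-of-gradient perturbation
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "image"
label = torch.tensor([3])      # stand-in true label
x_adv = fgsm_attack(x, label)
print(model(x).argmax().item(), model(x_adv).argmax().item())  # may disagree
```

The point of the exercise: the two inputs look essentially identical to a human, yet the perturbed one can be misclassified, which is exactly the facet of AI those researchers keep probing.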

No less exciting are the new specialties that will emerge, though predicting them is perilous. For example, “prompt engineer” seems to have waxed and waned in the span of a year, as AI simply became more adept at engineering prompts.

This is where things get very speculative…

Eh, what the heck, I’m gonna speculate!

  • AI Trainers/Teachers/Ethicists can perhaps steer AIs in a good direction.
  • AI Personality Designers to create nice AIs.
  • AI Security/Safety Engineers identify and mitigate potential risks and hazards associated with AI systems, and guard against malicious uses of AI.
  • Explainability Engineers focus on making AI systems more understandable to their human users and able to explain their reasoning, thus promoting our trust in them.
  • Bias and Fairness Auditors see to it that AI is unbiased and transparent, treating people fairly and equitably.
  • AI Governance Consultants to advise organizations and policymakers on the development of governance frameworks, regulations, and standards for the responsible use of AI.
  • Robopsychology is “the study of the personalities and behavior of intelligent machines. The term was coined by Isaac Asimov…” I just couldn’t resist throwing that one in.

Frankly, now that I’ve composed these lists, I realize that quite a few people can join the party… And I’m certain to have missed both extant and potential specialties.

Having mentioned Asimov, let me end with a quote by him that seems apt: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

AI might widen this gap between knowledge and wisdom.

Or, perhaps, it might narrow the gap.

Photo by Katja Anokhina on Unsplash

Published in The Generator

Written by Moshe Sipper, Ph.D.