AI Needs Our Eye

Let’s not forget how to think for ourselves

Steven Frank
Brain Labs
6 min read · Jun 4, 2024


Odilon Redon, Eye-Balloon (source: Wikimedia Commons)

We humans glance nervously over our shoulders as artificial intelligence (AI) gains on our prized intellectual accomplishments. AI chatbots can pass the bar and medical licensing exams, write articles (not this one!), generate images to order, and summarize texts from meeting notes to scientific papers. Doomsayers fear not only the loss of jobs and human purpose but the destruction of humanity itself. Once our machines learn self-preservation as a goal, maybe they’ll escape our control altogether and prevent us from turning them off.

Don’t worry, say the doom doubters, today’s AI isn’t really thinking — powerful chatbots are just trained to create human-like responses to your queries by guessing the next word based on the query itself and the answer it progressively builds. Tomorrow’s AI may do the job better but won’t be fundamentally different. Even if it can read your mind, it will do no more than process your input into a responsive output.
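
For the curious, here is what “guessing the next word” looks like at its most stripped-down. This toy sketch counts which word tends to follow which in a tiny made-up corpus, then extends a prompt one word at a time; a real chatbot replaces the counting with a neural network trained on billions of tokens, but the generate-append-repeat loop is the same in spirit. The corpus and the greedy word choice are illustrative assumptions, not how any production model works.

```python
# Toy illustration of "guess the next word": a bigram counter standing in
# for an LLM. Score candidate continuations, pick one, append it, repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word.
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Greedily extend a prompt, always choosing the most frequent follower."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_counts.get(words[-1])
        if not candidates:
            break  # nothing ever followed this word in the tiny corpus
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # a fluent-looking but mindless continuation
```

The point of the toy is the mechanism: at no step does the program understand cats or sofas; it only knows what tended to come next.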

So which is it — are we playing with fire or is there nothing to see here? Maybe both. Whether or not it’s even possible to create HAL 9000, the sentient, murderous AI in 2001: A Space Odyssey, let’s make the reasonable assumption that, valuing survival, humans won’t engineer their own extinction via AI. But even if we keep ourselves safe from our computers, are we safe from ourselves?

The doomsayers are right to caution against underestimating the furious pace at which AI capabilities are expanding. Yes, maybe the principles underlying large language models (LLMs) are simple, but human thought is also simple when reduced to the biological essentials of neuron signaling. Cognition arises from vast numbers of neurons intercommunicating over many tiers of organization. Similarly, LLMs operate on billions of text building blocks (called “tokens”) organized in a high-dimensional space to capture the meaning and relationships among words. For both humans and LLMs, this higher-level organization is plastic and evolves through learning. Trained on a significant slice of accumulated human knowledge, LLM-powered chatbots exhibit, or convincingly simulate, human-like reasoning abilities and contextual awareness. If they don’t actually reason, they make a darn good show of it. Perhaps most unnerving of all, they can evince creativity — the ability to invent new ideas rather than merely synthesize existing ones, which is arguably a touchstone of intelligence. When tested on devising alternative uses for everyday objects, a common measure of creativity, chatbots outperform most humans. An AI recently found a better solution than (human) mathematicians to a long-standing problem in computational geometry.
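
A rough picture of that “high-dimensional space”: each token gets a vector of numbers, and related words end up pointing in similar directions. The three-dimensional vectors below are invented purely for illustration; real models learn embeddings with hundreds or thousands of dimensions from data rather than having them written by hand.

```python
# Toy word embeddings: hand-written 3-D vectors standing in for the learned,
# high-dimensional vectors an LLM uses. Cosine similarity measures whether
# two vectors point in similar directions, i.e. whether the words are related.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.8, 0.9, 0.1]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1: related
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower: unrelated
```

The plasticity mentioned above corresponds to these vectors being adjusted, again and again, as the model trains.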

But having the answer to every question doesn’t make you smart, just handy. The problems that beset humans, from conflict to disease to habitat destruction, have no ready answers. Hard problems often go unrecognized until it’s too late to prevent consequential harm. AI learns only what it’s told to learn; it can’t enter the problem-rich domain of humans. Toddlers learn what to learn as they make their way in the bewildering world of hazards, peers and strange adults. That makes toddlers smarter and more dangerous than AI. ChatGPT would never throw a tantrum or soil the rug for fun.

AI is best suited to handling a specific type of task and — LLMs aside — is usually developed for one. Every AI owes its existence to a class of problems already recognized as hard by its human creators. “Deep learning” uses multiple layers of brain-like organization to detect patterns and extract features from a complex input. It has given rise to a diverse range of applications that recognize speech, translate it among languages, find cats in photographs, and diagnose disease. All share a common ancestral paradigm — the “perceptron” — which arose in the late 1950s to classify objects. In those early days and for decades to come, no one would confuse the perceptron’s learning ability with an understanding of the objects it could classify. Today’s neural networks, fantastically more complex successors to the single-neuron perceptron, can detect very subtle patterns — subtle enough to elude human recognition and enable the computer to rival or outperform human experts. AI gives off a “ghost in the machine” vibe as it performs tasks heretofore entrusted to people with specialized skills and knowledge.
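
Since the perceptron comes up, here is roughly what that ancestral algorithm amounts to: a weighted sum, a threshold, and a rule for nudging the weights after each mistake. The data points and learning rate below are made up for illustration; the sketch is meant to show how modest the starting point was, not to describe any modern system.

```python
# A single-neuron perceptron, the 1950s ancestor of today's deep networks.
# It learns a weighted sum plus threshold that separates two classes.
import numpy as np

# Made-up, linearly separable data: label 1 roughly when x + y is large.
X = np.array([[0.1, 0.2], [0.4, 0.3], [0.9, 0.8], [0.7, 0.9]])
y = np.array([0, 0, 1, 1])

weights = np.zeros(2)
bias = 0.0

for _ in range(20):                       # a few passes over the data
    for xi, target in zip(X, y):
        prediction = int(weights @ xi + bias > 0)
        error = target - prediction       # -1, 0, or +1
        weights += 0.1 * error * xi       # nudge weights toward the right answer
        bias += 0.1 * error

print([int(weights @ xi + bias > 0) for xi in X])  # matches y: [0, 0, 1, 1]
```

Stack millions of such units in layers, train them on enormous datasets, and you get the pattern detectors described above; what you do not get, at any scale, is an account of what the patterns mean.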

The problem is that AI is not really performing as an expert; it just recognizes patterns at an expert’s level. My own work with AI involves analyzing medical images for signs of cancer. As a sideline, my art-historian wife and I use AI to address questions of authenticity and attribution in paintings and drawings. Both endeavors use similar AI architectures and image preprocessing strategies. In neither case does the AI “know” anything about the relevant domain. More than that, we don’t know, and can’t know, how it distinguishes malignant from benign, Rembrandt from a forger. We can identify the image regions on which the AI bases its judgment but not the basis for the judgment — at least not yet. Efforts to develop “explainable” AI have foundered on the increasing complexity of AI models.
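
One standard way to identify the image regions an AI bases its judgment on is occlusion sensitivity: cover part of the image, re-run the model, and see how much the score drops. The sketch below uses a hypothetical score_image() stand-in for a trained classifier; it is an illustration of the general technique, not of our own system, and it reveals where a model is looking, never why.

```python
# Occlusion sensitivity: a model-agnostic way to highlight which image
# regions drive a classifier's score. It shows *where* the model looks,
# not *why*; that is exactly the gap described above.
import numpy as np

def score_image(image: np.ndarray) -> float:
    """Hypothetical stand-in for a trained classifier's confidence score."""
    return float(image[8:16, 8:16].mean())  # toy "model": only the center matters

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Slide a gray patch across the image; record how much the score drops."""
    baseline = score_image(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5   # mask one region
            heat[i // patch, j // patch] = baseline - score_image(occluded)
    return heat  # large values mark regions the score depends on

image = np.random.rand(24, 24)              # fake grayscale image
print(np.round(occlusion_map(image), 3))
```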

AI is very good at recognizing patterns we’d never see, but, knowing nothing about the world or its narrow specialty within it, AI isn’t smart about what it’s looking at. It’s too dumb (in both senses) to provide the basis for its decisions. As a result, our trust in AI derives almost entirely from its record of success. Once that record reaches a high enough level, the temptation to rely blindly on its judgment may be irresistible. That reliance can erode the human’s own expertise (“de-skilling”) and harden into overreliance on AI support (“automation complacency”). In healthcare, clinicians faced with a questionable AI output may lose the clinical knowledge necessary to correct it or, because of automation complacency, may not even notice the error. No AI system is perfect. The possibility of error will always exist, and with it the possibility of overreliance.

Ex-Trump lawyer Michael Cohen, seeking an early end to his supervised release, identified cases supporting his position and sent them to his lawyers, who included them in the brief they filed. The judge blew his stack when he discovered the cases did not exist. Cohen had found them using Google’s Bard chatbot, which made them up. So two tiers of lawyers regurgitated the output of a notoriously unreliable AI without discharging that most basic of lawyerly responsibilities — checking the facts. We have to stay smarter than our computers and not let them dumb us down or make us lazy. Healthcare AI systems don’t make things up but can make mistakes; that means we should use them for decision support, not decision-making. AI isn’t foolproof, so art dealers and museums will still have to research provenance and use more mundane scientific tools, such as carbon dating and pigment analysis, before turning to it. Fears of massive job loss to AI are likely exaggerated (with caveats); used properly, AI is more likely to bring greater productivity and fewer human oversights. Particularly where health and safety are on the line, people will — and must — stay in the loop.
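
What “decision support, not decision-making” can look like in practice is a simple triage rule: the model only ever proposes, and anything uncertain is flagged for priority human review. The confidence threshold and field names below are illustrative assumptions, not a description of any deployed clinical system.

```python
# Sketch of AI as decision support: the model proposes, a clinician disposes.
# The threshold and labels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Finding:
    case_id: str
    ai_label: str      # e.g. "suspicious" or "benign"
    confidence: float  # model's self-reported confidence, 0..1

def route(finding: Finding, review_threshold: float = 0.9) -> str:
    """Never auto-finalize: low-confidence cases get priority human review;
    even high-confidence ones are only suggestions pending sign-off."""
    if finding.confidence < review_threshold:
        return f"{finding.case_id}: flag for priority human review"
    return f"{finding.case_id}: suggest '{finding.ai_label}' for clinician confirmation"

print(route(Finding("case-001", "suspicious", 0.97)))
print(route(Finding("case-002", "benign", 0.62)))
```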

If we don’t really understand AI — that is, if we fully apprehend its low-level configuration but can’t know the basis for a particular high-level output — we must not abandon the human faculties, grounded in experience and domain understanding, that let us think for ourselves and know better. Of course, if we don’t understand our marvelously complex chatbots, how can we be sure that they don’t, in some sense, think a bit like we do? My earlier quip that AI won’t throw a tantrum may be overstated. In another example of chatbots gone rogue, a lawyer caught a chatbot mischaracterizing the facts of a case he’d worked on himself. The lawyer told a reporter: “I said to it, ‘You’re wrong. I argued this case.’ And the AI said, ‘You can sit there and brag about the cases you worked on, Pablo, but I’m right and here’s proof.’ And then it gave a URL to nothing.” Sounding like an angry parent surveying the wreckage, the lawyer added, “It’s a little sociopath.”


Steven Frank is the founder of MedAEye Technologies, which develops AI systems that help physicians spot disease in medical images.