In response to “AI Revolution 101” by Pawel Sysiak

“ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programed ANI can cause an isolated catastrophe like”²¹ a plane crash, a nuclear power plant malfunction, or “a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected) … But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI.”²²

Our ability to evaluate and validate Artificial Narrow Intelligence is already slipping — with fatal results in aviation, automotive engineering, and medicine. It’s far too early to declare ANI competent to take over tasks best performed under constant, careful and skilled human supervision.

Emirates Airlines, the world’s largest operator of arguably the world’s most highly automated and competently built passenger aircraft, the Boeing 777, just lost its first aircraft. It was a Boeing 777 which, on landing, made a sharp turn WHILE performing an autopilot landing. One of the 777’s huge engines scraped the ground, destroyed itself, caught fire, and set the wing alight; as the plane came to rest, the entire aircraft caught fire. Fortunately, all the passengers and crew were able to deplane through the emergency exits without a single death.

What happened?

Aviation Week & Space Technology carried this as breaking news on its website, and several readers chimed in. We were all like the blind men in the Indian fable, each touching a different part of the elephant and coming up with a different guess about what we had found. I’m a systems analyst with only general clues to failure modes in passenger aircraft, so I counselled that we wait until the flight recorders were recovered, so we could find out what the flight crew were saying and doing and possibly get telemetry on the aircraft’s control inputs.

Another reader, a former military pilot now flying passenger aircraft, had a different perspective: too many pilots are depending on the autopilot for landings.

In a more tragic event last year, a new Airbus A400M Atlas military transport crashed during testing just before it was to become the third such aircraft delivered to the Turkish Air Force. Four of the crew were killed; the remaining two were seriously injured.

According to the Wikipedia article on the incident,

“On 3 June 2015 Airbus announced that investigators had confirmed “that engines one, two and three experienced power frozen [sic] after lift-off and did not respond to the crew’s attempts to control the power setting in the normal way. Preliminary analyses have shown that all other aircraft systems performed normally and did not identify any other abnormalities throughout the flight.” The key scenario being examined by investigators is that the torque calibration parameter data was accidentally wiped on three engines as the engine software was being installed at Airbus facilities, which would prevent the FADECs (my note: FADEC, or Full Authority Digital Engine Control, is the highly automated ANI that controls each engine) from operating. Under the A400M’s design, the first warning pilots would receive of the engine data problem would be when the plane was 120 meters (400 feet) in the air; on the ground, there is no cockpit alert.”
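
The design point is worth making concrete. The scenario the investigators describe would be defeated by an ordinary ground check that refuses dispatch when required calibration data is missing, rather than letting the first warning appear at 400 feet. What follows is a minimal sketch in Python of such a check; the parameter name (torque_calibration) and the data layout are my own invention for illustration, not anything from Airbus’s or the engine maker’s actual FADEC software.

# Hypothetical pre-flight configuration check: refuse dispatch if any engine
# controller is missing required calibration data, instead of letting the
# first cockpit warning appear only after lift-off. All names here are
# invented for this illustration.

REQUIRED_PARAMETERS = ("torque_calibration",)

def preflight_config_check(engines):
    """Return a list of problems; an empty list means the check passed."""
    problems = []
    for engine_id, config in engines.items():
        for name in REQUIRED_PARAMETERS:
            if config.get(name) is None:
                problems.append(f"engine {engine_id}: missing {name}")
    return problems

# Example: three of four engines lost their torque calibration data
# during a software installation.
engines = {
    1: {"torque_calibration": None},
    2: {"torque_calibration": None},
    3: {"torque_calibration": None},
    4: {"torque_calibration": 0.97},
}

problems = preflight_config_check(engines)
if problems:
    # Fail safe on the ground: alert the crew and block dispatch.
    raise SystemExit("DO NOT DISPATCH:\n" + "\n".join(problems))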

ANI for aviation isn’t perfect. It arguably isn’t as good as the very best pilots, and as more and more pilots rely on autopilot software for routine flying tasks, some experienced pilots worry that the hand-flying skills needed to backstop autopilot failures are eroding.

Of course, no autopilot is proof against negligent pilot operation, such as the 2009 incident in which Northwest Airlines Flight 188 overshot its destination airport by over 100 miles because the pilot and co-pilot were reportedly distracted by other tasks on laptop computers they’d brought into the cockpit. When the pilot and co-pilot failed to respond to the worried control tower, to other pilots, and to chiming radio messages from Northwest Airlines, NORAD was ready to send air defense fighters out to intercept the aircraft.

In the same way, cruise control isn’t as good as the very best automobile drivers. Even advanced, image-guided expert systems like Tesla’s “Autopilot” can’t yet be considered perfect ANI for driving, and even they can fail, because they were never intended to replace an attentive driver at the controls of the car.

A Tesla Model S sedan was suddenly and lethally turned into a convertible as it zoomed under a semi-trailer that had turned across the road in front of it. According to Tesla, the car’s ANI for driving on limited-access highways “saw” the white body of the trailer it was closing on at 65 miles per hour as part of the white sky, not as other traffic, and did not slow or brake to avoid it. Instead, the car passed under the trailer, and everything above its body line, the windshield, windows, and roof, along with the driver’s head, was sheared off.

Let me add quickly that Tesla hasn’t been huckstering its driving ANI as perfect. Its version of an “autopilot” for highway driving activates only after the driver acknowledges, on the car’s touch-screen controls, that the software is in beta (late-development) status and that the driver remains responsible for his or her own safety, the safety of the car’s passengers, and that of the other drivers on the road around the car.

According to an Associated Press report, the semi-truck’s driver stopped his rig and went to the grisly scene where the car had finally come to rest after passing under the trailer and hitting a telephone pole. He heard a Harry Potter movie playing where the driver could have seen it, had his head not by that time been in the car’s back seat. While the National Transportation Safety Board hasn’t, as far as I know, issued findings in this case, the signs point to an off-task driver unaware of what his car was doing. ANI was oversold to this man, certainly not by Tesla, but perhaps by his own faulty expectations of what the system could do.

Another special case of ANI failure that I, as a clinical data analyst, thought we’d be seeing fewer of as sophistication grew in its use is radiation therapy. Reports of a series of fatal radiation overdoses delivered in the 1980s by the automated Therac-25 linear accelerator had, I thought, raised consciousness among those we trust to deliver radiotherapy, and among the manufacturers of these devices, enough to eliminate such mishaps. Surely that horrible episode had led the designers of ANI in radiotherapy to make such mishaps impossible, just as the nation’s nuclear defense systems are designed to “fail safe” and not start a nuclear war by accident.

But there have been a disturbing number of recent deaths owing to automated radiation therapy, in which a specialized form of Artificial Narrow Intelligence is supposed to deliver a relatively safe dose of radiation precisely to a cancerous lesion and instead delivers a harmful or even fatal radiation dose to the patient.

Two incidents in which computer-controlled radiation beams dealt fatal injuries to cancer patients in New York were uncovered by Walt Bogdanich as part of his New York Times exposé on radiation injuries to patients from complex computer-controlled radiotherapy systems.

Bogdanich reports:

“Linear accelerators and treatment planning are enormously more complex than 20 years ago,” said Dr. Howard I. Amols, chief of clinical physics at Memorial Sloan-Kettering Cancer Center in New York. But hospitals, he said, are often too trusting of the new computer systems and software, relying on them as if they had been tested over time, when in fact they have not.

In one of the two horrifying incidents Bogdanich cites, ANI was trusted by the people responsible for the patient’s safety to a degree it should never have been:

“The investigation into what happened to Mr. Jerome-Parks quickly turned to the Varian software that powered the linear accelerator.
The software required that three essential programming instructions be saved in sequence: first, the quantity or dose of radiation in the beam; then a digital image of the treatment area; and finally, instructions that guide the multileaf collimator.
When the computer kept crashing, Ms. Kalach, the medical physicist, did not realize that her instructions for the collimator had not been saved, state records show. She proceeded as though the problem had been fixed….”

The report continues,

“Even so, there were still opportunities to catch the mistake.
It was customary — though not mandatory — that the physicist would run a test before the first treatment to make sure that the computer had been programmed correctly. Yet that was not done until after the third overdose.
State officials said they were told that the hospital waited so long to run the test because it was experiencing “a staffing shortage as training was being provided for the medical physicists,” according to a confidential internal state memorandum on the accident.
There was still one final chance to intervene before the overdose. All the therapists had to do was watch the computer screen — it showed that the collimator was open. But they were not watching the screen, and in fact hospital rules included no specific instructions that they do so. Instead, their eyes were fastened on Mr. Jerome-Parks, out of concern that he might vomit into the mask that stabilized his head. Earlier, he had been given a drug known to produce nausea, to protect his salivary glands.”

The result for Mr. Jerome-Parks was that neither the Varian radiation therapy software nor its operators prevented a lethal dose of radiation to his brain and nervous system. In another incident with the same software (before it was modified to “fail safe” and refuse to irradiate a patient unless the collimator had in fact been restricted to deliver radiation only to the intended treatment area), the New York Times report states that “therapists tried to save a file on Varian equipment when ‘the system’s computer screen froze.’
The hospital went ahead and radiated the patient, only to discover later that the multileaf collimator had been wide open. The patient received nearly six times her prescribed dose. In this case, the overdose was caught after one treatment and the patient was not injured.”
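
The fix mentioned above, refusing to irradiate unless the collimator instructions have actually been saved and applied, is a classic software interlock. Here is a minimal sketch in Python of what such a fail-safe check looks like; the class and field names (TreatmentPlan, collimator_saved, and so on) are invented for illustration and have nothing to do with Varian’s actual software.

# Hypothetical fail-safe interlock for a treatment console: the beam cannot
# be delivered unless every element of the plan (dose, image, collimator
# leaf positions) has been saved AND the hardware matches the saved plan.
# All names here are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TreatmentPlan:
    dose_saved: bool
    image_saved: bool
    collimator_saved: bool
    collimator_leaves: List[str] = field(default_factory=list)

class InterlockError(Exception):
    pass

def deliver_beam(plan, actual_leaves):
    # Refuse to treat if any part of the plan failed to save, for example
    # because the console crashed partway through the save sequence.
    if not (plan.dose_saved and plan.image_saved and plan.collimator_saved):
        raise InterlockError("Treatment plan incomplete; beam blocked (fail safe).")
    # Refuse to treat if the collimator is not actually in the planned
    # position; a wide-open collimator means an unshaped, dangerous beam.
    if actual_leaves != plan.collimator_leaves:
        raise InterlockError("Collimator state does not match the saved plan; beam blocked.")
    print("Interlocks satisfied; beam delivery permitted.")

# The crash scenario described above: the collimator instructions were never
# saved, so a fail-safe console would refuse to treat until a human resolved it.
plan = TreatmentPlan(dose_saved=True, image_saved=True, collimator_saved=False)
try:
    deliver_beam(plan, actual_leaves=["wide open"])
except InterlockError as err:
    print(err)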

Safety-critical Artificial Narrow Intelligence is failing deadly, not failing safe, because we’re already being seduced into accepting its infallibility, either by assurances from manufacturers who aren’t aware of all the failure modes of their merchandise, or by our own complacency as users, so willing to abandon responsibility for the tasks ANI is only supposed to help us perform that the result is disaster.
