Boeing 737 MAX Crashes Raise Public Distrust of Autonomous Systems
On Sunday, March 10, an Ethiopian Airlines Boeing 737 MAX 8 aircraft en route to Nairobi crashed shortly after takeoff from Addis Ababa, killing all 157 people on board. It was the second fatal crash involving a Boeing 737 MAX 8 in six months. Last October, in similar circumstances, a Lion Air flight crashed minutes after takeoff from Jakarta, killing 189 passengers and crew. The latest tragedy has sparked concerns regarding the safety of the MAX 8’s AI-empowered autonomous flight systems.
Yesterday, after aviation authorities in more than 40 countries had grounded the planes, the US Federal Aviation Administration (FAA) ordered a temporary grounding of all Boeing 737 MAX aircraft operated by US airlines or over US skies, saying “new evidence collected at the site and analyzed today” had led to the decision. The order was followed by an FAA tweet: “Emergency Order effective immediately, prohibits the operation of Boeing Model 737–8 and 737–9 MAX airplanes by U.S. certificated operators.”
Investigations based on data and evidence are underway, with attention focused on the aircraft’s Maneuvering Characteristics Augmentation System (MCAS), which Boeing introduced on the MAX 8. When designing the aircraft, Boeing engineers sought to deliver a product with higher efficiency and lower fuel consumption to compete with market rivals. Their solution, according to industry publication The Air Current: “by moving the engine slightly forward and higher up and extending the nose landing gear by eight inches, Boeing eked another 14% improvement in fuel consumption out of the continually tweaked airliner.” The relocation of the engines, however, caused the aircraft’s nose to tend skyward, which could put the plane at risk of stalling. To address this issue, Boeing equipped the flight-control system with a new Angle of Attack (AOA) sensor, which in conjunction with the MCAS could automatically push the nose down if the sensor indicated a dangerous nose pitch.
The Seattle Times reports that “In the Lion Air crash that killed 189 people in Indonesia, investigators have determined that this sensor, the Angle of Attack (AOA) sensor, was feeding bad data to the jet’s flight computer, activating the system and repeatedly pushing the nose of the plane down when in fact there was no danger of a stall.” Federal records in the US show numerous complaints from pilots in recent months about a lack of transparency regarding Boeing’s software modifications to the MAX 8 autopilot, and about insufficient testing and training on the changes.
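The failure mode described above can be sketched in a few lines of Python. This is a deliberately simplified illustration of a control rule that trusts a single sensor; the function name, threshold, and trim values are invented for this sketch and do not reflect Boeing’s actual MCAS parameters or implementation.

```python
# Hypothetical sketch of a single-sensor, MCAS-style trim rule.
# All names and numbers are illustrative assumptions, not Boeing's design.

DANGEROUS_AOA_DEG = 15.0   # assumed stall-risk threshold (degrees)
NOSE_DOWN_TRIM = -2.5      # assumed nose-down trim command (degrees)

def trim_command(aoa_reading_deg: float) -> float:
    """Command nose-down trim whenever the (single) AOA sensor reports
    a dangerously high angle of attack; otherwise leave trim unchanged."""
    if aoa_reading_deg > DANGEROUS_AOA_DEG:
        return NOSE_DOWN_TRIM
    return 0.0

# A healthy reading in level flight commands no trim change,
# but a faulty sensor stuck at a high value keeps commanding
# nose-down on every control cycle:
print(trim_command(5.0))
print(trim_command(40.0))
```

The point of the sketch is the single input: because the rule has no cross-check against a second sensor or the pilot’s inputs, one bad reading is enough to trigger repeated nose-down commands, which is the behavior the Lion Air investigators described.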
The tragedies have raised public concerns about the safety design of the aircraft’s autonomous systems, while the 737 MAX’s autonomous flying capabilities have quickly shifted from a selling point to a perceived threat within the industry. A wave of fear regarding the autonomous system has hit other industries, including autonomous driving. The Guardian’s report on the Ethiopian Airlines crash warned, “Autonomy, however, can bring problems. It is notable that insurers considering driverless cars worry most about the period when highly autonomous vehicles will coexist with human drivers, the uncertain interface between human and artificial intelligence.”
The Atlantic national correspondent James Fallows wrote, “The concern about automation in airplanes involves failure of a simpler sort: that somehow the computerized systems will misidentify where the airplane is, or what is happening to it, or what the safest maneuver would be, and thus ‘intelligently’ take the aircraft on a path straight toward doom.”
This week’s aviation disaster has alerted us all to urgent safety issues in AI-empowered autonomous systems. In the 2016 paper Concrete Problems in AI Safety, researchers from Google Brain, Stanford University, UC Berkeley, and OpenAI argue: “The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful. While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules, we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm.”
Journalist: Fangyu Cai | Editor: Michael Sarazen