What Do Tragedies Like the Boeing MAX 8 Crashes Do to Perception of Autonomy?

Wisner Baum
9 min read · Jun 7, 2019


Two recent crashes of Boeing 737 MAX planes that appear to involve automated systems have once again brought autonomous control to the forefront of the public's attention. Studies indicate Americans already have difficulty accepting autonomy in vehicles and planes, and the recent crashes will do nothing to assuage concerns about whether autonomous systems are safe, especially now that reports have surfaced suggesting the organizations tasked with ensuring safety may not be entirely transparent about their procedures.

While the Federal Aviation Administration (FAA) is under fire for how the Boeing 737 MAX received its certification, the National Highway Traffic Safety Administration (NHTSA) faces criticism for announcing that Tesla cars are safer than its own data suggests. Concerns about regulatory oversight of autonomous transportation, combined with high-profile accidents, have the general public wondering how much faith to put in autonomous technology and the organizations overseeing it.

737 MAX Plane Crashes Bring Autonomy to the Forefront

Two tragic Boeing 737 MAX 8 crashes within five months have highlighted concerns about automated flight systems. Though the investigations are ongoing, early reports indicate the two planes crashed under similar circumstances. In each case, the plane struggled to maintain a stable vertical speed shortly after takeoff, and it crashed after the pilots, unable to regain control, requested permission to return to the airport.

The 737 MAX 8 has a new system, the Maneuvering Characteristics Augmentation System (MCAS), developed because the MAX's larger engines sit in a different position on the wings than on earlier 737s. That change affects the plane's handling and increases the risk of a stall, because the nose tends to pitch up. To counter this, the MAX 8 relies on an angle-of-attack sensor, which tells the MCAS how steeply the nose is pitched relative to the oncoming air. If the plane appears to be at risk of stalling, the MCAS automatically pushes the nose down.
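
To make the logic concrete, here is a minimal sketch of the kind of decision described above: automation that commands nose-down trim whenever a single angle-of-attack reading crosses a threshold. The function name, threshold, and commands are hypothetical illustrations, not Boeing's actual implementation, which is far more complex.

```python
STALL_AOA_THRESHOLD_DEG = 15.0  # hypothetical stall-warning angle, for illustration only

def mcas_like_command(aoa_sensor_deg: float) -> str:
    """Return a trim command based on a single angle-of-attack reading."""
    if aoa_sensor_deg > STALL_AOA_THRESHOLD_DEG:
        return "trim nose down"  # the system believes a stall is imminent
    return "no action"

# A faulty sensor reading 25 degrees while the plane is actually level
# would still trigger nose-down commands.
print(mcas_like_command(25.0))
```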

Preliminary reports from the October 2018 Lion Air crash in Indonesia suggest the angle-of-attack sensor misread the plane as being at risk of a stall, causing the MCAS to push the plane's nose down unnecessarily. The pilot tried to stop this process by pulling the nose up but was battling the MCAS and unable to rectify the situation in time. Information from the March 2019 Ethiopian Airlines crash suggests the MAX 8 in that crash suffered a similar failure.

When Autonomous Technology Fails

The MCAS is part of the MAX's automated flight-control software. Pilots say they received no warning about the unfamiliar technology, nor were they trained in how to respond to it. When the angle-of-attack sensor malfunctioned, the system repeatedly tried to prevent a stall while the pilots struggled to regain control of their planes and to understand what was going wrong.

Although the MAX has two angle-of-attack sensors, the MCAS relied on input from only one. If that sensor was faulty, as reports indicate, the MCAS activated with no way to check whether the data it received was valid. Cross-checking both sensors might have prevented the problem.
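
A sketch of that kind of cross-check follows: compare both angle-of-attack sensors and stand the automation down when they disagree, rather than acting on a single, possibly faulty, reading. The tolerance value and names are hypothetical, chosen only to illustrate the idea.

```python
DISAGREEMENT_TOLERANCE_DEG = 5.0  # hypothetical tolerance, for illustration only

def validated_aoa(left_sensor_deg: float, right_sensor_deg: float):
    """Return an angle-of-attack value only if both sensors roughly agree."""
    if abs(left_sensor_deg - right_sensor_deg) > DISAGREEMENT_TOLERANCE_DEG:
        return None  # sensors disagree: disengage the automation and alert the crew
    return (left_sensor_deg + right_sensor_deg) / 2.0

# One healthy sensor (2 degrees) and one faulty sensor (25 degrees) are
# flagged as a disagreement instead of triggering nose-down trim.
print(validated_aoa(2.0, 25.0))  # None -> automation stands down
```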

Boeing's decisions, such as relying on data from only one angle-of-attack sensor and not alerting pilots to the software changes, highlight how much the public depends on the companies that develop and deploy autonomous technology to make decisions that keep them safe. While the MCAS was designed to prevent accidents, Boeing classified an MCAS failure as "hazardous" at worst, a category that assumes no massive casualties, certainly nothing on the scale of the Lion Air and Ethiopian Airlines crashes. Experts say that whether the failure was classified as "hazardous" or "catastrophic," the plane should still have used information from two angle-of-attack sensors, not one.

One of Boeing's priorities in developing the MAX 8 was to spare airlines the expense of retraining their pilots, so the company tried to make a plane that handled similarly enough to existing 737s that the FAA would not require retraining. The FAA agreed to allow the new technology without pilot retraining. The agency also reportedly allowed Boeing engineers to conduct the safety analysis of the MCAS used for certification, and that analysis was severely flawed and downplayed problems with the system.

Those decisions by the FAA and Boeing may have cost human lives.

Blaming the Pilots

Boeing has argued that pilots should have followed a checklist for dealing with the malfunction and doing so would have solved the problem. But that ignores what happens when humans face a technology they don’t fully understand or aren’t prepared to deal with. Consider just how confused people get with new technology in their cars or on their cell phones, and how long it takes to adapt to that technology. Then imagine having to deal with the fallout of that technology — only when it malfunctions and only in the air with hundreds of lives at stake — when you had no idea the technology even existed.

Public Perception of Automated Planes

Back in 2017, a UBS survey of 8,000 air travelers found that 54 percent would not board a fully automated plane, even if ticket prices were lowered because of automation. The Lion Air and Ethiopian Airlines crashes are not likely to improve public perception of aircraft automation, especially given the apparent massive failures by both Boeing and the FAA to protect passengers.

Issues with Autonomous Cars Also Impact Perception

It’s not just the airline industry that could face a struggle with public perception. Autonomous cars also face pushback, especially given recent fatal crashes involving a Tesla and an Uber vehicle, and a report that the National Highway Traffic Safety Administration (NHTSA) used flawed data when announcing that Tesla’s Autopilot feature was safe.

In 2017, NHTSA announced that Autosteer, a core feature of Tesla's Autopilot system, reduced vehicle crash rates by as much as 40 percent, a sign that autonomy in vehicles would improve motorist safety. In 2019, however, Quality Control Systems published its own analysis of the data NHTSA used and found that the data was severely flawed.

NHTSA looked at the number of airbag deployments in Tesla vehicles before and after Autosteer was installed. That data was used to estimate how many crashes occurred per million miles driven before and after Autosteer installation. What the agency didn't report, however, was that only a small portion of the cars included in the study had accurate mileage records from before Autosteer was installed.

This omission means the reported number of miles driven before Autosteer installation is artificially low while the number of airbag deployments is not, which inflates the pre-Autosteer crash rate per million miles. Post-Autosteer numbers are more accurate, which makes the apparent reduction in crashes after Autosteer look even more dramatic. While 43,781 cars were included in the study, only 5,714 had accurate mileage from before Autosteer was installed, making it nearly impossible to draw reliable safety conclusions from the data.
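
A small worked example shows how missing pre-Autosteer mileage skews the comparison. The numbers below are hypothetical, chosen only to illustrate the arithmetic; they are not the actual NHTSA or Tesla figures.

```python
def crashes_per_million_miles(crashes, miles):
    """Crash rate per million vehicle miles."""
    return crashes / (miles / 1_000_000)

# Suppose a fleet actually drove 100 million miles before Autosteer and
# experienced 100 airbag deployments (hypothetical figures).
true_pre_rate = crashes_per_million_miles(100, 100_000_000)      # 1.0

# If most of that pre-Autosteer mileage is missing from the data set and
# only 15 million miles are counted, but all 100 deployments still are,
# the "before" rate looks several times worse than it really was.
reported_pre_rate = crashes_per_million_miles(100, 15_000_000)   # ~6.7

# Comparing this inflated "before" rate with a more completely recorded
# "after" rate exaggerates any apparent improvement from Autosteer.
print(true_pre_rate, reported_pre_rate)
```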

Further eroding public trust, it took a Freedom of Information Act lawsuit before NHTSA would release the data it used to conclude that Tesla vehicles with Autosteer were so safe.

Public Reluctant to Use Autonomous Cars

A 2017 survey by Pew Research Center suggested that although almost two-thirds of Americans think autonomous vehicles will be widespread in the next 50 years, 56 percent say they would not want to ride in one. Among the reasons for not riding in an autonomous car were lack of trust, safety concerns, and technology immaturity. Only 39 percent of those surveyed believed autonomous cars would decrease automobile deaths.

The 2018 Tesla crash in which a driver was killed when his car, operating on Autopilot, struck a concrete barrier, and the Uber crash that killed a pedestrian, will not help the public trust autonomous vehicles.

Much like the Boeing crashes, the Uber crash may have been caused by decisions the company made, including disabling the car's automatic emergency braking system, which might have prevented the accident.

Some Too Trusting of Autonomous Vehicles

There are also people so trusting of autonomous vehicles that they will put family members' lives in danger to test the features. That's what one man did when he tested his Tesla's automatic emergency braking by driving directly at his wife. In a move no one would ever recommend, he drove toward her at up to 30 km per hour, and the car stopped only just before hitting her. On the second pass, the emergency braking system did not activate and he had to step on the brakes himself to avoid a collision.

No one should ever purposely put another person’s health or safety at risk testing out the autonomous features in a vehicle.

When Technology that Could Save Lives Isn’t Implemented

Of course, there is a flip side: failing to automate quickly enough, which is precisely what has happened repeatedly in train tragedies. Railroad companies have been given multiple extensions to implement Positive Train Control (PTC), an automated system that takes over a train when the engineer doesn't respond to alerts. PTC can slow a speeding train or stop a train from moving down the wrong track, two scenarios that can otherwise end in mass casualties.
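
As a rough illustration of the kind of enforcement PTC provides, the sketch below checks track authority and speed and applies the brakes when the engineer has not responded. It is a simplified, hypothetical example, not an actual PTC implementation; real systems rely on track databases, wayside signals, braking curves, and certified hardware.

```python
def ptc_check(speed_mph, speed_limit_mph, current_track, authorized_track,
              engineer_acknowledged_alert):
    """Return the enforcement action a PTC-style system might take."""
    if current_track != authorized_track:
        return "apply emergency brakes"   # prevent movement down the wrong track
    if speed_mph > speed_limit_mph and not engineer_acknowledged_alert:
        return "apply penalty brakes"     # slow a speeding train
    return "no action"

# A train doing 80 mph in a 30 mph zone with no response from the engineer
# would be braked automatically.
print(ptc_check(80, 30, "track 1", "track 1", engineer_acknowledged_alert=False))
```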

Over the span of 50 years, the National Transportation Safety Board (NTSB) has identified multiple crashes that PTC may have prevented, crashes in which 303 lives were lost and 6,800 people were injured.

In 2008, Congress passed a law requiring the rail industry to install Positive Train Control on its trains and tracks, but the industry pushed back, arguing that the technology was expensive and complicated to install. The original deadline of the end of 2015 was extended by Congress to the end of 2018. Even by the end of 2018, however, most railroads had not fully implemented the technology.

Railroads were allowed to request an additional extension, though they had to meet certain requirements for it to be approved. Only four rail systems fully met the deadline; the rest have asked for or already received extensions.

Included in the list of preventable accidents was the 2018 crash in which an Amtrak passenger train was diverted onto the wrong track and collided with a freight train. Two Amtrak employees died, and 116 people were injured. The NTSB said Positive Train Control could have prevented that crash.

In December 2017, an Amtrak train derailed during its first trip carrying paying passengers on a new route from Seattle to Portland. The train was traveling approximately 50 mph over the speed limit when it derailed, killing three people and injuring more than 100, including motorists on the road beneath the track. Although millions of dollars had been spent improving the track, Positive Train Control was not active.

The Bottom Line

The issue is not just one of autonomous technology failing, although that is a massive concern. The issue is also how companies handle their new technology: how much safety testing they put it through and how they train the people who will use it. The general public relies on companies like Boeing and Tesla to adequately test their products and provide accurate data about their safety.

The public also relies on regulatory agencies to do their job of ensuring that autonomous technology is both safe and available. If they fail, lives can be lost, whether because life-saving technology is delayed or because technology is rushed into use without adequate testing. Finally, the agencies responsible for public safety must be transparent, both about the work they do and the data they receive, so the public can make informed decisions about which technology to use.

Anything less is a breach of the public’s trust and unnecessarily puts lives at risk.

Sources:

https://www.cnn.com/2019/02/04/politics/ntsb-positive-train-control/index.html

https://www.cnn.com/2018/02/04/us/amtrak-south-carolina-crash/index.html

https://www.cnn.com/2017/12/19/us/amtrak-derailment-washington/index.html

https://syncedreview.com/2019/03/14/boeing-737-max-crashes-raise-public-distrust-of-autonomous-systems/

https://www.seattletimes.com/business/boeing-aerospace/failed-certification-faa-missed-safety-issues-in-the-737-max-system-implicated-in-the-lion-air-crash/

http://fortune.com/2017/08/07/pilotless-planes-survey/

http://quality-control.us/nhtsa_autopilot_safety_claims.html

http://www.safetyresearch.net/Library/NHTSA_Autosteer_Safety_Claim.pdf

https://qz.com/1095562/most-americans-think-self-driving-cars-are-inevitable-but-fewer-than-half-would-ride-in-one/

https://driving.ca/tesla/auto-news/news/this-tesla-owner-tested-his-cars-autopilot-auto-braking-on-his-soon-to-be-ex-wife

https://www.azcentral.com/story/news/local/tempe/2019/03/17/uber-crash-death-who-blame-tempe-arizona-rafaela-vasquez-elaine-herzberg/3157481002/


Wisner Baum

Appreciative of new technology advancements but keeping a vigilant eye on corporate shortcuts that put profits over consumer safety.