AI Safety, Leaking Abstractions and Boeing’s 737 Max 8
--
Present-day air travel is one of the safest modes of travel. Statistics from the US Department of Transportation show that between 2007 and 2016 there were 11 fatalities per trillion miles of commercial air travel. This is in stark contrast to the 7,864 fatalities per trillion miles of travel on the highway (you can check the statistics here: fatalities and miles of travel per mode of transport). The incremental improvement of air travel safety is a marvel of technical innovation. However, when an aircraft accident does occur, we are forced to take notice because of the magnitude of a single event.
Air travel today has reached a level of technical maturity such that when a plane crashes by accident (i.e., not due to man-made causes like terrorism or a misfired missile), it is surprisingly due not to pilot error or physical equipment failure but rather to a computer error. That is, the aircraft accident is caused by a software bug.
Everyone today is intimately familiar with software bugs. Microsoft’s blue screen of death and ctrl-alt-delete have been burned into our collective experience. Even with the better-designed operating systems we find in smartphones, it is not uncommon to have to force a reboot. It happens rarely enough that we often have to look up the procedure, but it does happen nevertheless.
Software is notoriously difficult to make bug free. It is the nature of the beast. This is because, to build bug-free software systems, we need to explicitly list all the scenarios that can go wrong and how, and then test our software against those conditions. Unfortunately, that list tends to be unbounded if our designs don’t restrict the scope of the software’s applicability. In practice, software developers manage this unbounded complexity by narrowing the scope of applicability. That is why even the most sophisticated “artificially intelligent” applications work well only in the narrowest of areas. It is very easy to get frustrated by the limitations of voice assistants like Alexa; that’s because AI technology has not reached the level of maturity required for open-ended general conversation. In short, bug-free software depends fundamentally on a narrow scope of application and extensive testing within that scope.
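To make this concrete, here is a minimal sketch of the idea in Python. The function, its name, and the failure cases are hypothetical illustrations chosen for this post, not drawn from any real avionics or AI codebase: a routine with a deliberately narrow scope, paired with tests that explicitly enumerate the known ways its inputs can go wrong.

```python
# A narrow-scope routine plus tests that enumerate its known failure scenarios.
# Everything here is an illustrative sketch, not production avionics code.
import math
import unittest


def descent_rate(alt_prev_ft: float, alt_now_ft: float, interval_s: float) -> float:
    """Descent rate in feet per second over one sampling interval.

    The scope is deliberately narrow: the interval must be positive and both
    altitudes must be finite numbers. Anything outside that scope is rejected
    outright rather than guessed at.
    """
    if interval_s <= 0:
        raise ValueError("sampling interval must be positive")
    if not (math.isfinite(alt_prev_ft) and math.isfinite(alt_now_ft)):
        raise ValueError("altitudes must be finite")
    return (alt_prev_ft - alt_now_ft) / interval_s


class DescentRateTests(unittest.TestCase):
    def test_rejects_known_bad_inputs(self):
        # Explicitly list the scenarios that can go wrong and check each one.
        bad_inputs = [
            (1000.0, 990.0, 0.0),         # zero sampling interval
            (1000.0, 990.0, -1.0),        # negative sampling interval
            (float("nan"), 990.0, 1.0),   # sensor dropout produces NaN
            (float("inf"), 990.0, 1.0),   # garbage reading produces infinity
        ]
        for prev, now, dt in bad_inputs:
            with self.assertRaises(ValueError):
                descent_rate(prev, now, dt)

    def test_nominal_case(self):
        # Within the narrow scope, behavior is simple and easy to verify.
        self.assertAlmostEqual(descent_rate(1000.0, 990.0, 1.0), 10.0)


if __name__ == "__main__":
    unittest.main()
```

The list of bad inputs is finite only because the function’s scope is narrow; widen the scope, and the list of scenarios to enumerate and test grows without bound.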