An Important Lesson in AI-Driven Decision Making from Alan Turing
Discussions of artificial intelligence (“AI”) sometimes place so much emphasis on the data and algorithms that it’s easy to underestimate the work and complexity that remain to convert insights into value. On New Year’s Day I had the opportunity to watch The Imitation Game, a terrific movie about the famous British mathematician Alan Turing and how he and his colleagues created a digital machine capable of decoding the Nazis’ encrypted messages.
There is a scene, when Turing and his colleagues realize their machine has successfully cracked the Nazi code, that illustrates the importance of human reasoning and a balanced perspective in acting on machine-generated insights. The team’s initial reaction is euphoric: their discovery has a chance not only to shorten the war but also to prevent an imminent U-boat attack on a British supply convoy. The team then realizes that any hasty action might alert the Nazis that their encryption algorithm had been cracked. Ultimately, the team works with British intelligence services to carefully select when and how to act on deciphered messages. Although these tough decisions meant that, in some instances, Allied lives would be lost in the short term, in the long run Turing’s machine was credited with shortening the war in Europe by two to three years and saving 14 to 21 million lives.
AI is evolving faster than organizations and our legal structures have been able to adapt. As AI applications become more pervasive, interactions between humans and machines will range from humans using AI to inform decisions to AI initiating action with minimal human supervision. Collectively, these developments influence millions of decisions each day. At times there will be clear right and wrong answers. Other times AI may render erroneous results with dangerous implications if they go unchecked. More often there will be shades of gray, where the answers are neither completely right nor wrong and where conflicting values arise. Each case comes with its own unique mix of trade-offs, risks, hidden costs, and unforeseen consequences.
There are many examples where AI has yielded insights that can cause serious problems, such as perpetuating bias and discrimination, enabling unauthorized surveillance via smart devices, executing high-frequency algorithmic trades, and basing decisions on the likelihood of an individual’s death. Building support for innovative AI-enabled decisions can face significant political, technical, financial, and legal hurdles. Implementing such insights may also involve conflicting values (e.g., fairness, privacy, and security). Failure to properly consider these issues may result in brand damage, financial costs, security threats, employee turnover, and other harms. Visionary leaders and organizations will need to take a strong stand, with an open mind, to guide their organizations and society down the best path.
When the future hangs in the balance, it’s important to establish appropriate guiding principles, success measures, and decision-making protocols with proper boundaries. One example is the Asilomar AI Principles, which address research, ethics, values, and other long-term issues and have been endorsed by hundreds of researchers and leaders in academia and industry, including Elon Musk, Ray Kurzweil, and the late Stephen Hawking. Finally, to prepare for the increased use of AI, organizations need engagement from senior leaders and influential stakeholders, sufficient transparency, and a structured decision-making model that provides guidance on how to manage uncertainty and convert AI insights into optimal outcomes for the organization and society.