AI in the Sky
Augmenting Human Performance in the Age of Automation
Last year, more than eleven million flights pushed back from airports across Europe. With 1.9 billion travellers poised to take to European skies by 2037 (more than double today’s figures), both airlines and passengers are finding their patience with delays wearing thin.
Seeing the need back in 2004, European authorities launched a massive initiative to modernize their airspace. However, a recent study, spurred by swelling delay figures, has indicated that higher levels of automation may be needed to accommodate the rising complexity of air traffic.
The director of Eurocontrol’s Network Manager has called for a “game-changer.” And few technologies have changed the game the way that Artificial Intelligence has.
But while automation is not new to aviation, the application of AI to air traffic management (ATM) is generally overhyped and often misunderstood.
The Promise of AI
Alan Turing pioneered machine reasoning in the 1940s. By the ’50s, Arthur Samuel had created computer programs that could play checkers. A decade later, his creation could outmaneuver strong human players. The ’90s saw Deep Blue beat Garry Kasparov in chess. Then, AI crossed a chasm when a human master of the ancient board game, Go, was dethroned by a computer.
In this light, Ray Kurzweil’s prediction that the difference between an AI and an intelligent human being will soon be undetectable seems plausible.
Yet there have been some missteps.
Facebook’s head of AI, Jerome Pesenti, has admitted that deep learning and AI have some flaws. He says that AI operates more “on the level of pattern matching than robust semantic understanding.”
At NeurIPS 2019 in Vancouver, the most prominent voices in AI appeared to temper expectations. Yoshua Bengio, credited with helping to start the deep learning revolution, noted that machines still ‘learn’ “in a very narrow way.” They require much more data than humans to learn a task and are not immune to dumb mistakes.
Garbage In
An essential element of a dependable AI is the quality and quantity of the data used to train it. Thus, when AI comes up against an unfamiliar environment, it struggles to mimic human behaviour.
One reason appears to be AI’s inability to capture and interpret data the way a human brain does. Part of the issue is that our minds remain a mystery, even to neuroscientists. To clear this hurdle, programmers would traditionally write code that encompassed every logical possibility the machine could face, and how to react.
Deep Learning, which involves webs of data processing functions called ‘neural nets,’ has become the modern-day response. Rather than manually code rules to encompass a vast repository of data, machines figure out how to extract rules for themselves — connections form within the network, allowing the AI to interpret future data.
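As a minimal, self-contained sketch of that idea: the toy network below is never told the rule it must apply (here, the XOR function), only shown examples, and its connection weights adjust until the rule emerges. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not anything specific to ATM systems.

```python
# A toy 'neural net' that learns the XOR rule purely from examples.
# No rule is ever written into the code -- only examples and a learning
# procedure. Sizes and learning rate are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # the rule to be discovered: XOR

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)                 # two layers of adjustable 'connections'
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):                                       # repeatedly nudge the weights toward the examples
    h = sigmoid(X @ W1 + b1)                                  # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                                # the network's current guess
    d_out = (out - y) * out * (1 - out)                       # gradient of squared error at the output
    d_h = d_out @ W2.T * h * (1 - h)                          # gradient pushed back to the hidden layer
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # tends toward [0, 1, 1, 0]: the rule was extracted, never coded
```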
However, for AI to achieve its ultimate potential would require a breakthrough in Embodied AI — where an AI can interact with the physical world the way a human does.
The Human Factor
Malcolm Gladwell opens Blink with a story of an art dealer who sold a marble statue to the J. Paul Getty Museum in California in 1983. After a fourteen-month investigation, the Getty concluded that the figure was a kouros dating back to the sixth century BC.
Several experts, however, disagreed.
George Despinis, head of the Acropolis Museum in Athens, reported feeling an “intuitive repulsion” as he examined the statue, which is now widely regarded as a forgery.
On the other hand, researchers only needed to slightly tweak the image of a panda for a neural network to declare with 99% confidence that the picture was, in fact, that of a gibbon.
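The trick behind that result, a fast-gradient-sign perturbation, is easy to reproduce on a toy model. The sketch below applies it to a simple linear classifier rather than an image network (the panda demonstration used a large ImageNet model); every input feature is nudged by the same small amount against the gradient, and a confident decision flips.

```python
# A self-contained toy version of the adversarial-example effect:
# a tiny, uniform nudge to every feature flips a confident decision.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)                     # weights of a toy linear classifier
x = rng.normal(size=100)
x += (4.0 - w @ x) / (w @ w) * w             # shift x so the classifier scores it at +4 (a confident 'yes')

sigmoid = lambda z: 1 / (1 + np.exp(-z))
print(f"confidence before: {sigmoid(w @ x):.3f}")      # ~0.98

# Fast-gradient-sign-style attack: step each feature slightly against the
# gradient of the score, which for a linear model is simply w.
eps = 8.0 / np.abs(w).sum()                  # roughly 0.1 per feature, small next to features of magnitude ~1
x_adv = x - eps * np.sign(w)
print(f"confidence after:  {sigmoid(w @ x_adv):.3f}")   # ~0.02 -- the decision has flipped
```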
Gladwell argues that humans have thrived in part because of a mechanism within the brain, called the adaptive unconscious, that allows us to make quick judgements based on scarce information. What is more, humans can sense levels of detail which lie outside the definable and capturable data that machine learning needs to make decisions.
Obviously, AI should not replace art critics.
Putting on the Headset
The information processed by an air traffic controller is similarly nuanced. Managing an ATC sector involves assessing workload, pilot experience, weather, and anticipated traffic demand. A seasoned controller will subconsciously react to the slightest tinge of concern in a pilot’s voice. A new pilot will be issued more basic instructions than a wily veteran. A local pilot may be sent direct to an approach waypoint that a foreign-registered corporate jet would have to spend valuable time and attention looking up.
In a feature for Vanity Fair, William Langewiesche praises the technology on the Airbus A320, which allowed Chesley Sullenberger to land an aircraft with two inoperative engines on New York’s Hudson River.
Following a catastrophic bird strike, the air traffic controller suggested a return to LaGuardia Runway 13, as it was the closest available landing surface. Sullenberger answered, “We’re unable. We may end up in the Hudson.”
As Langewiesche notes, Sullenberger could have, in theory, glided to LGA, but the approach would have been too close to call.
There are myriad judgements and calculations required to assess and carry out a forced off-runway landing. New York’s geography offers very few choices. The inspired call to go for the Hudson was Sully’s alone.
That said, the engineers at Airbus dealt Sully a favourable hand. The modern autopilot flies with more precision than a human, especially on the A320.
“After Sullenberger took the sidestick… the flying he did was a joint venture with multiple on-board computers responding to him….”
As he banked left, the aircraft calculated the speed at which the plane would cover the maximum distance without engine power — and depicted a visual target on the primary flight display. The augmented control baked into the Airbus enables the aircraft to hold whatever pitch attitude the pilot desires — allowing Sully to focus on what he did best.
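The arithmetic behind that on-board target is, at its core, simple: at the speed for the best lift-to-drag ratio, glide range is roughly altitude multiplied by the lift-to-drag ratio, degraded by any headwind. The sketch below is a rough, hypothetical illustration; the lift-to-drag figure and the wind correction are assumptions for illustration, not Airbus data.

```python
# A rough, hypothetical glide-range estimate -- illustrative only.
def glide_range_nm(altitude_ft: float, lift_to_drag: float,
                   headwind_kt: float = 0.0, glide_speed_kt: float = 200.0) -> float:
    """Still-air glide range in nautical miles, crudely corrected for headwind."""
    still_air_nm = altitude_ft * lift_to_drag / 6076.0   # best glide covers L/D feet forward per foot of height
    wind_factor = 1.0 - headwind_kt / glide_speed_kt     # headwind eats range roughly in proportion
    return still_air_nm * wind_factor

# At roughly 3,000 ft with an assumed clean-configuration L/D of about 17,
# the still-air glide footprint is only around 8 nautical miles.
print(round(glide_range_nm(3000, 17.0), 1))
```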
Compare this with the tale of the Boeing 737 MAX.
In the process of adapting a classic airframe to next-generation technology, the size and placement of the MAX’s engines changed the aircraft’s aerodynamics, creating a tendency for the nose to pitch up at high angles of attack and increasing the risk of a stall.
In response, Boeing deployed a software solution it called the Maneuvering Characteristics Augmentation System (MCAS). MCAS would automatically trim the nose of the aircraft down, initiating a stall-recovery response whenever an angle-of-attack sensor indicated the pilot had pulled the nose up too far.
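To make the contrast with the Airbus concrete, here is a deliberately oversimplified sketch of the kind of logic described above. It is not Boeing’s implementation; the threshold, the trim increment, and the single angle-of-attack input are illustrative assumptions.

```python
# A deliberately oversimplified sketch of an MCAS-like trim rule.
# Not Boeing's implementation: values and inputs are illustrative only.
def nose_down_trim_command(angle_of_attack_deg: float,
                           flaps_retracted: bool,
                           autopilot_engaged: bool,
                           aoa_threshold_deg: float = 12.0) -> float:
    """Return a nose-down stabiliser trim increment in degrees, or 0.0."""
    if autopilot_engaged or not flaps_retracted:
        return 0.0                      # assumed active only in manual, flaps-up flight
    if angle_of_attack_deg > aoa_threshold_deg:
        return -0.6                     # command a slice of nose-down trim
    return 0.0
```

Notice where the brittleness lives: the sketch, like early versions of the real system, trusts a single angle-of-attack value with the authority to push the nose down.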
In two now-infamous accidents, MCAS unexpectedly intervened close to the ground, leaving the pilots to wrestle with an uncooperative aircraft that kept pitching its nose below the horizon.
If it ain’t broke…
One of the hallmarks of commercial aviation has been its resilience. The old guard of pilots and controllers was brought up with relatively little assistance from technology. That broad exposure let them build a wide range of skills through experience and practice.
David Woods, Professor of Cognitive Systems Engineering and Human Systems Integration at Ohio State University, has written that organizations tend to rest on the laurels of past low incident counts, squeezing more throughput from fewer resources. Chasing efficiency, engineers risk automating for automation’s sake, streamlining processes while undermining the robustness of the system.
According to Britain’s Royal Aeronautical Society, the push for automation is often “accompanied by a relatively shallow general understanding of what the risks associated with that adoption might be.”
However, automation will run until it encounters what Woods calls a boundary condition. Then, because of its brittleness, a cascade of difficulties can ensue, quickly overwhelming the system.
Humans are left to absorb the shocks of equipment malfunctions, diversions, inclement weather, and unexpected occurrences, which are commonplace.
Nevertheless, just as pilots have had to evolve to manage a digital flight deck, controllers will need to adapt to data-driven, augmented, and automated methods of air traffic management.
Automatic for the People
Global air traffic surveillance means it is now possible to see aircraft, anywhere on earth, from the top down. Modern transponders and surveillance technologies can directly stream on-board aircraft data into ATM platforms.
While this ever-richer data set is ripe for enabling automation, the International Federation of Air Traffic Controllers’ Associations (IFATCA) believes that air traffic management will always include the irreplaceable contribution of the human operator. It calls for a “Joint Human-Machine System” — a cooperative paradigm where technology and the human are interdependent.
What would be the equivalent of a fly-by-wire Airbus in the toolbelt of an air traffic controller?
AI could improve ATC performance by managing “shallow work.”
Cal Newport defines shallow work as “non-cognitively demanding, logistical-style tasks, often performed while distracted. These efforts tend not to create much new value in the world and are easy to replicate.” The human, therefore, would be more actively engaged in “deep work” or “professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit.”
Mihaly Csikszentmihalyi, the author of “Flow: The Psychology of Optimal Experience,” explores the “flow” state in which individuals are completely engaged with a task.
Once a controller executes a decision, they are invested in the outcome and have a mental model of what should happen next. It’s called having the picture.
Keeping the picture and staying in the flow state would be more seamless if AI handled the bulk of the shallow work.
The Augmented Controller
Professor Annette Kluge believes that future work will involve cyber-physical systems. The “augmented operator” becomes an evaluative, decisive employee, who receives support from technical assistance systems and can cooperate seamlessly with robots. According to Kluge, it is not a question of AI displaying all existing data but of establishing connections between these data.
UK NATS is exploring work with “Intelligent Assistants,” which support pilots and controllers by speeding up decision-making and improving the quality of their choices.
AIMEE, an AI-supported assistant developed by Ottawa’s Searidge Technologies, uses ultra-high definition cameras and digital processing software to identify when an aircraft has exited a runway, providing augmented and enhanced situational awareness to controllers in reduced-visibility scenarios.
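The geometric heart of such a check can be small. In the purely hypothetical sketch below, aircraft positions are assumed to arrive as bounding boxes from an upstream vision model, and the system only has to decide whether any footprint still overlaps the runway strip; the coordinates and library choice are illustrative, and Searidge’s actual pipeline is its own.

```python
# Hypothetical runway-vacated check on detections from a camera pipeline.
# Bounding boxes are assumed to come from an upstream vision model.
from shapely.geometry import Polygon, box

RUNWAY = Polygon([(0, 0), (3000, 0), (3000, 45), (0, 45)])    # runway strip outline, metres

def runway_vacated(aircraft_boxes: list[tuple[float, float, float, float]]) -> bool:
    """True when no detected aircraft footprint still overlaps the runway strip."""
    return all(not box(*b).intersects(RUNWAY) for b in aircraft_boxes)

print(runway_vacated([(3100, 60, 3140, 95)]))   # True: the aircraft has cleared the strip
print(runway_vacated([(2950, 30, 2990, 70)]))   # False: it is still crossing the runway edge
```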
If a machine can feed the human the essential decision-making data that intuition alone might miss, the augmented controller can become more efficient while still applying uniquely human abilities. Not only should AI automate redundant tasks, allowing controllers to focus on their highest-value contributions, but the human can also weigh more data and make intuitive decisions armed with refined information.
For example, AI augmentation could take the form of real-time wind and airspeed information, so a controller knows precisely what airspeed the pilot sees in the cockpit without having to infer it from the groundspeed. It could also give the controller a sense of whether turning into the wind will slow down or speed up an aircraft during vector sequencing.
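The arithmetic behind that kind of augmentation is vector subtraction: the aircraft’s velocity through the air is its velocity over the ground minus the wind. The sketch below illustrates the idea; turning the result into the indicated airspeed actually shown in the cockpit would also require altitude and temperature, which this illustration ignores.

```python
# Estimate the aircraft's speed through the air from its ground track and the wind.
# Indicated airspeed would additionally depend on altitude and temperature (ignored here).
import math

def airspeed_kt(groundspeed_kt: float, track_deg: float,
                wind_from_deg: float, wind_speed_kt: float) -> float:
    """Air vector = ground vector minus wind vector; return its magnitude in knots."""
    track = math.radians(track_deg)
    wind_to = math.radians((wind_from_deg + 180) % 360)        # direction the wind blows toward
    gx, gy = groundspeed_kt * math.sin(track), groundspeed_kt * math.cos(track)
    wx, wy = wind_speed_kt * math.sin(wind_to), wind_speed_kt * math.cos(wind_to)
    return math.hypot(gx - wx, gy - wy)

# Tracking 090 at 420 kt over the ground with a 60 kt wind from 270 (a direct tailwind):
# the air mass is carrying the aircraft, so its speed through the air is about 360 kt.
print(round(airspeed_kt(420, 90, 270, 60)))
```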
Automation has made aviation safer. Yet it has made unusual situations even more unpredictable. Catastrophic failures of automation can happen when an AI processes input outside its normal range. There is not yet a reliable way to sanitize data in real time that would prevent AI malfunction during abnormal events, and abnormal events are commonplace in ATC. At times like these, human intervention is critical.
The game is indeed changing. As we move forward, the most critical questions will not centre on how many aircraft an AI can control, but rather how humans best interact with technology in their specialized environments.