Cleared for Takeoff

Voices, stories and news from the Federal Aviation Administration


Just In Time Weather

Using Technology to Help Navigate Fickle Flight Weather

7 min read · May 2, 2025


By James Williams, FAA Safety Briefing Magazine

Imagine you are a VFR pilot on a cross-country flight, and you notice the weather gradually changing from VFR to IFR. You realize you need to make a decision: turn around and return to your departure airport, or divert to an alternate. In this situation you could really use a tool to aid in your decision-making, but that’s not in the cards. Or is it? Welcome to the Pilot Cognitive Assistance Tool (PCAT).

The FAA and the MITRE Corporation, a nonprofit that manages federally funded research and development centers, recently conducted a joint research study on technology that might provide that help in the future. The PCAT is designed to provide cognitive support to pilots, particularly in single-pilot operations. This is accomplished by accessing weather data that could be available during a flight, interpreting it for the pilot, and presenting it on an electronic flight bag (EFB) in a way that allows the pilot to make better-informed decisions. This would aid in reducing pilot cognitive workload.
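The study report does not spell out the PCAT’s internal logic, but the basic idea (interpret incoming weather data and surface a notification only when conditions meaningfully change) can be sketched in a few lines. The Python sketch below is purely illustrative: the names WeatherReport, flight_category, and notify_if_deteriorating are hypothetical, and the only domain facts it relies on are the standard ceiling and visibility breakpoints for VFR, MVFR, and IFR.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WeatherReport:
    station: str
    ceiling_ft: Optional[int]   # lowest broken/overcast layer; None if no ceiling
    visibility_sm: float        # visibility in statute miles

def flight_category(wx: WeatherReport) -> str:
    """Classify a report as VFR, MVFR, or IFR (LIFR folded into IFR)."""
    ceiling = wx.ceiling_ft if wx.ceiling_ft is not None else 99_999
    if ceiling < 1_000 or wx.visibility_sm < 3:
        return "IFR"
    if ceiling <= 3_000 or wx.visibility_sm <= 5:
        return "MVFR"
    return "VFR"

def notify_if_deteriorating(previous: WeatherReport, latest: WeatherReport) -> Optional[str]:
    """Return a pilot-facing message only when the flight category worsens."""
    order = {"VFR": 0, "MVFR": 1, "IFR": 2}
    before, now = flight_category(previous), flight_category(latest)
    if order[now] > order[before]:
        return (f"{latest.station}: conditions now {now} "
                f"(ceiling {latest.ceiling_ft} ft, visibility {latest.visibility_sm} SM). "
                "Consider a diversion or a return to your departure airport.")
    return None  # nothing worth interrupting the pilot for

# Example: a fictitious station drops from VFR to IFR between updates.
old = WeatherReport("KXYZ", ceiling_ft=4_500, visibility_sm=10)
new = WeatherReport("KXYZ", ceiling_ft=900, visibility_sm=2.5)
print(notify_if_deteriorating(old, new))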


Just in Time

The premise behind a recent research study conducted by the FAA and MITRE using the PCAT was to determine whether giving pilots advance notification of changing weather conditions would improve decision-making by allowing more time to consider the changes. For example, would the pilot divert to an airport with better weather, return to the departure airport, or change course if they had more time to consider the weather information? And would a tool that could handle cognitive tasks, such as comparing runways for crosswind components when the wind didn’t favor one runway or weighing alternate routes in the event of deteriorating weather, be beneficial?
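To make the crosswind example concrete, here is a minimal sketch of that kind of runway comparison. It is not PCAT code; the function names are invented, and the only aviation math it uses is the standard wind-component arithmetic (headwind is wind speed times the cosine of the wind angle off the runway heading, crosswind is wind speed times the sine).

import math

def wind_components(runway_heading_deg, wind_dir_deg, wind_speed_kt):
    """Return (headwind, crosswind) in knots for one runway heading.

    Positive headwind is on the nose; positive crosswind is from the right.
    """
    angle = math.radians(wind_dir_deg - runway_heading_deg)
    return wind_speed_kt * math.cos(angle), wind_speed_kt * math.sin(angle)

def best_runway(runway_headings_deg, wind_dir_deg, wind_speed_kt):
    """Pick the runway heading with the smallest absolute crosswind."""
    return min(runway_headings_deg,
               key=lambda hdg: abs(wind_components(hdg, wind_dir_deg, wind_speed_kt)[1]))

# Example: wind 240 at 15 knots, choosing between runway 18 and runway 27.
for hdg in (180, 270):
    hw, xw = wind_components(hdg, 240, 15)
    print(f"Runway {hdg // 10:02d}: headwind {hw:+.1f} kt, crosswind {xw:+.1f} kt")
print(f"Smaller crosswind: runway {best_runway([180, 270], 240, 15) // 10:02d}")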

A photo of the experimental setup.

The study was based on an experiment where pilots flew a series of five scenarios in a simulator configured as a Cessna 172 with a 180-degree visual display, a Garmin G1000-type display, and an EFB system displayed on a tablet. From there, the pilots were split into an experimental group and a control group. Both groups had access to the same information and technology, but the experimental group had the PCAT system providing notifications on the EFB. The PCAT provided visual and auditory notifications of important weather changes on the flight route during each scenario. The control group had the same information available but had to actively search it out. From there, the researchers collected objective and subjective data about the scenarios. The objective data were things like deviations from the intended flight path, response time to changes, and what kind of decisions the pilots made in response to changes. The subjective data included post-scenario and post-experiment surveys to measure the pilot’s perceived mental workload and general information about the pilot’s experience and currency.

An image of the EFB screen without the PCAT notification (left) and with the notification (right).

And the Results Are …

Well, interesting and complicated. Getting subjects for any experiment can be challenging, and it’s worse when your subjects need specific qualifications, like a pilot certificate, rather than being anyone off the street. I only dipped my toe into this world, but I ran into that issue with a far simpler and shorter experiment in an environment with a high concentration of pilots to draw from. This study drew on a sample representing the GA pilot population across a variety of ages and experience levels.

Examples of the PCAT notifications.

The study found that both groups made decisions at similar times in each scenario, with similar choices (e.g., deviate, land at a new airport, continue the flight, request a pop-up IFR clearance). The control group took longer to view important information or updates and had a higher number of touchscreen interactions. This makes sense, as the experimental group received push notifications that prompted them to view the information while increasing situational awareness. Both groups felt they had a high level of situational awareness and adequate weather information. The average mental workload ratings from the post-scenario questionnaire did not show a clear difference between the groups. Mental workload ratings increased with age and with a greater number of years as a pilot, but decreased for pilots with more hours flown in the last 12 months (pilot currency).

Training devices are excellent tools for testing new technologies without risk to the participants.

So, what about the study’s other parameters, like altitude, pitch, and roll? They didn’t really show a meaningful difference between the groups. It came close at a few points, but close doesn’t count. This is where the challenges I mentioned earlier come in. The sample size for the study was small, with 12 in each group for a total of 24 participants. With an even slightly larger sample, I would bet that you would get more significant results. Also, the scenarios were somewhat simple. If you used more complex scenarios and ramped up the workload, you would likely see an increased value for a system like the PCAT that offloads processing work from the pilot.


So why didn’t researchers just do a bigger, more complex experiment? There’s always a tension between perfect and good enough. More complex scenarios require more time to design and execute, and they introduce more opportunities for design or interpretation errors. Adding more participants extends the time spent collecting and analyzing data, which in turn increases costs because you’re probably paying someone to run the experiment and crunch the numbers afterward. In an ideal world, you’d use a sample just big enough to prove or disprove your hypothesis, but you can only estimate that number while designing your experiment. Even with unconstrained resources, you wouldn’t automatically want a huge sample, because with enough participants you can make trivial differences appear statistically significant. So it’s always a balancing act. Still, the comparison of PCAT vs. non-PCAT conditions produced promising results, and the study accomplished an important goal: showing enough benefit to indicate that future development should go ahead.
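A toy simulation makes the sample-size trade-off easy to see. The numbers below are invented, not data from the study: two simulated groups are given workload ratings that differ by a small, fixed amount, and the only thing that changes is how many participants are in each group. With 12 per group the difference rarely tests as significant; with hundreds it almost always does, even though the practical difference never changes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def chance_of_significance(n_per_group, effect=0.3, trials=1000, alpha=0.05):
    """Fraction of simulated experiments where a small, fixed group
    difference passes a Welch's t-test at the given alpha level."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(5.0, 1.0, n_per_group)           # mean 5, SD 1
        experimental = rng.normal(5.0 - effect, 1.0, n_per_group)
        if stats.ttest_ind(control, experimental, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / trials

for n in (12, 24, 100, 1000):
    print(f"n = {n:4d} per group -> 'significant' in about {chance_of_significance(n):.0%} of runs")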

The researchers recommended larger and longer experiments to explore the value a tool like the PCAT could provide. Reading the study report, I agreed with that recommendation. This experiment offered more than enough to show the PCAT’s promise. Think back to the hypothetical laid out in the opening of this article. Would a PCAT-style tool be beneficial in those circumstances? I think so. It could help address plan continuation bias: “Well, I’ll continue on and see if I can make it.” If something like the PCAT popped up a notification saying your destination or en route weather is worse than forecast, it would give you a chance to quickly evaluate whether or not to continue, avoiding additional risk exposure.


The PCAT could function as a cognitive assistant, giving you updates that let you keep greater control over your flight while increasing your situational awareness and safety. The intelligent nature of the system means you get advance notice of changes as they happen, along with suggestions for modifying your flight in response. But you’re still in charge. You’re actually more situationally aware while spending less time heads-down on secondary screens and systems. I can’t wait to see what a more complex and larger experiment will reveal. While anything is possible, I suspect higher-workload situations are where tools like this will shine.

Learn More

James Williams is FAA Safety Briefing’s associate editor and photo editor. He is also a pilot and ground instructor.

This article was originally published in the May/June 2025 issue of FAA Safety Briefing magazine. https://www.faa.gov/safety_briefing
