F-35C Crash into the South China Sea — A Case Study

Jamesmcclaranallen

49 min read · Mar 19, 2024

https://www.youtube.com/watch?v=DqSUvntRzkU

On January 24th, 2022, an F-35C crashed into the ramp of the USS CARL VINSON, then slid across the landing area and fell into the water. The crash occurred in daytime under benign conditions. The aircraft was attempting an "expedited recovery," in which it arrived overhead the ship at a faster than normal speed and broke into the pattern earlier than normal. The pilot attempted, though ultimately failed, to activate an assisted landing system that links power, pitch, and glide path control. The aircraft rolled out on final high and fast.

https://www.youtube.com/watch?v=bkXiHD9qKzk

Before looking at this crash further, I would like to give a quick review of how the military conducts accident investigations. From this we'll see that what is available to us cannot be taken as reflective of "the Navy line" regarding this event. It is, however, all that is available.

The military essentially conducts three separate investigations of a serious event. One is the Judge Advocate General's Manual (JAGMAN) review, a legal review for criminal culpability. The JAGMAN has access to all facts, though only to its own analysis; there is a one-way check valve, a diode, from the JAGMAN to the other investigations. This barrier is intended to create safe space to speak, so that survivors can talk freely with the Safety Investigation.

The Safety Investigation's goal is reducing or preventing future similar occurrences. It wants to speak freely with participants to learn whether they recognized any contributing errors on their own parts, so that through subsequent analysis the investigation can determine whether the crew made errors based on crew-provided input. It also wants to be able to follow leads beyond crew and witness input, since someone who made no errors may still be worried about speaking. Obviously the crew wouldn't be so free to talk were any of this fed back to the JAGMAN. Hence Safety Investigations and their results are privileged, protected information. Safety Investigations are also primary, with first right of access to any material once rescue and reclamation efforts have ceased.

The third review has different names depending on the service; for the maritime services (USN, USMC, USCG) it is the Field Naval Aviator Evaluation Board (FNAEB, pronounced 'fee-nab'). The FNAEB is "non-judicial" in the sense that it isn't looking at criminal liability, though it will feel like a trial. Its goal is to assess the fitness of crew to continue flying, and it seeks to mitigate the risk posed by crew deemed ill suited for the mission or found to be risky in flight.

Of these, the Safety Investigation Report (SIR) often gets classified Confidential, and at the very least is protected information not distributed to the public, while the FNAEB is personal privacy information. Hence all we can see is the JAGMAN, which is partially redacted.

https://s3.amazonaws.com/static.militarytimes.com/assets/pdfs/1677091907.pdf

To me, the recommendations do not match the conclusions, which do not match the findings of fact. While we cannot see the results of the other reports, we can see some effects. Per Ward Carroll(1) and confirmed in Navy Times(1), the pilot was removed from flight status, while the Fleet has adopted as procedure mandatory use of the assisted landing modes at all times, and no more expedited recoveries. Mandating the modes was recommended, while the JAGMAN went to extra effort to say the expedited recovery didn't significantly contribute. It did. The JAGMAN said that nothing should inhibit expedited recoveries, yet policy changed to bar them. In these, particularly clipping the aviator's wings and mandating modes, we can see a "knee-jerk" reaction that doesn't recognize what actually happened and therefore leaves the vulnerability alive to hit again. Mandating the modes is a problem.

We will get into more details of this crash, and into problems not adequately addressed, but mostly for the sake of thoroughness. Why are we looking at this crash at all? Because ultimately it was a failure of energy management resulting from two of the Landing Signals Officer (LSO) Rules to Live By(2) not being followed. An inadvertent, unintended, unrecognized, and inappropriate swap to front-side technique(2) away from power technique(2) amplified these rule violations. As we'll see, the pilot failed to follow the rules and inadvertently made inputs akin to attempting to "stretch the glide," and the LSOs also failed to recognize and address the rule violations. This was further compounded by the pilot's strict adherence to not waving off or going around unless so directed by the LSOs, and by the LSOs being slow to do so. This rule as to who owns wave-offs is unique to landing on ships and has a legitimate reason, though it covers a broader range than it needs to. We will also see, however, that these newer jets set you up to not follow the Rules to Live By, and in doing so create the expectation that the rules won't be followed. In other words, this was not pilot error nor LSO error, despite both the pilot and the LSOs making errors. This was design-induced failure(3).

LSO Rules to Live By:

  • Always Lead the High or Fast
  • Never Lead the Low or Slow
  • If High and Fast, Fix the Fast then the High
  • If Low and Slow, Fix the Low then the Slow
  • Never Re-Center a High Ball In Close But Stop the Rising Ball
  • Fly the Ball All the way to Touchdown

As many of us now work with technically advanced aircraft (TAA), we should note the contribution of mode confusion. The critical aspect here wasn't that he failed to use the mode; it is that he strove to be in the mode, presumed he was in the mode, and wasn't. Demanding mandatory use of the mode will not fix this. It only makes you more vulnerable when you're not in the mode. After all, he sought to be in the mode, which is exactly what you're now asking everyone else to do. Instead, you should probably review Children of the Magenta(4). Knowing when and how to reduce to simpler modes is important; recognizing being in a reduced mode and acting accordingly is too. Appreciating that advanced modes can do screwy things and being ready to cut them out is also important. Mandating modes flies in the face of these learned lessons.

These are my conclusions based on the report's facts; they are not the report's conclusions. The report never mentioned the LSO Rules to Live By nor the effect of not adhering to them. Its conclusions were rather weak, penalizing the pilot for not being in the 'correct' mode rather than emphasizing mode confusion. Yet it did recommend that more obvious feedback as to the current mode be programmed into the jet. Seems a bit of a mismatch to me too.

The MP [mishap pilot] stated that he did not select DFP because he was working hard to get the airplane slowed to optimum approach AOA, on glideslope, and on centerline.

The MP stated that he used speedbrakes to help get the airplane slowed to approach speed, observing at the 45 position that he was 3–4 balls high on IFLOLS. In an effort to correct for being high, MP made a nose-down correction.

That is not fixing the fast then the high, and it is incorrectly using pitch for the high instead of power.

Instead of arriving to the start of the landing attempt with the airplane at optimum approach AOA, the MP believes he was somewhere around 180 knots, with one ball high on the IFLOLS, and on centerline.

Also noting:

Optimum approach AOA of 12.3 degrees for an F-35C based on the MA's [mishap aircraft's] configuration and fuel onboard would yield an optimum approach speed of 140 knots.
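Those two speeds are worth quantifying. A quick back-of-the-envelope sketch (the 180 and 140 knot figures are from the report; the kinetic-energy arithmetic is my own):

```python
def knots_to_mps(kt: float) -> float:
    """Convert knots to meters per second."""
    return kt * 0.514444

# Speeds from the JAGMAN: ~180 knots at the start versus a computed
# optimum approach speed of 140 knots for the configuration.
actual = knots_to_mps(180)
optimum = knots_to_mps(140)

# Kinetic energy scales with velocity squared, so the excess is
# larger than the 40-knot difference alone suggests.
excess_ratio = (actual ** 2) / (optimum ** 2)
print(f"Kinetic energy: {excess_ratio:.2f}x optimum")  # ~1.65x
```

Forty knots fast sounds like a 29% speed excess; in energy terms it is roughly 65% more than optimum, all of which must be dissipated before touchdown.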

Now I want to pause here a moment to mention the value, or lack thereof, in personal statements. Memory is fallible. Even six seconds after doing something, should the result not be as expected, your recollection as to why you did it will have changed. Even should it have worked, you may still have a rationalized recall that differs from why or how you really did it. Memory is crap. Yet here we are using statements to get a sense of what happened? This is OK in part because, by taking multiple statements from multiple sources, we can work around the issues. More importantly, in this case we have the flight data recorder and ship-based videos from which to confirm what happened. We should only take the What from the statements, not the Why nor the How. For those, we'll have to do our own digging.

Here is the Commander Seventh Fleet summary included in the report,

This mishap was the result of pilot error. The mishap pilot (MP) attempted an expedited recovery breaking overhead the carrier, an approved and common maneuver, but the MP had never performed this maneuver before and it reduced the amount of time to configure the aircraft and conduct landing checks. As a result of the compressed timeline and the MP’s lack of familiarity with the maneuver, the MP lost situational awareness and failed to complete his landing checklist. Specifically, the MP remained in manual mode when he should have been (and thought he was) in an automated command mode designed to reduce pilot workload during landings.

The report summarizes the crash as essentially being a high fast start rolling out on final while inadvertently leaving the power back:

On 24 January 2022, at 1631L (GMT+08H), an F-35C Lightning II aircraft assigned to VFA-147 suffered a Class A Aviation Mishap as it attempted to land aboard the USS CARL VINSON (CVN 70). The aircraft impacted the CVN 70 ramp forward of the main landing gear, forcing the pilot to eject. The aircraft slid off the forward section of the CVN 70 Landing Area (LA) and into the sea. The aircraft was later recovered on 2 March 2022 from a depth of over 12,000 feet utilizing a remotely operated vehicle embarked on the Diving Support Construction Vessel (DSCV) PICASSO.

The investigation determined the cause of the mishap to be pilot error. The pilot entered the carrier break, bringing the throttle to Flight IDLE, allowing the aircraft to slow to approach speed. Once approach speed was achieved, the F-35C landing checklist was not fully completed by selecting Approach Power Compensation Mode (APC)/Delta Flight Path (DFP), leaving the aircraft operating in Manual Powered Approach (PA) Control Laws (CLAW).

Note that no comment is made about not pushing the power up; pushing the power up would have been easier in the moment and just as satisfactory as engaging APC. Not only would pushing the power up have worked without the modes engaged, it also would have worked with the modes engaged, as ten pounds of pressure on the throttle overrides the modes. Not getting the desired response from the stick should not have been followed by more attempts at the stick; it should have been followed by immediate power movement. The report not only never mentions this, it recommends making the modes mandatory, thus further reducing the likelihood of a future pilot correctly pushing the power up and increasing the likelihood of that pilot also incorrectly continuing to pull back on the stick.

During the start, middle, and in-close portions of the landing approach, the pilot applied corrections via stick inputs under the assumption that the aircraft was in either APC or DFP PA CLAW. These corrections did not engage the engine to provide additional thrust as the aircraft was still operating in Manual PA CLAW with the throttle still at Flight IDLE. The aircraft developed a rapid sink rate during the in-close portion of the landing approach and a manual engine power demand was not added until 2.6 seconds prior to impact. This late power addition was insufficient to prevent the aircraft from striking the ramp.

We're going to back up and follow this event from before the break into the pattern. Before we do, however, I'd like to take a quick look at risk mitigation categories. When we mitigate risk, we typically seek to eliminate the hazard, substitute the hazard, or engineer controls, and if unable to do any of these, to provide procedural controls and provide protection. In all of these we seek to reduce likelihood and/or severity of consequence. You'll notice I did not provide transference. Those who educate in risk will offer transference as a means to reduce risk. I think they are a bunch of idiots. Transferring risk is an accelerator, not an inhibitor. Who really minds playing "heads I win, tails you lose"? After throwing transference out, we typically consider providing protection the weakest mitigator, as all it does is marginally reduce severity of consequence. I tend to disagree, however, and see procedural control as the weakest mitigator. Assuming procedures will be followed is lunacy. They're going to be broken and you should expect it. If you don't account for that, you messed up, not the person failing to adhere to procedure. Procedural control often turns out to be just another form of transference: "heads I win, tails you lose."

First, we need to look at why one would fly an expedited recovery, also known as a "shit-hot break," "SH break," or "SHB." Yes, there is an element of max-performing an aircraft here when max performance is not needed for the task. Yet I don't fault folks for seeking the opportunity, as max-performing makes you better at flying and fighting a jet. One aviator I respect attributes to another aviator, now an admiral I also respect, the line "BFM and low levels make good pilots." BFM stands for Basic Fighter Maneuvers, aka dogfighting. I completely agree with the sentiment. As for the mishap pilot, he was actually following a build-up to his expedited recovery. He came to the break at four hundred knots, a mere fifty above normal, while true SHBs are in the five-fifty to six-fifty region. The report notes he was flying a credible build-up approach. Though at this point, why a SHB? Seems nothing more than a minor airshow. There actually is good reason to try to speed things along during recovery on an aircraft carrier.

The ship does not operate like an airfield. It is not always open, nor readily and regularly launching and landing aircraft. The ship is actually very limited in flight operations. Conducting flight operations limits the ship's maneuverability and its defensibility; flight operations make a ship vulnerable. Ships conducting flight operations need to steam into the wind, which makes them predictable while potentially driving them toward threats. Therefore you want to minimize the time needed to conduct flight operations. Ships typically work what we call "cyclic operations": they "steam" downwind or away while not conducting flight operations, then turn into the wind, launch a wave of aircraft, and finally recover the previous wave before turning downwind again. It is possible to conduct simultaneous launch and recovery operations, but to do so the ship cannot have many aircraft on its deck. Thus it is better to pulse power, gaining more total power, than to try to work continuous operations. Time between landings is one of the metrics used in certifying a ship and its air wing as ready to deploy; its sole purpose is to minimize cycle time. Expedited recoveries can help minimize cycle time, though personally I don't think they save much time while they add the risk of lengthening cycles, as mistakes typically lead to wave-offs and/or bolters. A bolter is when an aircraft fails to catch a wire and is forced to go around: an immediate rejected landing. Wave-offs and bolters mean you've gained an extra lap in your cycle. As aircraft typically stack above the carrier before recovery, the expedited recovery doesn't save much time; only the first aircraft can do it, while all the others slide up merely by that first aircraft's savings. Yet the maneuver adds the risk of the extra lap. Doesn't buy you much.
Instead, a much smaller set of situations benefits from such a recovery. Should a single aircraft be recovering with the deck already ready, time gets saved. Imagine someone late for recovery while the ship keeps the recovery window open. This person is not beholden to the stack, as the stack has already landed. This is the person who creates benefit with the maneuver, and the further away they are, the more the benefit. Imagine being a hundred miles out, late for recovery, yet able to push it up to six bills all the way while flying straight to the break. That saves time(5). That potentially keeps the boat out of a missile envelope. Then again, a smart boat is going to turn anyway, and either you wait for the next recovery or you punch out. SHBs from the marshal stack save nothing of significance.
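The hundred-miles-out arithmetic is easy to check. A rough sketch, where the 600-knot push is from the scenario above and the 350-knot normal transit speed is my assumption for comparison:

```python
distance_nm = 100.0   # late aircraft, a hundred nautical miles out
normal_kt = 350.0     # assumed ordinary transit/break speed
expedited_kt = 600.0  # "six bills" straight to the break

normal_min = distance_nm / normal_kt * 60.0       # ~17.1 minutes
expedited_min = distance_nm / expedited_kt * 60.0  # 10.0 minutes
print(f"Time saved: {normal_min - expedited_min:.1f} minutes")
```

Roughly seven minutes of recovery window for the one aircraft that can actually use it, which is meaningful when the ship is holding a window open, and negligible coming out of the marshal stack.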

While not necessary, the fact that the mishap pilot was flying a build-up should shield his decision to do a SHB from skepticism. His execution, however, was poor. Had he not done the maneuver, he would not have been in the situation. The maneuver deliberately starts with excessive energy, and the poor execution left him with excessive energy rolling out on final. It very much was significantly contributory. It is causal regardless of what the report says.

How did the execution go? He executed a seven-g break starting at the stern of the ship, entering at 400 KIAS. Normally one breaks no earlier than the bow; starting at the stern shortened the space available to dissipate energy. Normally one breaks at 350 KIAS; entering at 400 meant more energy to dissipate. Normally one pulls the power to idle, while he momentarily went to afterburner (AB) and held it for the first few seconds into the break, initially sustaining rather than bleeding energy. If doing a SHB, one normally eases the pull to displace laterally, then tightens again to continue bleeding, and possibly eases again to get a little deeper before tightening once more. The pilot did ease, though not sufficiently. Think of easing the angle of bank into the wind while doing a turn around a point. As he dissipated energy, his turn circle got smaller, hence he was tight, and being tight means you're going to be high, as you have less longitudinal displacement to slant out the high.

MA flight data shows a 400 KCAS, 7 G break while in maximum afterburner. MP will hold 7 G for five seconds, ease to a 2 G pull for five seconds, pull to 7 G again for four seconds, and then maintain a 2–3 G pull for the next 10 seconds.

But,

The MP explained that he tried avoiding a wide approach pattern by maintaining his pull (keeping G on the airplane) in an attempt to get below 300 KCAS (landing gear extension speed limit). He described pulling the ship to the nose at the 90 and then dropping the landing gear as he targeted an appropriate groove-distance.

You need some extra width to help dissipate energy. Being fast means you need a longer groove, as groove distance is really about time: you're looking for fifteen to eighteen seconds. Being high means you'd want that longer distance anyway, so as to be closer to glide path. Yet he sought the actual distance to which he was accustomed at normal speed.
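Groove time versus groove distance is easy to put numbers on. A quick sketch (the fifteen-to-eighteen-second window is the LSO practice described above; the unit conversion and the comparison speeds are mine):

```python
def groove_distance_ft(speed_kt: float, groove_time_s: float) -> float:
    """Distance covered in the groove at a given speed (ground speed assumed)."""
    ft_per_s = speed_kt * 6076.0 / 3600.0  # knots to feet per second
    return ft_per_s * groove_time_s

# The same 15-18 second groove needs more distance the faster you are:
# on-speed (~140 knots) versus the mishap's ~180 knots at the start.
for kt in (140, 180):
    lo = groove_distance_ft(kt, 15)
    hi = groove_distance_ft(kt, 18)
    print(f"{kt} kt: {lo:,.0f} to {hi:,.0f} ft")
```

At 180 knots the same groove time eats roughly a thousand feet more distance than at 140, so rolling out at the accustomed normal-speed distance guarantees a short groove.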

From here, we get the aforementioned high fast start, with the Rules to Live By violations: not leading the fast, not leading the high, not fixing the fast then the high. But there's more to say in these regards.

We should note that the pilot thought,

The MP explained that he thought the LSO were going to wave off his attempt to land because he was fast at the start to in the middle of the landing attempt.

Naval aviators do not own their own wave-offs. This is important: were one to wave off in close to at the ramp, the aircraft will still momentarily continue downward before flattening and climbing. While doing this, the hook dangles below and can snag a wire. We call this an inflight arrestment, and it is dangerous. It slaps jets down, and can do so in unexpected ways and unexpected directions. It jeopardizes those working on the flight deck and risks the aircraft and crew caught. Therefore there is an exclusion window in which an aircraft cannot be waved off. Yet it is nearly impossible for aircrew to judge when they are in this window, so the LSOs own the wave-off; they can judge the window.

But this rule is probably a bit excessive. Were one to wave off at the start through the middle, they won't have an inflight engagement. It is only in the nebulous region from inside the middle, through in close, to at the ramp where this window lies. The window is smaller than this region, though judging where you are as the pilot flying is difficult. You could probably change the rule to no self wave-offs inside the middle and not incur much more risk of inflight engagements, while alleviating some of the risk of not being able to take your own go-around. This pilot, thinking he was going to be waved off from the start to the middle, and being high and fast through those points, could have taken his own wave-off there while posing no threat of an inflight engagement. Was his inability to do so, for the sake of a rule, contributory? Yes it was.

So here we are, really fast and really high. The LSOs should be able to see both. Closure with an object coming at you is hard to see, yet you can still judge speed from aspect: the more nose-on, the faster the plane; seeing more belly means slower. Remember that speed terminology is used, but we're really concerned with AOA, and since we're now flying straight on final, AOA is proportional to pitch. Pitch can be seen from the ground or from the ship. Additionally, as the plane gets closer, there is an indexer light on the nose landing gear strut, color-coded to reveal AOA: red is fast, green is slow, yellow-orange is on-speed. Think of a traffic light's colors. Altitude is easier to see. LSOs observe hundreds of passes in getting qualified and will see thousands in their careers. They can see whether an aircraft is on the four-degree visually apparent glide path (though flying three degrees through the air due to headwind and/or the ship driving forward away from the plane). You can sort of see this yourself. Hold your arm straight out in front of you and bend your wrist so you're looking at your palm. Put your pinky on the horizon; the top of the fourth finger is four degrees. Since you're likely trying this on land, you'll only want to go three fingers up. Were you able to stand adjacent to the targeted landing point of a runway, most planes arriving should be on or about the top of that third finger.

The LSOs should know their own Rules to Live By and should readily see their violation. They can see someone fixing the high before the fast, as aspect doesn't change while height does. They can see someone not leading the high, as there's no pause in the transition. They can similarly see someone not leading the fast. And,

The 1630L recovery was the final recovery of the day. [name redacted] described seeing ‘lots of color on the platform’ throughout the day. By using that term, he explained that there were several wave-offs earlier in the day and that a previous F-35C landing had made the LSOs uncomfortable when it landed approximately 50-feet aft of the 1-wire. Earlier the same afternoon, the LSO team talked about ‘waving defensively’ and making sure that they were not complacent on the LSO platform. The idea of waving defensively was explained as communicating with pilots earlier in the landing pattern, not accepting significant deviations from pilots, and waving off an airplane if it looked unsafe.

Fixing the high then the fast allows the pilot to more easily see the total remaining energy difference, while it also lets the LSOs see the more easily seen high as an over-energy condition earlier than they might recognize the fast. Leading the fast and leading the high would have meant energizing the compressor section of the engine sooner, so as not to have it spun down with low inertia, and would have kept him from flying through the glide path on the way down. Because of momentum, failing to lead each means one will go slow and/or go low.

In this event, you might say both the pilot and the LSOs were complacent. They weren't really complacent, however; they were conditioned. Consider the prioritization of the flight modes and what is required to enter each.

The primary recovery mode for shipboard operations is DFP. The mode should be engaged once the ball is centered and stable approaching the wings level transition.

APC is considered a downgraded mode of DFP with comparable system performance in glideslope control, but requires more frequent pitch stick inputs due to the fact that it commands a flight path rate (VSI) vice a glideslope.

When DFP or APC modes are available, using Manual mode is considered a degraded mode. This is due to the increased workload associated with controlling approach airspeed, glideslope and lineup.

So the primary mode requires you to be on glideslope to engage. It was the mode desired at the time of the accident and is now mandated. This is a forcing function to incorrectly fix the high before the fast and to not lead the high. It both drives the pilots to do so and creates the expectation among the LSOs that they will.

Then add that the method of correcting in both assisted modes is completely opposite to what you would do in manual or in any other airplane. If you're losing energy, pulling back on the stick is generally a bad idea. Yet that is what these modes require. Having inputs 180 degrees out between modes is the definition of bad design. It may seem intuitive while engaged in the assisted modes, but it sets up exactly this accident.

the glideslope correction technique in DFP is to push/pull pitch stick until the desired apparent flight path rate is achieved. This will correct for glideslope deviations by increasing or decreasing VSI. The pilot must hold the stick correction until a centered meatball is observed, and then allow the stick to neutralize in order for DFP to recapture the LRP [landing reference point] glideslope setting.

As the jet approached the optimum approach AOA (and 140 knots), the MP attempted to add power by increasing aft stick input.

Manual mode requires the use of both the stick and throttle for coordinated flight by the pilot. The stick will control aircraft pitch and roll while the throttle will add and subtract thrust demand from the engine.
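To make the inversion concrete, here is a toy sketch of the divergence. This is not the actual F-35 control law, just an illustration of why the same aft-stick input produces opposite energy outcomes depending on mode:

```python
# A toy model of the mode-dependent stick response, purely illustrative.
def response(mode: str, aft_stick: bool, throttle: str) -> str:
    """What a sustained aft-stick input yields in each mode (simplified)."""
    if mode in ("DFP", "APC"):
        # Assisted modes: the stick commands flight path and the
        # autothrottle supplies the energy to achieve it.
        return "engine adds thrust, flight path shallows"
    # Manual mode: the stick is pure pitch; energy comes only from
    # wherever the throttle was last left.
    if aft_stick and throttle == "IDLE":
        return "nose rises, airspeed decays, sink rate develops"
    return "pitch change with whatever thrust is set"

# The mishap input: aft stick with the throttle still at Flight IDLE.
for mode in ("DFP", "Manual"):
    print(f"{mode}: {response(mode, aft_stick=True, throttle='IDLE')}")
```

The same hand motion that fixes a settle in DFP digs the hole deeper in Manual, which is the trap the mishap pilot fell into.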

Using the modes makes the overall body of passes cleaner with less variance, yet the outliers are significantly worse. Reducing variance within the set of normal, acceptable, tolerable passes does not positively correlate with reducing large-deviance events. Per Sidney Dekker(6), it actually does the opposite; the correlation is negative.

counting and trying to control failure only gets you so far… we celebrate the absence of negatives as if any of that has any predictive capacity for things going spectacularly wrong… this is a grand illusion… people, particularly in operations, love counting error-free days and it predicts nothing, and it only sponsors the hiding of evidence of things going wrong… it creates a culture of risk secrecy, and cultures of risk secrecy quickly become dumb cultures; they don't allow the boss to hear bad news, they don't allow themselves to learn… it's this misguided idea about the broken windows theory… when we obsess on reducing negatives we make ourselves deeply vulnerable to the big bang… the airline with the highest incident rate actually has the lowest passenger mortality rate

I'd like to pause a moment to ponder the ever-improving precision of advanced weaponry: CEP versus gross miss distance. Prior to the advent of guidance systems, weapons dispersed in normal distributions. Accuracy was assessed in terms of Circular Error Probable (CEP) and/or CE90. CEP equals CE50 and means half of all impacts fall within the defined range from the target point; CE90 means 90% will. Those outside these numbers would be dispersed about the target, but generally not excessively so. Gross misses were possible; perhaps a bent fin caused a bomb to tumble. With the addition of high-drag devices, meant to let the attacker get closer and be more precise, we saw gross misses from failure of the high-drag device to deploy, or from the device deploying inadvertently when the delivery was planned in low-drag mode. Computers in delivery aircraft came along to aid delivery, adding precision, though gross misses could occur from display limitations, mismatches in settings or altitude data fed to the computer, and other errors that, small as they might seem as inputs, created large gross misses. Guidance in the weapons themselves both vastly improved precision and created more, and worse, means of gross miss. The gross misses might be small in number, yet their severity grew significantly, with outsize collateral damage outside normal collateral estimates. Excessive errors fall on a long-tail distribution, with impacts and significance much more unpredictable.(7)

‘Gross Miss’ implies an impact that misses the DPI by an excessive distance, i.e. outside of what would be expected considering the characteristics of that weapon. It is the result of some guidance anomaly, hardware failure, or catastrophic malfunction during the weapon time-of-fall (TOF) in which the weapon does not guide correctly. The miss distance could be in the target vicinity (i.e. inside 1000 ft) or miles from the intended target, depending on how or when the event occurred. For operational delivery purposes, the likelihood of a ‘gross miss’ is more important than whatever uncontrollable factors caused it. Regardless of the cause, the impact is further than what would be expected considering a valid delivery and the normal characteristics of a correctly functioning weapon. This dispersion of ‘malfunctioning’ weapons should be treated as a separate probability distribution from the dispersion of correctly ‘functioning’ weapons. The gross miss probability along with associated miss distances provides information for weapon guidance reliability, and training range safety footprints.
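The two-distribution point can be illustrated with a small simulation. All parameters here are invented for illustration; the point is only the shape: a tight central dispersion plus a rare, heavy-tailed gross-miss population that barely moves the CEP yet dominates the worst case:

```python
import math
import random

random.seed(0)

def impact_miss_ft(gross_prob=0.02, sigma_ft=30.0, gross_scale_ft=2000.0):
    """One simulated radial miss: usually normal dispersion around the
    target, rarely a heavy-tailed gross miss (all numbers illustrative)."""
    if random.random() < gross_prob:
        # Gross misses: modeled here as exponential with a long tail.
        return random.expovariate(1.0 / gross_scale_ft)
    dx, dy = random.gauss(0.0, sigma_ft), random.gauss(0.0, sigma_ft)
    return math.hypot(dx, dy)

misses = sorted(impact_miss_ft() for _ in range(10_000))
cep = misses[len(misses) // 2]  # CEP = median radial miss
worst = misses[-1]
print(f"CEP ~{cep:.0f} ft; worst simulated miss ~{worst:.0f} ft")
```

Treating the two populations as one distribution and quoting only the CEP hides the tail, which is exactly the argument for tracking gross-miss probability separately.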

This mirrors how technically advanced aircraft (TAA) perform. Per the Federal Aviation Administration (FAA), we see fewer accidents in TAA yet a higher percentage of them being fatal. The technology handles variance better until it hits a limit. Past the limit, the technology is off to the races, while a human in manual control could better mitigate and dampen. We shouldn't expect any different from the F-35 assisted throttle modes.

Selecting afterburner (AB) may have traded immediate thrust for more total thrust a bit later, and thus produced less thrust in the moments when adding thrust was critical.

1631:27L: CAG Paddles gives a 'Powe.. Waveoff, Waveoff, Burner, Burner, Burner.. Burner' call. The word 'power' is not completed by CAG Paddles before starting the 'Waveoff…' sequence of calls.

1631:31.4L: MA impacts the ramp

The MP realized that the jet was extremely underpowered as the jet became slow and continued to descend (settle). At this moment, MP manually pushed the throttle to military power and then went to maximum afterburner once he realized that the airplane was in a perilous state, failing to climb.

MA flight data shows that maximum afterburner was selected approximately 2.6 seconds prior to impact. There was a 3–4 knot increase in KCAS before impacting the ramp.

You could also view this as inadvertently fixing the slow before the low. That would be another Rule to Live By violation, though in that moment, with AOA rapidly increasing, I'd forgive it: impending stall was quickly becoming the issue. The LSO rule does not account for a rapidly developing slow versus low; it assumes a generally steady slow and low.

When it comes to thrust, military services will refer to “military” power or “mil” as full throttle minus AB. “Maximum” or “max” power is full throttle full AB. There’s a range of AB after getting to full throttle till getting to max. Working idle to mil, the exhaust nozzle varies in most modern fighters. We refer to this as the variable exhaust nozzle (VEN) which closes as power is increased such as to act like a thumb over a water hose. AB works by injecting fuel into the exhaust. As a result, the VEN needs to be open so as not to get torched. So, selecting max means the turbine and compressor need to spool up and the VEN needs to open before you get the thrust. Selecting mil takes a bit of time but selecting max takes even longer. In the case of the Hornet at low altitudes, idle to max takes about three seconds to kick. I don’t know how long the F-35 takes, but likely idle to mil would have been more responsive than idle to max.
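As a rough illustration of why idle-to-mil can be more responsive than idle-to-max, here is a toy first-order-lag model: mil is pure spool-up, while max adds a dead time for the VEN to open before AB thrust arrives. The time constants, the VEN delay, and the 60/40 core/AB thrust split are invented for illustration, not F-35 or Hornet numbers.

```python
import math

def thrust_response(t, command="mil", tau_spool=1.2, ven_delay=0.8):
    """Fraction of the commanded thrust achieved t seconds after slamming
    the throttle from idle. Toy model with assumed constants: 'mil' is a
    pure first-order spool-up; 'max' adds a dead time while the variable
    exhaust nozzle (VEN) opens before afterburner thrust contributes.
    """
    spool = 1.0 - math.exp(-t / tau_spool)  # core engine spooling up
    if command == "mil":
        return spool
    # AB cannot light until the VEN opens; model the wait as a dead time,
    # then split max thrust into a core share and an AB augmentation share.
    ab = 0.0 if t < ven_delay else 1.0 - math.exp(-(t - ven_delay) / tau_spool)
    return 0.6 * spool + 0.4 * ab
```

With these assumed constants, a couple of seconds after the slam the mil command has reached a larger fraction of its target than the max command has of its own, echoing the point that idle to mil would likely have been more responsive than idle to max.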

The report considered targeting the two wire “not contributory,” yet a difference of two feet up, or sliding twenty feet forward, may well have been the difference between a ramp strike and a hook strike. It could potentially even have been a ‘taxi one,’ in which case we’d never be discussing this. This is what we would call a confluence of events, or bad luck. It was a reasonable decision for ship maintenance to spread wear and tear, though it cut into the margin for safe aircraft recovery. Given the calm conditions of the day, it was a good day to do so, but layering the expedited recovery training on top of it, without knowledge of the reduced margin, gives a double cut in margins. So, how can we say this is due to pilot error? Add to this the direction and immediate cost of going to afterburner instead of military power.

MA impacts the ramp of CVN 70 just forward of the MA main landing gear, shearing the main landing gear, bouncing the tail of the MA into the air with a left-wing-down, nose-down component.

Pause for a moment and consider The Atlantic with America Is Trapped in a Pandemic Spiral(8).

The spiral begins when people forget that controlling the pandemic means doing many things at once. The virus can spread before symptoms appear, and does so most easily through five P’s: people in prolonged, poorly ventilated, protection-free proximity. To stop that spread, this country could use measures that other nations did, to great effect: close nonessential businesses and spaces that allow crowds to congregate indoors; improve ventilation; encourage mask use; test widely to identify contagious people; trace their contacts; help them isolate themselves; and provide a social safety net so that people can protect others without sacrificing their livelihood. None of these other nations did everything, but all did enough things right — and did them simultaneously. By contrast, the U.S. engaged in A Serial Monogamy of Solutions.

“As often happens, people sought easy technological fixes for complex societal problems… Other strategies have merit, but are wrongly dismissed for being imperfect… This brief attention span is understandable. Adherents of the scientific method are trained to isolate and change one variable at a time. Academics are walled off into different disciplines that rarely connect. Journalists constantly look for new stories, shifting attention to the next great idea. These factors prime the public to view solutions in isolation, which means imperfections become conflated with uselessness… A world of black and white is easier to handle than one awash with grays. But false dichotomies are dangerous.”

Personal Blame Over Systemic Fixes… Moralistic thinking jeopardizes

The Normality Trap

Magical Thinking

Just as there is no singular solution, there is no singular cause. Many factors interacted to create this event.

It is the opinion of this board that pilot error was the cause of the mishap.

Rubbish! You could say pilot error was a cause, but you could not correctly call it the cause. You could say the pilot made many errors, yet as a set these were not the cause. You could say the pilot and LSOs made many errors; still not the cause. The cause, if you insist on a singular cause, would need to be the set of all the factors. This includes the jet driving pilots to errors while normalizing those errors as expectations with the LSOs. It includes modes requiring opposite inputs for similar actions. It includes things that seem innocuous, like targeting the two wire. It includes other factors within the jet, like time for spool up and for VEN opening. And others we’ll see in a moment.

First, let’s look at Sidney Dekker again this time with Todd Conklin (you could also look to Bob Edwards with Andrea Baker for similar approaches).

Per Sidney Dekker in The Field Guide to Understanding ‘Human Error’ third edition(9),

Notice that ‘human error’ is in quotation marks. The quotation marks should have been there all along. ‘Human error,’ after all, is no more than a label. It is a judgment. It is an attribution that we make, after the fact, about the behavior of other people, or about our own.

One of the first studies into ‘human error,’ in 1947, already put the label in quotation marks. Paul Fitts and Richard Jones, building on pioneering work by people like Alphonse Chapanis, wanted to get a better understanding of how the design and location of cockpit controls influenced the kinds of errors that pilots made. Using recorded interviews and written reports, they built up a corpus of accounts of errors in using aircraft controls. The question asked of pilots from the Air Material Command, the Air Training Command, and the Army Air Force Institute of Technology, as well as former pilots, was this: ‘Describe in detail an error in the operation of a cockpit control, flight control, engine control, toggle switch, selector switch, trim tab, etc. which was made by yourself or by another person whom you were watching at the time’. Practically all Army Air Force pilots, they found, regardless of experience and skill, reported that they sometimes made errors in using cockpit controls.

Fitts and Jones called their paper Analysis of factors contributing to 460 ‘pilot-error’ experiences in operating aircraft controls. ‘Pilot error’ was in quotation marks — denoting the researchers’ suspicion of the term. This insight has since been replicated many times. The attribution of ‘human error’ depends on the perspective you take. What is a ‘human error’ to some people, is normal behavior to others,

This is how their paper opened: ‘It should be possible to eliminate a large proportion of so-called “pilot-error” accidents by designing equipment in accordance with human requirements.’ ‘Pilot error’ was again put in quotation marks, and once it had been investigated properly, the solutions to it became rather obvious. The point was not the ‘pilot error.’ That was just the starting point. The remedy did not lie in telling pilots not to make errors. Rather, Fitts and Jones argued, we should change the tools, fix the environment in which we make people work, and by that we can eliminate the errors of people who deal with those tools. Skill and experience, after all, had little influence on ‘error’ rates: getting people trained better or disciplined better would not have much impact. Rather change the environment, and you change the behavior that goes on inside of it.

Several articles have been written regarding the B-17 switches. We’ll look at a few, starting with Wired’s How the Dumb Design of a WWII Plane Led to the Macintosh(3),

For all the triumph of America’s new planes and tanks during World War II, a silent reaper stalked the battlefield: accidental deaths and mysterious crashes that no amount of training ever seemed to fix.

If a plane crashed, the prevailing assumption was: That person should not have been flying the plane. Or perhaps they should have simply been better trained. It was their fault.

As Fitts pored over the Air Force’s crash data, he realized that if “accident prone” pilots really were the cause, there would be randomness in what went wrong in the cockpit. These kinds of people would get hung up on anything they operated. It was in their nature to take risks, to let their minds wander while landing a plane. But Fitts didn’t see noise; he saw a pattern.

the pilots of B-17s who came in for smooth landings and yet somehow never deployed their landing gear… the Air Force reported an astounding 457 crashes just like the one in which our imaginary pilot hit the runway thinking everything was fine.

The reason why all those pilots were crashing when their B-17s were easing into a landing was that the flaps and landing gear controls looked exactly the same. The pilots were simply reaching for the landing gear, thinking they were ready to land. And instead, they were pulling the wing flaps, slowing their descent, and driving their planes into the ground with the landing gear still tucked in. Chapanis came up with an ingenious solution: He created a system of distinctively shaped knobs and levers that made it easy to distinguish all the controls of the plane merely by feel, so that there’s no chance of confusion even if you’re flying in the dark.

Instead of ‘pilot error,’ he saw what he called, for the first time, ‘designer error.’

BRND Studio gives us a photo of the switches in question in the B-17 via Medium in The Flying Fortress’ Fatal Flaw(10),

And they have this to say about it,

Chapanis went on to pioneer ‘Shape Coding’, a system that ensured all knobs and levers were different shapes and sizes, redesigning the cockpit and ensuring there was little to no room for confusion for pilots reaching for their controls. No similar incidents took place after this adjustment.

Steven Shorrock writing in Humanistic Systems also looks at this B-17 case being illustrative of “design error” in Human Factors and Ergonomics: Looking Back to Look Forward(11),

Chapanis noticed that the flaps and landing gear had identical switches that were co-located and were operated in sequence. In the high-workload period of landing, pilots frequently retracted the gear instead of the flaps. This hardly ever occurred to pilots of other aircraft types. Chapanis fixed a small rubber wheel to the landing gear lever and a small wedge-shape to the flap lever. This kind of ‘pilot error’ almost completely disappeared.

Fitts and Jones took a different slant altogether. The basis for their study was the hypothesis that “a great many accidents result directly from the manner in which equipment is designed and where it is placed in the cockpit.” What had been called ‘pilot error’ was actually a mismatch between characteristics of the designed world and characteristics of human beings, and between work-as-imagined and work-as-done.

There’s a flip side to finding lots of errors that are ultimately systemically caused yet blamed upon those acting, such that the systemic issue never gets resolved. The flip side is that there may not be many errors apparent at all while the systemic issue lies unseen and dormant till it gets triggered by a confluence. Hidden vulnerabilities. Here’s Dekker, this time in Drift into Failure(12),

For example, isn’t there a relationship between the number of occupational accidents (people burning themselves, falling off stairs, not securing loads, and so on) and having an organizational accident? Isn’t it true that having a lot of occupational accidents points to something like a weak safety culture, which ultimately could help produce larger system accidents as well? Not necessarily, because it depends on how you describe the occupational accidents. If accidents are emergent properties, then the accident-proneness of the organization cannot be reduced to the accident-proneness of the people who make up the organization (again, if that is the model you want to use for explaining workplace accidents). In other words, you don’t need a large number of accident-prone people in order to suffer an organizational accident. The accident-proneness of individual employees fails to predict or explain system-level accidents. You can suffer an organizational accident in an organization where people themselves have no little accidents or incidents, in which everything looks normal, and everybody is abiding by their rules.

Consider the JAGMAN again,

The MP was a previous Top-5 Nugget and a Top-10 ball-flyer within CVW-2, indicating that his landing performance at the ship had been exceptional for a first-tour junior officer.

Back to Dekker in The Field Guide to Understanding ‘Human Error,’

I have even been on investigations where the causes that people picked were finely tuned to the people they knew were going to be on the board approving the report and its recommendations. Causes had more to do with political doability of what the investigators wanted changed in the organizations, than with the data in the sequence of events.

There is no ‘root’ cause

So what is the cause of the accident? This question is just as bizarre as asking what the cause is of not having an accident. There is no single cause — neither for failure, nor for success. In order to push a well-defended system over the edge (or make it work safely), a large number of contributory factors are necessary and only jointly sufficient.

What you call ‘root cause’ is simply the place where you stop looking any further.

Todd Conklin in Pre-Accident Investigations(13) notes that blaming pilots is really a Fundamental Attribution Error. He shows that more often than not, it is the system(s) setting up the failure. Imagine the commonly accepted Swiss cheese model for accident causality, risk, and mitigation. Now imagine funnels shooting you into the holes. Imagine fluid pushing against the various layers of cheese, flushing you through the funnels and through the holes. Blaming pilots resolves none of the holes; it only adds more pressure to the fluid, flushing others through subsequently.

Errors are how we are wired, how we are made, a natural part of being human. Human error is inevitable — all workers are error-making machines. What all this means is pretty simple: error is everywhere, and there is nothing you can do to avoid the errors. You can’t punish error away. You can’t reward error away. Error is an unintentional, unpredictable event.

Error is always attributed in retrospect to the worker by the organization after some type of consequence happens to the organization.

It is easy to find errors in retrospect. It is even easier to judge these errors as wrong in retrospect. This process is called a fundamental attribution error.

& Dekker in the Field Guide,

‘Loss of situation awareness’ is no more than the difference between what you know now, and what other people knew back then. And then you call it their loss.

You are not making an effort to go into the tunnel and understand why it made sense for them to look at what they were looking at. That you now know what was important means little and it explains nothing. In fact, it really interferes with your understanding of why it made sense for people to do what they did.

‘Loss of situation awareness’ is actually based on seventeenth-century ideas about the workings of consciousness. What philosophers and scientists at the time believed was that the mind was like a mirror: a mirror of the world outside. Knowledge of the world is based on correspondence. What is in the mind, or awareness, needs to correspond to the world. If it doesn’t correspond, then that knowledge is imperfect… you basically say to an operator after the fact that you now know the real state of the world, that you know the ground truth, but that she or he evidently did not. ‘Loss of situation awareness’ is a judgment you make about the correspondence to your ground truth and their understanding. And if that correspondence is not perfect, then you call it their ‘loss of situation awareness’ or their inaccurate situation awareness. The biggest problem in this is obviously that you get to decide what is the ground truth. Indeed, what is the ground truth? The data that you now know were important? But you know that with knowledge of outcome, in hindsight. If the operators had known the outcome, they too would have considered that data important. The point is, they did not know the outcome.

The same problems apply to complacency. Complacency is a huge term, often used to supposedly explain people’s lack of attention to something, their gradual sensitization to risk, their non-compliance, their lack of mistrust, their laziness, their ‘fat-dumb-and-happiness,’ their lack of chronic unease. Something that explains so much usually explains very little.

in a complex, dynamic world, people will miss things. This happens because there is always more to pay attention to, to look at, to consider, then [sic] there is time or are cognitive resources to do so. To point out, in hindsight, that people missed certain things, only shows how much you are in the position of retrospective outsider. They were looking at all kinds of things in ways that have developed to be optimal over time. That you don’t understand that shows that you have not done enough to put yourself in their shoes, to understand how the world was unfolding inside the tunnel, without knowledge of outcome.

Back to peculiarities of the F-35,

the F-35C lands at an optimum approach AOA of 12.3 degrees. This ‘on-speed AOA’ of 12.3 degrees keeps the arresting hook at the appropriate angle for an arrested landing. APC is available in PA mode and maintains optimum approach AOA when selected. When an airplane is heavier, 12.3 degrees AOA will correspond to a faster approach speed. As the weight of the airplane gets lighter, the actual approach speed of the aircraft is reduced. Although the weight of an aircraft is variable, 12.3 degrees will always put the aircraft at the same approach attitude.

Optimum approach AOA of 12.3 degrees for an F-35C based on the MA’s configuration and fuel on board would yield an optimum approach speed of 140 knots.
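The weight-to-speed relationship the report describes follows from the lift equation: at a constant 12.3 degrees AOA the lift coefficient is fixed, so the approach speed needed to make lift equal weight scales with the square root of weight. A minimal sketch; the 140 KCAS reference comes from the report, while the reference weight it is pinned to here is an assumed illustrative value.

```python
import math

def approach_speed_kcas(weight_lb, ref_weight_lb=40_000.0, ref_speed_kcas=140.0):
    """Approach speed at a constant 12.3-degree AOA. With AOA (hence lift
    coefficient) fixed, lift = weight = 0.5*rho*V^2*S*CL implies V scales
    with sqrt(weight). The 140 KCAS reference is the report's figure; the
    40,000 lb reference weight is an assumed illustrative value.
    """
    return ref_speed_kcas * math.sqrt(weight_lb / ref_weight_lb)
```

The same attitude, 12.3 degrees, therefore maps to a faster approach when heavy and a slower one when light, exactly as the report states.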

As the jet approached the optimum approach AOA (and 140 knots), the MP attempted to add power by increasing aft stick input.

During the MA’s landing attempt, the MA continued decelerating to a speed of approximately 120 KCAS and increased AOA to 16 degrees before striking the ramp at 123.5 KCAS and 21 degrees AOA.

Note the AOA increased despite the speed increasing; this could indicate the pilot wrongly pulling on the stick while adding power in the waveoff, as an increasing pitch rate would explain how AOA could go up while speed went up. Though such would be another error, it is perfectly reasonable to expect the pilot to wrongly pull up while fearing striking the ramp. An alternate cause could have been the Integrated Direct Lift Control (IDLC) programming of the jet. It could be that both contributed, though certainly IDLC did. The only way to know if there was another error here would be to see the stick force data at that time, and such is not mentioned in the report. Note this is a jet, therefore there is no prop wash and no trim stall effect.

Recall that typically naval aircraft use lots of drag so as to need lots of thrust in landing configurations. This keeps the turbines and compressors spooled up by forcing a neutral power point well above idle. The F-35, however, uses a different solution to the spool-up time and engine responsiveness problem. It programs IDLC such that control surfaces move with throttle movement. Advance the throttle and the flaps will adjust, as will fly-by-wire enabled symmetric ailerons, creating more instant lift and thus the climb response expected with a power addition. Then, as the engine gets up to speed, these extra control deflections are slowly blended out, returning to their neutral position. Similarly, if one needed to reduce power, the ailerons would symmetrically deflect up, reducing the wing lift. I’m not sure how the flaps handle this; retracting them would give a temporary drop in lift yet would also reduce drag, so the bleed rate would decrease too, which is not desired. But with the throttle being reduced, reduced fuel quickly generates an engine response even if the turbines and compressors have not yet slowed.

In PA mode, Integrated Direct Lift Control (IDLC) provides an immediate flight response to pilot commands by briefly commanding the flaps and ailerons up or down to generate a short term rise or sink. This additional deflection blends back to zero as thrust responds to the commanded engine thrust (ETR). IDLC is an intrinsic feature of the PA CLAW and cannot be deselected.
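A minimal sketch of that blending behavior: deflect surfaces in proportion to the gap between commanded engine thrust and actual thrust, so lift responds the instant the throttle moves and the extra deflection blends back to zero as the engine catches up. The gain, lag constant, and units are invented for illustration, not F-35 control-law values.

```python
def idlc_deflection(etr_cmd, thrust_actual, gain=1.0):
    """Toy IDLC: command extra flap/aileron deflection proportional to the
    gap between commanded engine thrust (ETR) and actual thrust. Positive
    means surfaces down (more lift). Gain and units are assumptions.
    """
    return gain * (etr_cmd - thrust_actual)

def simulate_slam(tau=1.2, dt=0.1, steps=50):
    """Slam the throttle from idle (0) to full (1) at t=0 and track how the
    IDLC deflection starts at its maximum and blends back toward zero as a
    first-order-lag engine catches up. The lag constant is an assumption.
    """
    thrust, history = 0.0, []
    for i in range(steps + 1):
        deflection = idlc_deflection(1.0, thrust)
        history.append((round(i * dt, 1), round(thrust, 3), round(deflection, 3)))
        thrust += (1.0 - thrust) * (dt / tau)  # simple Euler engine lag
    return history
```

In this sketch the surfaces carry the whole commanded change at first and contribute nothing once the engine has spooled, which is the report’s point that the additional deflection “blends back to zero as thrust responds.”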

Here’s the kicker with such a system. It needs to be optimized for select AOAs. As the airplane lands at 12.3 degrees, it is only logical that this would be the optimization point. You see, there is only so much deflection the controls can make in accommodation, while the engine spool-up time will generally remain the same given similar density altitudes and similar amounts of air ramming the intake. At high AOA, the controls are going to deflect to “help,” but they’ll blend away too quickly.

Fast and high conditions are most challenging in Manual mode in clean wing configuration due to the prolonged time with throttle at or near flight IDLE.

And these are the conditions in which our pilot started. It was a setup with insufficient IDLC, insufficient capacity to adequately reduce power, and insufficient drag which, when crossing from over energy to under energy, triggered a trap. IDLC then proved insufficient at excessively high AOA. IDLC is a clever solution but not a good solution to the engine spool-up and slow responsiveness problem. Slow bleed rates contribute to excessive bleed time, setting up the wound-down engine and the eventual fall off the energy cliff. Add to this the opening VEN due to AB selection…

The IDLC at higher than normal AOA makes me think of two stories to try to relate. One involves the T-45 flying a precautionary approach. Should one find oneself on final, from “in the middle” to “at the ramp,” and appear to be going low, perhaps aiming too close to the threshold or even at the wrong side of it, one can drop the flaps so as to “throw” oneself forward. It works pretty well. Now, I had a pilot ask me how that works considering the overall increased energy depletion and extra drag with flaps. He asked, why not just pitch up a bit? There’s a presumption here that, due to the increased kinetic energy of the precautionary, we actually have adequate energy. Chances are, if you pitched up, you’d over-control it and round out to flare too high with a hard drop at the end. Alternately, you could over-control it and zoom a bit, setting up an excessive AOA at a high position. Using the flaps to throw it enables keeping the nose down till you hit a normal round out and flare. This gets back to the energy question, to which at the time I didn’t have the proper answer coming to mind. The answer is that instead of trading kinetic energy for potential as you would via pitch, you trade kinetic for some potential while sacrificing some away to entropy. This does mean you could have stayed clean and rounded out and flared with a longer floating flare, almost like a T-38 landing, though that is quite uncomfortable in Meridian, MS, where trees reach up not that far from the runway. And, yes, due to the steep pitch in the glide, it really does throw you forward, not just up.
For an idea of the energy available to sacrifice while still having some to “balloon” in the throw: a T-45 flies a 17-unit AOAref for normal landings, which at 13k pounds correlates to 124 KIAS full flaps, 143 KIAS half flaps, and 164 KIAS no flaps. The precautionary approach is flown at 175 KIAS from low key abeam, slowing to 165 KIAS for base key at the ninety, then slowing to 160 and selecting full flaps when “runway made” is determined, somewhere from the forty-five to rolling out on final. Were one low, one might delay this while sticking with the 175 KIAS, which in turn gets one to the throwing position. Note the normal landings have no round out and flare, as these are navy jets, while the precautionary approaches do. Gear speed and flap speed are both 200 KIAS.
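The kinetic-for-potential-minus-entropy trade can be put in rough numbers with the energy-height relation h = (v1^2 - v2^2) / (2g). A minimal sketch; the fraction of released energy lost to the extra flap drag is an assumed illustrative value.

```python
def throw_altitude_gain_ft(v1_kts, v2_kts, drag_loss_frac=0.3):
    """Height gained trading airspeed v1 -> v2 in a flap 'throw'. The
    lossless trade is h = (v1^2 - v2^2) / (2g); some assumed fraction of
    the released kinetic energy is sacrificed to the extra flap drag
    (entropy) instead of becoming height.
    """
    G_FTPS2 = 32.174       # gravitational acceleration, ft/s^2
    KT_TO_FTPS = 1.68781   # knots to feet per second
    v1 = v1_kts * KT_TO_FTPS
    v2 = v2_kts * KT_TO_FTPS
    ideal_gain_ft = (v1 ** 2 - v2 ** 2) / (2.0 * G_FTPS2)
    return (1.0 - drag_loss_frac) * ideal_gain_ft
```

Taking the T-45 numbers as a rough input, bleeding 175 knots down to 124 knots is worth on the order of 675 feet of height in the lossless case; whatever goes to entropy in the dirty throw comes out of that, which is why there is plenty of margin to both sacrifice and balloon.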

Similar to the T-45, though with much less AOA margin: I was recently instructing in a Cirrus SR22T doing a power off 180 (PO180). I’ll note that with the Cirrus, I consider this a performance maneuver, not a practice emergency, as you have the Cirrus Airframe Parachute System (CAPS) and are under the Cirrus-stipulated CAPS immediate-action altitude should you actually find yourself in a PO180 situation. Anyway, I demonstrated one, then my transition client attempted one. He was doing well holding best glide speed, though relatively full of fuel, with two male adults in the plane, and at an AOB tightening to the runway as we had an overshooting wind, the AOA was at two o’clock working toward one-thirty while it didn’t look like we were going to make it to lineup. I directed a go around. Had he pulled more to make it, we would have stalled. A bit of a difference from the T-45 case. Yet as he was at “best glide,” we had a pretty good margin to Vref full flaps. For the Cirrus, 80 KIAS is full flaps Vref, 85 for half flaps, 90 for no flaps. You can subtract a knot for every hundred pounds below max gross, though that is a Mike Goulian rule of thumb, not a Cirrus adjustment. Best glide is 92 KIAS, which Cirrus says is for all weights; rubbish, you could apply the same 1 kt per 100 pounds below max gross, though there is no AOAbg published. Note we were in an AOB pulling, so Vref and Vbg create hazards. AOAref at 3 o’clock, however, is just fine. We saw 2 o’clock. Had we wanted to, as we knew we would otherwise make the runway, we could have dropped the flaps and “thrown” ourselves up and forward through more of the turn. You’re familiar with this as the balloon you experience lowering the flaps. It elevates you, essentially a zoom trading speed away for altitude, but a dirty zoom.
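Those Vref numbers and the weight adjustment fit in a couple of lines. A minimal sketch; the max gross figure is an assumed illustrative value, and the knot-per-hundred-pounds adjustment is the Goulian rule of thumb from the text, not a Cirrus number.

```python
def vref_kias(flap_setting, weight_lb, max_gross_lb=3600):
    """Cirrus Vref rule of thumb from the text: 80/85/90 KIAS for
    full/half/no flaps, minus 1 knot per 100 lb below max gross. The
    weight adjustment is a Goulian rule of thumb, not a Cirrus number,
    and max_gross_lb is an assumed illustrative value.
    """
    base = {"full": 80.0, "half": 85.0, "none": 90.0}[flap_setting]
    return base - (max_gross_lb - weight_lb) / 100.0
```

So two hundred pounds under max gross, the full-flap target drops from 80 to 78 knots; the point of the rule is the same constant-AOA logic as the F-35C discussion, approximated in speed because the Cirrus publishes no AOA reference.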

For the Cirrus client, I not only mentioned the option to use the flaps, since we needed pitch rate without pulling into the AOA margin and we knew we had more than enough potential energy to make the runway longitudinally (lateral was our problem), but also pointed out that, with no one on the parallel taxiway A, were he doing this for real, landing on taxiway A so as not to need to tighten the turn would be perfectly acceptable.

Why did I think about the T-45 precautionary approach and this Cirrus event? Because they seem like good parallels to F-35 IDLC to me. The only difference is that with IDLC you’ll have the thrust catching up, while the T-45 and Cirrus examples do not. But should the thrust be slower than expected, and/or you get less zoom in the dirty zoom… With IDLC, even should the blending be based upon engine rpm or thrust output, you still don’t get the same gains at high AOA as with IDLC at target AOA; less zoom. And you get less power responsiveness at high AOA if starting back at idle(14). Normally at high AOA, you’d be up on power to counter induced drag, but not in the over-energy-to-low-energy situation. You get less of a mini zoom, though you get all the dirt, plus some hurting you post mini zoom, as you’ll have the induced drag of the high AOA temporarily uncountered by thrust while you’ll also have the parasitic drag of surfaces dangling.

With IDLC, Lockheed really did have a clever solution to a problem. Unfortunately, this solution didn’t just transfer a risk, it created one and transferred it. Should problems arise due to IDLC, the pilot will be the one held to account despite it being the fault of the jet.

With the “speed brake,” we see a similar situation creating risk that is then transferred. Mitigation is then attempted via procedural control. Yet remember, procedure is weak as a control; you know it won’t be strictly adhered to. You know this the moment you write any procedure. Sometimes context will differ, sometimes errors will occur. Procedure highlights a system weakness while really transferring it to the user.

There are no speed brakes on the F-35. Yet there is a speed brake function with the F-35. How? Very similar to IDLC. The fly-by-wire aircraft can use existing surfaces with balanced deflections to create extra drag. No need for an additional surface or two with associated extra hydraulic lines. Imagine both rudders deflecting inward or both outward; each cancels the other’s yaw yet adds drag. Symmetric ailerons and flaps put more frontal area into the wind. There’s a problem, however, in the way the F-35 mechanized this function.

MA flight data analysis states that with the release of the speedbrake switch the trailing edge flaps will retract from the speedbrake position to their nominal PA symmetric position. This movement will result in a loss of lift.

This is a stupid design. The purpose of speed brakes is to lose energy quickly. The purpose of deselecting speed brakes is to stop energy loss. Losing lift with the deselection of speed brakes is the opposite of the desired stopping of energy loss. That’s ok, they solved the issue with procedure,

The use of speedbrake within 10 seconds of touchdown during a Carrier, Fixed Wing Aircraft (CV) recovery is a prohibited maneuver.

Heads I win, tails you lose.

Procedural controls have problems in that they will not always be effective. Sometimes they’re written too broadly or without context, such that adhering to them actually produces more risk. Even when they seem appropriate, however, they’re still not solid. Procedures won’t be adhered to. They’re going to be missed. If you count on procedure to save your bacon, well, you’re getting cooked. Anyone who designs needs to account for the fact that procedure is going to fail. They need overlap with other measures and/or margin to accommodate that the procedure will be violated. Not ensuring such is the fault of the designer, not the operator. Yet somehow we still wrongly hold the user responsible. With this, procedure becomes another name for transference, a “heads I win, tails you lose” risk accelerator.

MA flight data show that MP actuated speedbrakes at 1630:53L until final retraction at 1631:27L, approximately 4.1 seconds prior to impact.

Consider for a moment a small child eating a plate of spaghetti. See the red staining the tablecloth about the plate with noodles strewn around both plate and table. Now imagine a child eating the spaghetti but this time from a bowl. It may contain the spaghetti and prevent the mess. It may still allow some noodles to slop over, though less mess. Yet it could also cause the child to push more vigorously at the meatball, catapulting it across the table, falling to the floor (and possibly rolling out the door). The dog takes off to snatch the meatball and topples a chair, as the other kid sitting in it had grabbed a pitcher of juice, spilling it across table, floor, and dog(15). Complex systems are like this. Now, I would argue the jet itself is a Complicated system, but putting pilots in the loop in mission relevant environments, you have a Complex system. Complex systems are non-deterministic. You may think you’re building redundancy into your systems yet you may be introducing more pathways to fail or amplifiers of failure. Consider decades of forest management that sought to stop any fire the moment a fire was known. The result was a buildup of small materials that enabled the bigger trees to catch fire. The result is super-fires(16). Compare the robustness of sea walls and levees with the resilience of flood plains, marshes, and mangroves. The sea walls and levees typically withstand anything up to their design point, but should something stronger hit, you get catastrophic failure. The flood plains, marshes, and mangroves flood at all strengths, but absorb and drain, acting as a cushion. What lies beyond them won’t be slammed in the same manner. Cynefin: Weaving Sense-Making into the Fabric of Our World(17) gives,

There is a historical Chinese legend dating back to 2000 BC, depicting two lords who spent their lives attempting to tame the waters of the Yellow River… Gun spent a decade constructing an elaborate system of dams along the river. Gun’s approach was described as Confucian, emphasizing control, governance and order. However, all the barriers failed as the current of the Yellow River refused to be controlled. Gun’s son, Yu the Great, was assigned to take over the task when he became an adult and adopted a markedly different approach. Instead of dams which seek to control the flow of water by applying a rigid boundary, Yu decided to ‘follow the water as his master… and followed the way of the water.’ He worked with agriculture masters, slept, and ate with common folk and sought to understand the context and landscape. Yu worked with them to construct an intricate system of irrigation canals. This system consisted of many dikes that ran parallel to the Yellow River and relieved flood water into agricultural fields. The overflow fed rice paddies and aquaculture. Working with the natural energy of the river’s current, he solved a flooding problem and created a surplus! Yu’s approach echoes the naturalism of Daoism. A softer and more organic complex adaptive approach: meeting the system where it was, he followed the energy instead of seeking to control its flow.
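The seawall-versus-marsh contrast can be sketched as a toy model (the design limit and absorption fraction below are arbitrary illustrative numbers, chosen only to show the shape of the behavior):

```python
def seawall(surge, design_limit=5.0):
    # Robust barrier: blocks everything up to its design point,
    # then fails catastrophically and passes the full surge.
    return 0.0 if surge <= design_limit else surge

def marsh(surge, absorb_fraction=0.6):
    # Resilient buffer: always lets some water through,
    # but cushions any surge, however large.
    return surge * (1 - absorb_fraction)

for surge in [3.0, 5.0, 7.0, 12.0]:
    print(surge, seawall(surge), round(marsh(surge), 1))
```

Below its design point the wall is perfect and the marsh merely adequate; beyond it, the wall's failure is total while the marsh still cushions. Lower variance in the normal range, bought at the price of catastrophic failure at the edge, is exactly the plate-versus-bowl trade.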

Consider again The Atlantic, this time with The Deadly Myth That Human Error Causes Most Car Crashes(18),

the responsibility for road safety largely falls on the individual sitting behind the wheel, or riding a bike, or crossing the street. American transportation departments, law-enforcement agencies, and news outlets frequently maintain that most crashes — indeed, 94 percent of them, according to the most widely circulated statistic — are solely due to human error. Blaming the bad decisions of road users implies that nobody else could have prevented them. That enables car companies to deflect attention from their decisions to add heft and height to the SUVs and trucks that make up an ever-larger portion of vehicle sales, and it allows traffic engineers to escape scrutiny for dangerous street designs.

Knowing the fallibility of memory and its malleability in light of subsequently known events, consider the pilot,

The MP did not remember selecting or confirming that he had selected APC for landing.

I become a bit suspicious. I suspect he probably did attempt to engage APC. By this point in his flying, doing so is an entrained pattern. But I wonder if holding the throttle back while also trying to deploy the "speed brakes" may have exceeded the ten pounds of throttle movement force that cancels APC and may have prevented it from ever activating. Alternately, I'm curious whether a different button felt like the APC activation button, a mis-finger due to pulling the throttle to idle and/or working the "speed brakes." I am highly skeptical, however, that he did not try to activate it. His mind clearly presumed that he had activated the mode. Despite my suspicion, we know the mode was not activated. Given the recommendation to create a more pronounced indication of system activation, I find it rather disingenuous to blame the pilot for failing to confirm it. How can you blame the pilot while recommending a fix for what must be a system flaw? You wouldn't recommend a change were the system not deemed flawed. I did write at the beginning that the recommendations did not match the conclusions. Recommendations did not match conclusions, which did not match findings of fact. The report's conclusion is far too simplistic to believe.

“When DFP or APC modes are available, using Manual mode is considered a degraded mode. This is due to the increased workload associated with controlling approach airspeed, glideslope and lineup. Fast and high conditions are most challenging in Manual mode in clean wing configuration due to the prolonged time with throttle at or near flight IDLE.

Burble turbulence is disturbed air behind the ship due to its superstructure (tower). DFP mode performance through burble turbulence for light through medium headwind (16–40 knots) conditions is a predictable trend of going ½ to 1 ball low at the ramp, which may be easily corrected by a small ¼ aft stick input to re-center the ball. Burble response in APC mode results in a slight settle at the ramp, requiring small aft pitch stick correction followed by a single forward pitch stick correction to arrest the rising ball once it is centered.

Talk about bullshit mode justification. Physics is physics. While the computer may have faster reflexes than the pilot, the computer can’t do anything special to spool up the turbines and compressors any quicker. As for difficulty flying through the burble, humans have been flying jets manually through the burble for seventy years. To suggest humans can’t anticipate and correct for the burble is rubbish.

For those who have not yet picked up on a little theme here regarding accident prevention, and who somehow still believe removing a link in the chain, any link in the chain, prevents accidents, I offer you this chain:

The plane crashed due to a confluence of events including yet not limited to bad design requiring opposite actions between different modes and driving violations of the LSO Rules to Live By.

Even the respectable and proper build-up approach contributed to this confluence: on his first SHB attempt, he entered the break only fifty knots faster than normal, rather than the 150 to 200 knots faster of a typical SHB. Had the faster speed been used, the LSOs would have readily perceived the excessive over-energy condition, the plane would have been waved off, and no crash would have happened. Instead they saw something they were lulled into believing salvageable, thus enabling its continuance. This pilot would still be flying had he flown inappropriately more aggressively and thus gotten flushed from the pass. (We punish based on severity of consequence, not egregiousness of action. Don't believe me? Ask what happened to the previous pilot who "showed a little color.")

Mode ambiguity significantly contributed, as did the excessive energy rolling into the groove at the start. VEN opening and AB light-off time, the "speed brakes" means of functioning, the lack of turbine and compressor spool-up due to jet design, IDLC with higher-than-anticipated AOA for IDLC use, targeting the two, and the rule against taking your own wave-off all contributed as well. Yet, except for the over-energy start and mode ambiguity, the report dismissed all of these. It knew of all of them and rejected most. In the case of the rules, it didn't even mention them, neither the Rules to Live By nor the LSOs' ownership of wave-off implications.

John Lowery(19) wrote of the big three human factors: complacency, over-confidence, and compulsion. Here we had complacency resulting in over-confidence in the system, while having the modes labeled "degraded" drove compulsion to use the "higher" modes even as the simpler Manual would have served better in the given situation.

Children of the Magenta,(4) Commanders of the Magenta, Admirals of the Magenta.

(1) https://www.navytimes.com/news/your-navy/2023/02/22/pilot-error-caused-f-35-carrier-crash-and-plunge-into-south-china-sea/

(2) https://medium.com/@jamesmcclaranallen/improve-your-landings-with-aoa-power-techniques-04601584fb3a

(3) https://www.wired.com/story/how-dumb-design-wwii-plane-led-macintosh/

(4) https://www.youtube.com/watch?v=5ESJH1NLMLs

(5) I was standing "tower flower" duty one day when one of the Hornets called back saying he had "fodded" his cockpit by dropping a pen. He was part of the latest launch, for which the subsequent in-cycle recovery was just finishing; the pilot wasn't due back for another two recoveries. Yet, because he was no longer mission capable, the ship extended the current recovery to catch him. He overflew the ship, breaking just shy of the bow in the six-hundred-knot region, upon which the mini-Boss immediately began chastising me for the idiocy of a fast break with FOD in the cockpit. I was thinking the same thing, and I don't know why he was yelling at me; I didn't do it, and I fully expected a straight-in. But there I was being yelled at. Now, straight-ins are much slower than even normal break entries. Much slower. Thinking back on it, I don't think he fodded his cockpit. I think dinner disagreed with him and he wasn't going to be able to fly a triple cycle. He didn't drop his pen; he was worried about dropping something in his pants. But he didn't want to say so on the radio. Now I'm curious about the g's and g-suit. I think I would have pulled power a mile or two before the ship so as to be slower and not have to pull so much in the break were I in such a case. Just think, tube of toothpaste.

(6) https://www.youtube.com/watch?v=9fwJ9xgvu3A

(7) Though I have not seen the final paper, I did read a draft from Jeff Beekman for Characterizing Precision Guided Munitions Delivery Accuracy that gets at exactly this concept.

(8) https://www.theatlantic.com/health/archive/2020/09/pandemic-intuition-nightmare-spiral-winter/616204/

(9) https://www.amazon.com/Field-Guide-Understanding-Human-Error-ebook/dp/B0772TJ6V2/

(10) https://medium.com/swlh/the-flying-fortress-fatal-flaw-694523359eb

(11) https://humanisticsystems.com/2018/02/25/human-factors-and-ergonomics-looking-back-to-look-forward/

(12) https://www.amazon.com/Drift-into-Failure-Components-Understanding-ebook/dp/B01NCHX2DQ/

(13) https://www.amazon.com/Pre-Accident-Investigations-Todd-Conklin/dp/1409447820

(14) Chances are your full authority digital engine control (FADEC) is going to slowly roll up your power to avoid compressor stall if starting at high AOA and idle.

(15) This plate versus bowl concept showing increased robustness and decreased variance generally being better yet more susceptible to the larger failure resonates with Scott Page’s Complexity lectures in which he uses a ball rolling in a sink as analogy while later using “dancing landscapes.” https://www.youtube.com/watch?v=nO-GjcAS1ck&list=PLOxODW9vlVLS9rJzg-ai5eQQ_Ui2m50I9

(16) https://www.wired.com/story/west-coast-california-wildfire-infernos/

(17) https://www.amazon.com/Cynefin-Weaving-Sense-Making-Fabric-World-ebook/dp/B08LZKDCYM

(18) https://www.theatlantic.com/ideas/archive/2021/11/deadly-myth-human-error-causes-most-car-crashes/620808/

(19) https://www.amazon.com/Pilots-Accident-Review-depth-high-profile-ebook/dp/B07MM2FNLS
