Coincidental Correlation

William L. Weaver
Published in TL;DR Innovation
5 min read · Feb 16, 2018

Upgrading the Human-Machine Interface

In late March 1979, during a routine maintenance and refueling cycle at the Three Mile Island (TMI) nuclear power plant’s Unit-1 reactor, plant engineers used a high-pressure mixture of air and water to break up resin clogging a transfer line that removed dissolved minerals from feedwater pumped from the Susquehanna River in south-central Pennsylvania. Unknown to the engineers, a faulty valve leaked water into the air-controlled valve-actuation system of TMI’s Unit-2 reactor, which at the time was operating at 97 percent capacity. At 36 seconds after 4:00 am on the morning of Wednesday, March 28, the Unit-2 feedwater pumps, affected by the water in the control lines, “tripped off,” shutting down the flow of water to the steam generators. Instantly, three emergency feedwater pumps (EFPs) energized, but the feedwater supply was not restored. Within two seconds, the automatic safety systems detected the loss of water and shut down the steam turbine and the electric generator it powered.

Photo by Mingo123 on Pixabay

With the sudden loss of water-to-steam conversion, the temperature of the reactor and its coolant increased, causing a spike in coolant pressure. As designed, a pilot-operated relief valve (PORV) opened to allow steam and water to flow out of the coolant system into an overflow containment tank. Eight seconds after the loss of feedwater, the reactor “scrammed”: its control rods automatically dropped into the reactor core and shut down nuclear fission. In less than a second, the heat generated by the reactor dropped to 6 percent of that released during fission, but an amount that still required active cooling. Fourteen seconds after the main feedwater pumps tripped, an operator in Unit-2’s control room noted that the EFPs were running, but he failed to notice two status lights indicating that a valve was closed in each of the two emergency feedwater lines, preventing any water from reaching the steam generators. Two days earlier, both valves had been closed during a routine test and never reopened. One of the status lights was hidden under a yellow preventive-maintenance tag.

Thirteen seconds into the accident, the cooling-system pressure returned to a safe level; power to the normally closed PORV was shut off, and the valve’s status light in the control room was extinguished. Likely due to the water in the control lines, however, the valve actually stuck open and continued to drain much-needed cooling water. Soon, a cascade of more than 100 alarms assaulted the two control-room operators as they scrambled to comprehend the situation and decide which manual control adjustments were dictated by the training they had received during simulated emergencies. They first turned on a pump to replace coolant lost during the normal PORV event. They monitored the pressure and level in a pressurizer tank used to control the amount of coolant in the reactor, and were satisfied their actions were having the desired effect as the pressurizer level and pressure increased.

Coincidentally, the rise in pressurizer level and pressure had a different cause: at that moment the remaining feedwater in the steam generators boiled dry, and the resulting rapid increase in coolant temperature expanded water back into the pressurizer. Responding to the actual low level of coolant, two high-pressure injection (HPI) pumps turned on automatically to supply 1,000 gallons per minute (gpm) to a coolant system that was losing water through the stuck PORV at a rate of over 225 gpm. Not wishing to completely fill the system with water, a state known as a “solid system,” the operators interpreted the rising pressure and level of the pressurizer to mean there was too much coolant and manually throttled the HPI pumps back to less than 100 gpm. Five and a half minutes into the accident, steam bubbles began forming in the reactor coolant system, displacing still more water into the pressurizer. Trained to avoid a solid system, the operators began actively draining cooling water through the letdown system.
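The fatal irony of the throttling decision is a matter of simple arithmetic: once the HPI pumps were cut back below the rate of loss through the stuck PORV, coolant inventory could only fall. A back-of-the-envelope sketch in Python, using only the flow rates quoted above (the variable and function names are mine, for illustration):

```python
# Back-of-the-envelope coolant inventory balance, using the flow
# rates from the narrative above. Illustrative only, not plant data.

PORV_LOSS_GPM = 225       # water escaping through the stuck-open PORV
HPI_AUTO_GPM = 1_000      # automatic high-pressure injection flow
HPI_THROTTLED_GPM = 100   # approximate flow after operators throttled back

def net_flow(inflow_gpm: float, outflow_gpm: float) -> float:
    """Net change in coolant inventory, in gallons per minute."""
    return inflow_gpm - outflow_gpm

# With HPI running as designed, inventory rises by 775 gpm:
assert net_flow(HPI_AUTO_GPM, PORV_LOSS_GPM) == 775

# After throttling, inventory falls by 125 gpm:
assert net_flow(HPI_THROTTLED_GPM, PORV_LOSS_GPM) == -125
```

The numbers make the operators’ mistake vivid: trusting the pressurizer reading over the mass balance turned a self-correcting system into a slowly draining one.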

At 4:15 am, a rupture disk on the overflow containment tank burst and began to spill radioactive water onto the floor and into the sump system. The sump-pump alarms reported six feet of water before they were manually shut down at 4:39. Not until 6:22 am did operators begin to suspect that the alarms and status lights indicated a loss-of-coolant accident (LOCA); they closed a backup valve to the PORV and increased the flow of the HPI pumps. By that time, two-thirds of the 12-foot-high core had been uncovered, and it took another four hours to resubmerge the heavily damaged core in coolant.

Researchers Denis Besnard, David Greathead, and Gordon Baxter of the University of Newcastle upon Tyne and the University of York have addressed the issue of operator misunderstanding in a recent article in the International Journal of Human-Computer Studies. They suggest that operator training be expanded to include an analysis of cognitive psychology, decision-making, and human error, so that operators can evaluate their own interpretation of critical events and recognize that mental models can be wrong even when the available information appears to support them. Second, the authors call for increased development of embedded smart software agents designed with an awareness of the human operators, so the agents can present appropriate context-sensitive alarms and information in support of critical decisions, and also act as barriers when operators attempt erroneous actions. This requires the current separate operating modes of “autopilot” and “manual control” to evolve into a hybrid, cooperative system in which human and machine are aware of and support each other’s decisions, with the goal of mutual longevity.

On the early afternoon of March 28, 1979, my 7th-grade class at Donegal Jr. High School, located 10 miles from TMI, was asked to move our desks away from the windows and pull down the blinds. I don’t know what mental model the Emergency Response Team was using, but I highly doubt it included a parameter for window-blind permeability to radioactive iodine.

This material originally appeared as a Contributed Editorial in Scientific Computing and Instrumentation 21:6 May 2004, pg. 16.

William L. Weaver is an Associate Professor in the Department of Integrated Science, Business, and Technology at La Salle University in Philadelphia, PA USA. He holds a B.S. Degree with Double Majors in Chemistry and Physics and earned his Ph.D. in Analytical Chemistry with expertise in Ultrafast LASER Spectroscopy. He teaches, writes, and speaks on the application of Systems Thinking to the development of New Products and Innovation.
