How HBO’s Chernobyl reminds me of working in tech

Preeya Phadnis
Characteristic Impedance
14 min read · Aug 20, 2019

“Belief shifts. People start out believing in the god and end up believing in the structure.” — Terry Pratchett, Small Gods

When I first started watching HBO’s Chernobyl, it was, ironically, for emotional escapism. As a depressed and disillusioned American tech worker, I wanted to watch other people ruining the world for once. It didn’t hurt that the miniseries has drawn high praise for every aspect of its production, from the acting to the writing to the nearly perfect historical accuracy of the costume and set design. Even Nobel laureate Svetlana Alexievich, whose book Voices of Chernobyl was the source of many harrowing personal stories in the series, has endorsed it.

But Chernobyl’s ambitions take it far beyond a shallow indictment of Soviet-style communism. It’s fundamentally about how human systems hold on to certain ideas long past their usefulness, by accruing perverse incentives that benefit a few at the expense of the many. This is exactly the same dynamic playing out in the tech industry today.

The parallels between tech, the USSR, and other human institutions are many and deep, and stem from a single source: the belief that there is such a thing as an infallible system that always leads to good outcomes. Throughout this post, I’ll use clips from Chernobyl, The Inventor (HBO’s documentary about Theranos), and Fyre to illustrate this point. But the problem is not limited to the USSR, Theranos, or Billy McFarland. After explaining the parallels between these specific examples, I’ll dive into how this is one of the fundamental dilemmas of human nature, especially as we move to solve ever more complicated problems.

“Around the Godde there forms a Shelle of prayers and Ceremonies and Buildings and Priestes and Authority, until at Last the Godde Dies. And this may notte be noticed.” — Terry Pratchett, Small Gods

Parallel #1: Outlier templates

Every dysfunctional human system starts with a story of extraordinary achievement. In the USSR, it was Lenin and the Soviet revolution. In tech, it’s Steve Jobs. Because Jobs knew he had excellent product sense and a flair for marketing, he famously walked the line between inflexibility and abuse, which has resulted in tech reinterpreting those qualities as evidence of genius (in white men). Of course, this gets the causation exactly the wrong way around. No one would ever have put up with Jobs without constant proof that his business acumen was worth it. But people have forgotten that key context with the passage of time.

The Inventor: Theranos chief creative officer Patrick O’Neill explaining why he trusted Elizabeth Holmes.

The rush to generalize behaviors that worked in a specific context, at a specific time, with specific people, has caused untold systemic problems within tech. For example, I’ve watched many tech managers research their positions by reading blog posts, books, advice columns, and so on — instead of simply listening to their team and putting effort into dealing with their problems. In fact, in my experience, they mostly use the advice to explain to their reports why they won’t take feedback seriously.

On some level, everyone in tech knows that the system and its founding stories are inadequate. They see the toll this philosophy takes on those without organizational power, and they do genuinely feel bad about it. But they also believe that there are similarly managed tech startups succeeding all around them, and so they refuse to conclude that the problem is with the overall system. (It’s worth noting that the belief that other companies are similarly managed or successful is almost always based on buzzword-filled conversations or generic news, not on actual knowledge.) In this way, tech leaders box themselves into the only remaining option, one that saves their feelings without changing anything:

Parallel #2: Lies

I won’t go into too much detail about the public lies that everyone already knows about, like the extent and usage of collected data, wild exaggerations of the utility of new products, and so on. The bottom line is that tech leaders feel comfortable with these falsehoods because tech’s internal culture is saturated with lies. In The Inventor, behavioral economist Dan Ariely mentions lying to investors, and in fact it’s common for tech startups to keep three sets of goal numbers (referred to as OKRs): one for employees, one for the board, and one for potential investors. Everyone involved knows this, and the dysfunction it engenders is not dissimilar to the USSR’s split between internal numbers and propaganda numbers.

In tech we would say, “They gave the board the investor OKRs. Layoffs were never not going to happen.” Speaking: Stellan Skarsgard as Boris Shcherbina in Chernobyl.

Throughout The Inventor, people refer to Elizabeth Holmes, the founder of Theranos, as believing that she was doing the same thing as the startups around her, with the implication that she wasn’t. But she was. The inflated expectations, the hacked-together or faked demos, the pathological optimism: that’s tech in a nutshell. I don’t think there’s a single company in the Valley that hasn’t faked a demo at some point, or wildly exaggerated its internal capabilities to the point of outright lies. (Most “machine-learning-based” startups, for instance, aren’t actually based on machine learning.) What people don’t realize is that you can only get away with this to the extent that your business relies on code, which allows you to literally rewrite the reality of your company. The Theranos/Fyre/Chernobyl mistake was applying this philosophy to situations where reality wasn’t so easily manipulated.

Chernobyl: the reactor operators lied about having run adequate safety tests.

Everyone in tech pays lip service to the idea that failure is possible. But too many offset this by, on some level, buying into the idea that their particular system is infallible. They won’t phrase it that way, instead saying things like “Well, it works at other companies,” without knowledge of those internal contexts, or that the founder/CEO has led previous companies to success, leaving implicit the assumption that that success is repeatable. But good outcomes only heighten the delusion of infallibility.

The best employees and leaders are self-aware enough to understand the lie and take it into account when making decisions. Their mental labor makes the system powerful, and over time, it becomes easier and easier to believe the fantasy, especially since those who believe it argue less and are therefore more likely to be promoted. But eventually, even necessary lies irritate the mind. They itch at our souls. The life they bring us begins to feel worthless, and the systems they prop up degenerate.

What does that look like?

Parallel #3: Toxic hierarchy

Anxious people want control, and in a large, degenerate system, no one is more anxious than the people at the top. If leaders also believe that their system is infallible, the only way to deal with their doubts is to dig in their heels and categorically refuse the mental effort of evaluating whether they’re using their power well. They know there’s more they should be doing, and they spend every minute of every day choosing not to do it.

Chernobyl: Jared Harris as Valery Legasov, trying to manage up after the reactor explosion.

Interestingly, tech used to pride itself on not having this problem. At the beginning of my career, the trendy startups of the day had done away with the whole idea of management. Of course, this resulted in the tyranny of structurelessness. Pretending that humans don’t form hierarchies is another form of self-delusion, and even worse, the implicit hierarchies that everyone busily denied were tied to outcomes like compensation. The upshot is that managers are now considered necessary, but most people in those positions have never actually experienced good management. So the old problems of hierarchy are reasserting themselves.

The Inventor: Erika Cheung, a lab associate, on her Theranos experience.

Whether the people in them realize it or not, toxic hierarchies often trace back to military-style management. The context that doesn’t make it across is that the military is also the only human institution where awareness of toxic hierarchy is (in the best case) culturally ingrained.

Chernobyl: managerial responsibility in measuring post-explosion radiation levels.

Although I’ve never served in the military, I suspect that this culture comes partially from the kind of work soldiers do: discrete, high-stakes missions with well-defined risks and rewards. Civilian leaders need to direct ongoing and often ambiguously low-stakes processes, which make the problems with the system feel simultaneously larger and less worth fixing.

Chernobyl: Con O’Neill as Viktor Bryukhanov, the power plant manager, explaining why the infamous safety test couldn’t be run as planned during the day.

Another form of toxic hierarchy is discrimination against socially powerless groups, which both tech and the USSR are famous for. At its root, this discrimination isn’t about the specifics of religion, gender, race, sexual orientation, or what have you. It is, instead, about access to power. In a toxic hierarchy, might makes right, so someone without any kind of social backing by definition must not be worth listening to. This is amplified by cultural differences between majority and “minority” groups, which require mental effort to understand and act on. An anxious, mentally inactive leader will interpret this as a sign that the issue is insignificant, and reassert their power.

Chernobyl: Paul Ritter as Anatoly Dyatlov, the deputy chief engineer whose orders caused the explosion.
The Inventor: Ryan Wistort, a technical designer, on his experience at Theranos.

The original scenes of most of these gifs depict harsh verbal abuse, including insults, name-calling, yelling, and so forth. But the core of abusive management is the absolute refusal to even consider putting forth mental effort. A manager doesn’t need insults to reject everything except total agreement, preferably agreement phrased so it can be passed straight up the chain. In my experience, the idea that loud verbal assaults are the only mark of toxicity worsens management overall. A manager who refuses to experience emotions is often a manager who uses their power to calmly impose perfectionism on their reports.

Chernobyl: Adrian Rawlins as chief engineer Nikolai Fomin, who will only accept that the reactor exploded if someone below him can explain it in just the right way.

Abusive management starts at the top, with CEOs who spend more time fundraising than making critical business decisions. (Or, sometimes, who just refuse to make any decisions at all.) The next level of leaders must then make those decisions as a committee, and defend them by managing up. This leaves them no time for their jobs, which imposes the same dynamic on their reports, and so on down the org hierarchy. Soon everyone in the company is consumed with making up for the deficiencies of the people above them. This leads to:

Parallel #4: Impatience and corner-cutting

Chernobyl: Sam Troughton as Akimov, the unit shift chief at the reactor, trying to avert disaster.

To be sure, some amount of corner-cutting can be necessary to move forward. A good leader correctly judges where to draw that line. Abusive managers backed by the delusion of an infallible system use it as an excuse to do as they like. And if you cut too many corners, you end up with a circular peg for a square-shaped organizational hole, which leads to even greater delays, aka excuses for further corner-cutting.

Chernobyl: Jared Harris as Valery Legasov explaining the next step of Dyatlov’s intimidation tactics. Turns out control rods are very important to reactor safety.

When this happens systematically for low-stakes processes, the outcome is unworkable operations.

The Inventor: Tyler Shultz and Erika Cheung describe working with the Edison machine at Theranos.

A surprisingly well-kept secret of tech companies is how many of their basic operations are carried out manually. I don’t just mean the publicized examples of content moderation or other machine-learning scandals. Manual execution is used for all sorts of fundamental processes, often in lieu of extremely simple automation. It’s baffling: you’d think any organization run by engineers would understand that automation is the fundamental value add of software. But most engineering managers drastically under-prioritize operational automation, preferring instead to focus on new features or code standards. These are both important, but the focus on them is another artifact of tech’s pure-code past. A company with major human operations must first get those operations right, meaning quickly repeatable, with a very low error rate and a minimum of manual intervention. In my experience, this is the most difficult perspective shift to ask of engineers, eng managers, and PMs.
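To make “extremely simple automation” concrete, here is a minimal Python sketch of the kind of script that can replace a recurring manual chore. The scenario, file names, column names, and tolerance are all hypothetical illustrations, not drawn from any company mentioned in this post: it reconciles two CSV exports and exits non-zero on any mismatch, so a scheduler can flag failures instead of a person eyeballing spreadsheets.

import csv
import sys

def load_totals(path):
    """Sum the amount column per customer in a CSV export."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + float(row["amount"])
    return totals

def reconcile(billing_path, ledger_path, tolerance=0.01):
    """Return customer IDs whose billed and ledgered totals disagree."""
    billed = load_totals(billing_path)
    ledgered = load_totals(ledger_path)
    return [
        cid
        for cid in sorted(set(billed) | set(ledgered))
        if abs(billed.get(cid, 0.0) - ledgered.get(cid, 0.0)) > tolerance
    ]

if __name__ == "__main__":
    mismatched = reconcile("billing_export.csv", "ledger_export.csv")
    if mismatched:
        print("Mismatched customers:", ", ".join(mismatched))
        sys.exit(1)  # non-zero exit so a cron job or CI run flags the failure
    print("All totals reconciled.")

The point isn’t this particular script; it’s that a repeatable check like this turns an error-prone weekly chore into something that runs unattended and fails loudly, which is exactly the property operations need before anyone piles new features on top of them.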

Chernobyl: Jamie Sives as Sitnikov reporting to power plant leadership in the wake of the explosion.

Finally, many have already pointed out a key similarity between the USSR and modern tech: overpromising and underdelivering on products. This is a natural consequence of unworkable operations, corner-cutting, and toxic hierarchy. Everyone is so busy dealing with the system that there’s not much energy left to make good product decisions.

What happens when that failure becomes apparent?

Parallel #5: Total dysfunction

Chernobyl: Dyatlov and Fomin making their initial explosion report to plant manager Bryukhanov.

The above gif is as close to tech-startup dysfunction as makes no difference. Notice how Dyatlov and Fomin both try to shift blame onto each other through the use of precise names and titles, and how Fomin agrees that the hydrogen tank ignited, with absolutely no independent assessment of the situation. When he says it’s the only logical explanation, it’s not only because he’s unaware of the RBMK reactor design flaw. He’s also agreeing that the tank explosion is the most suitable story for an early-morning meeting with an irate committee. In other words, it must be the right explanation simply because it’s the easiest one to say out loud.

In my experience, this reasoning is also a well-tested defensive posture. There’s very little accountability for people (well, for men) who worsen situations with an initially wrong assessment, as long as everyone feels that that wrong assessment was “logical.” (In tech this is summarized by the famous saying, “No one was ever fired for recommending Microsoft.”) Crises end up being treated as litmus tests to prove that everyone is working through the same thought process. This results in pathological secrecy, especially when anxiety over competition is factored in.

Chernobyl: Shcherbina and Legasov realizing that the Soviet state kept the critical reactor design flaw secret.
The Inventor: Douglas Matje, a biochemist at Theranos, on the paranoia he observed.

And once anxious, mentally inactive leaders realize how many secrets there are, they become even more fearful.

Chernobyl: Legasov and Shcherbina realizing the KGB is watching them.
The Inventor: Tony Nugent, a manufacturing engineer at Theranos, on the culture of constant surveillance.

Most startups will tell you that they’re absolutely not siloed, that they truly understand the utility of everyone being able to touch different parts of the business. But getting value from that approach requires better management than most tech leadership is capable of. So even if they’re committed to matrix management or full transparency, tech managers often end up finding new ways to silo people. For example, I once worked on a team where all the women had the same complaint about the team leadership. This took a year to figure out, because the managers told each woman that her problems were unique and that discussing them with the rest of the team would be unprofessional and pointless. Although we weren’t officially siloed, it became clear that we had been unofficially kept apart to make the managers’ lives easier, in much the same way that Theranos kept its teams apart to make its leaders’ lives easier.

The last aspect of this dysfunction is a pathological normalization of failure. After all, if everyone thinks the same way and the system is too big to change, then there’s literally no way to collectively learn from failure. As long as ideological orthodoxy is maintained, failures are written off as flukes, or excused as simply inevitable, but are never, ever a reason to change approach, unless that change is forced.

Chernobyl: Shcherbina on the many failed attempts at the safety test that triggered the reactor explosion, failures that prompted yet another attempt rather than, say, a voluntary shutdown and safety retrofit.

At this point, the system is no longer capable of success.

Me, writing this post under my real name.

Why do we want to believe in infallible systems? In large part, because it’s simply more work to be part of a system that one knows to be fallible. It requires much more self-awareness and willingness to engage with odd, new ideas. Most people don’t want to make that effort.

But I think there’s an even deeper reason. We all desperately want to believe the greatest lie of all: that there is such a thing as absolute human progress. Western societies expend enormous amounts of energy teaching their children a linear narrative of civilization, where each new development represents some kind of forward motion. Think about the kinds of history taught in K-12 schools — the curricula almost always begin with the earliest point from which it’s possible (with much revision) to present a linear development to the modern day. We don’t like to believe that regression is possible. We like even less the idea that almost all social change is an illusion.

Yet the hope that we can make true the progressive lie, that we can improve on ourselves with ourselves, is the driving force behind every major human phenomenon we now call an achievement. When we discover certain forms of the lie that hold their own against reality, such as penicillin, or competitive markets, or the ideal (if not always the execution) of democracy, we record both the bare knowledge and its historical context, to teach future generations to better judge where to draw the line between possibility and futility. The hope is that, over time, humanity’s aim will improve.

Because, of course, all we are ever doing is aiming in the darkness. The paradox of progress is that we must believe the lie, but in order to make it true, we must believe it knowing that it is a lie. And once you know that lies are necessary to progress, it is nearly impossible to limit your belief to only the right lies for the situation. The greater our delusion, the greater our capacity for both wild success and catastrophic failure — and since the success requires delusion, the failure is guaranteed.

Does this mean that all human enterprises are hopeless? In one sense, yes. There’s no fixing apathy, dominance, or bias towards conformity, the fundamental psychological forces behind systemic failures. After all, the number of human institutions that have survived since the beginning of history, let alone continued to function as intended, is precisely zero.

But if humanity’s flaws are here to stay, then so are our virtues. We have an innate drive to work together to improve the world around us however we can. What we must do is abandon the idea that it is possible for any of our improvements to be absolute. Just as there is no such thing as the best programming language, or the best human language for that matter, there is also no such thing as the best management technique, or the best company, governmental, or economic structure. There is only what works for the people you have, in the culture you have, with the knowledge of why certain ideas haven’t worked in the past, and the willingness to see the present for what it is and adapt accordingly.

To be concrete: for a system to remain functional and capable of success, those in power must see it as their responsibility to adapt to serve the powerless. This is the responsibility that causes the powerful to hope that maybe, if they hire more ML engineers, or randomize standup order, or implement quarterly cultural surveys, the right choices will magically be made for their team, at no risk to their own personal reputation. Tech’s idea that this is possible is based not in reality, but in the chance success of a few mid-’00s companies who happened to be the first to strike the right audience at the right time.

But there is no surefire, tried-and-true way to make good decisions. Success comes only when we are willing to believe that our choices (or lack thereof) have consequences, and when we are willing to put in the effort to ensure that those consequences are the right ones. We must take each other seriously enough to change our minds and behavior without being forced, and to face the resulting unknown with courage, resilience, and honor. Without this, we are doomed. With it, we stand a chance.
