“In a welcome anticlimax to two years of dire warnings, the new millennium arrived over the weekend without a computer-driven meltdown of the globe’s electronic infrastructure.” So began a New York Times editorial on January 3, 2000, reflecting a collective sigh of relief that immediately followed the clocks turning over from 1999 to 2000.

New Year’s Eve 1999 was supposed to be like no other. In the months leading up to the start of the new millennium, panic rose over a programming error lodged within computers all over the world. The so-called Y2K bug was a problem with how computers and software stored dates: years were recorded as just two digits. And so, nobody was quite sure what would happen when the clocks rolled over from 99 to 00. What would the computers do, faced with a theoretical year double-zero? Would bank machines work? Would planes fall out of the sky? Would nuclear missiles be accidentally deployed?
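To see why a two-digit year causes trouble, consider a minimal sketch (not code from any real affected system, just an illustration of the arithmetic): any program that stores only the last two digits of the year gets nonsense answers the moment a calculation spans the rollover.

```python
# Toy illustration of the Y2K bug: with two-digit years, "00" reads as
# earlier than "99", so date arithmetic across the rollover breaks.

def years_between(start_yy, end_yy):
    """Elapsed years, as a program storing two-digit years would compute them."""
    return int(end_yy) - int(start_yy)

print(years_between("98", "99"))  # 1   -> an account opened in '98 is one year old in '99
print(years_between("99", "00"))  # -99 -> the same account is suddenly "negative 99" years old
```

Multiply that small error across billing systems, interest calculations, and scheduling software, and the scale of the worry becomes clearer.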

The answer to each, as the Times noted three days into the new year, was: no. None of that happened. And in the years since, there are two stories about why. Though the two accounts of what happened appear to be in opposition to one another, together they help explain not only our world nearly 20 years later, but also how we may have missed the real lesson Y2K could have taught us: one about networks, human error, and the danger of putting blind faith in the machines.


In the years since that New Year’s Eve, Y2K has become an enduring punchline. The whole incident is now remembered mostly as a non-issue whose overblown media hype was matched only by the massive amounts of money governments around the world deployed to solve it. That image of Y2K as a non-event persists in the cultural memory, used still to dismiss supposedly looming catastrophes in politics or technology.

But Y2K wasn’t just an over-exaggerated media-fueled mass panic. Behind the scenes, as people hoarded food and water, or joined doomsday cults, programmers worked tirelessly to prevent anything from going wrong. In the months leading up to 2000, there were genuine concerns within the IT world about the Y2K bug, and a subsequent concerted effort to avoid widespread problems when ‘99 switched over to ‘00.

To believe that Y2K amounted to nothing by chance alone — to believe that it was media hype and nothing more — is to “engage in a destructive, disparaging revisionism that mindlessly casts aside the foresight and dedication of an IT community that worked tirelessly for years to fix the problem,” Don Tennant, editor in chief of Computerworld, wrote in 2007.


But the fact that so many people feel this way amounts to a kind of weird triumph: the evidence for all the work is the absence of disaster. In his 2009 retrospective on Y2K, Farhad Manjoo concluded that the success of Y2K preparations “has bred apathy” — that the lack of Y2K armageddon has made it more difficult for people to heed warnings “about global warming or other threats…the fact that we fixed it may make it harder to fix anything else in the future.”

There might be yet another way to look at it.

Combined, the two narratives of what transpired on Y2K — that it was strictly a non-event, or that it was a non-event only because programmers were skilled enough to predict and avert it — actually bred something else: confidence.

Whether you believe Y2K was much ado about nothing from the start, or whether you understand that it was only so because of human intervention, the lasting legacy might not be one of apathy, but trust — both in the machines we created, and in our ability to understand and control them. Either the networks and systems we had created to that point were inherently designed to be strong and secure (or even indestructible), or we were readily able to predict and avoid areas of weakness.

Armed with this confidence, in the years since Y2K, we have created more and more complex networks and systems to enhance, guide, or even take over many facets of our daily lives. Whereas in 1999, many aspects of our day-to-day living remained offline, today little is left untouched by computer systems, networks, and code: Talking to friends and family, reading a book, listening to music, buying clothes or food, driving a car, flying from place to place — all of these activities depend on the network. Increasingly, the network extends to devices that, in 1999, were not considered to have much technological potential: household appliances like refrigerators or thermostats.

Now, we’re discovering what a false sense of security we’ve created. Along with it should come the realization of just how little we understand about the programs that permeate our lives and the networks that link them. Unlike 20 years ago, we appear less and less capable of predicting what will go wrong, or of stopping it before it does.


The network of modern tech infrastructure is a vast matrix of algorithms. The term “algorithm” once referred only to simple “if A, then B” code; today it also covers the programs that can “learn” by doing, residing at the core of modern intelligent systems. It can now mean “any large, complex decision-making system; any means of taking an array of input — of data — and assessing it quickly, according to a set of criteria,” as Andrew Smith explained recently at the Guardian.

Yet, as these decision-making systems have become more complicated — moving from simple “closed-rule algorithms” to more adept and adaptive machine-learning algorithms — the reasoning that guides them has drifted beyond our comprehension. Along the way, we’ve lost control of the outcomes they can generate.
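The distinction is easiest to see side by side. Here is a deliberately tiny, hypothetical sketch (the loan-approval scenario and every name in it are invented for illustration): a closed-rule algorithm whose logic can be read and audited line by line, next to a crude “learned” one whose decision lives in fitted numbers rather than in legible rules.

```python
# Hypothetical illustration: two ways of deciding the same question.

def closed_rule_decision(income, debt):
    """Classic 'if A, then B' logic: every branch is written down and auditable."""
    if debt == 0:
        return True
    return income / debt > 3.0

def train_learned_decision(examples, passes=200, lr=0.01):
    """A toy perceptron-style learner over (income, debt, approved) triples.
    The 'rule' it produces is just three fitted numbers, which explain
    nothing on their own."""
    w_income = w_debt = bias = 0.0
    for _ in range(passes):
        for income, debt, approved in examples:
            predicted = 1.0 if (w_income * income + w_debt * debt + bias) > 0 else 0.0
            error = (1.0 if approved else 0.0) - predicted
            w_income += lr * error * income
            w_debt += lr * error * debt
            bias += lr * error
    # The returned function decides, but the weights it closes over are opaque.
    return lambda income, debt: (w_income * income + w_debt * debt + bias) > 0
```

A three-parameter toy like this can still be picked apart; the systems described below fit millions or billions of parameters, which is what makes the question “why did it decide that?” so hard to answer.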

Take the financial markets. Though computer-driven trading was commonplace prior to the 2008 crash, in the decade since, algorithm-driven programs have proliferated throughout the sector.

“The convoluted but broadly explicable financial markets of previous generations have evolved into an electronic jungle of unfathomable complexity,” Robin Wigglesworth wrote at the Financial Times in 2017. The speed at which markets now operate has “accelerated markedly, and the number of market anomalies, odd trading patterns and mysterious ‘flash crashes’ have increased in tandem with the ascent of algorithms.”


Or consider the way many of us learn about the world.

YouTube’s algorithms have always been mysterious — but there’s growing evidence that they’re downright disruptive. For years, anecdotal evidence has shown that YouTube’s recommendation algorithms steer users toward increasingly extreme material, taking them down information and conspiracy wormholes, exposing them to weird theories about everything from world affairs to whether the world is flat. The algorithm was also accused of serving up disturbing children’s programming: videos auto-created from popular keywords that portrayed kids, or beloved childhood characters, in extreme and bizarre scenarios.

But Google, YouTube’s parent company, has been cagey about revealing how these recommendation algorithms work. Finally, last year, Guillaume Chaslot, a former engineer at YouTube, released research he conducted over 18 months that indeed “suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.” The point of YouTube’s algorithm is to drive more people to watch more content on the site. It’s likely that nobody expected the algorithms to decide extreme material would best achieve that end, yet here we are.

Algorithms are also confusing the justice system.

In 2016, ProPublica revealed that a risk-assessment program used by U.S. state courts to make bail and sentencing decisions was actually very bad at assessing who might reoffend or commit a violent crime. It was also racially biased, mislabeling white defendants as low risk more often than black defendants, and was “particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”

Yet, by the time ProPublica reported how inaccurate the risk-assessment scores could be, the program had already become “increasingly common in courtrooms across the nation… used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts… to even more fundamental decisions about defendants’ freedom.” In a number of states, for example, the scores were “given to judges during criminal sentencing.”


The examples go on and on. Our lives are now so intertwined with algorithms (think, even, of the programs that decide what you might watch next on Netflix or your route home on Waze) that there are few ways to escape them entirely. And fewer ways to escape their mistakes.

So pervasive are these programs that some believe we’re living at the beginning of a so-called revolution in artificial intelligence, as the use of algorithmic computer programs expands into every facet of our society. If that’s the case, we’re in a bad spot to comprehend what a code-driven revolution might really mean. For, in a weird paradox, the more computers are asked to do, and the more they “learn” about the world they’re revolutionizing, the less we know about them.


Worse yet, even trying to learn might lead us nowhere. The programs may already be too impenetrable.

“There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right,” Will Knight wrote at the MIT Technology Review last year. He goes on:

This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

In the last two decades, we have continued unabated in building ever more complex computer systems. We’ve linked them together across a network that dwarfs, in complexity and scope, that which existed in 1999. Along the way, we’ve deepened our faith in the ability of computers to make our world a better place, but obscured our understanding of how they work or the unexpected consequences they might create.

Y2K should have made us question our faith in the machines. It may have had exactly the opposite effect.

We rely on computers, and their coding, for so much — to show our children the world and inform us about what goes on in it; to help decide who loses money and who makes it, or who goes to jail and who doesn’t; to drive our cars and heat our homes; to help us find jobs; to help our doctors find out what’s wrong with us.

But we don’t truly understand how any of it happens. We just believe that it will.

In 1909, 90 years before the Y2K scare, E.M. Forster imagined an ultra-connected world of the future in his short story “The Machine Stops.” In that world, humans live in solitary underground pods, interacting only via telephone and personal screens. They talk to their friends, attend and give lectures on the world, but they only rarely leave their pods. Their every need is catered to by an omniscient and powerful central technological entity, referred to only as the Machine. “How we have advanced, thanks to the Machine!” the humans say to one another.

But when the Machine starts to shut down, the humans are at a complete loss. For a time, they assure one another that the network collapse they’re witnessing around them is a temporary setback. The Machine, they believe, will fix itself. It doesn’t, and the manual for the Machine that every human carries is too simplified and surface-level to be of any help. As people send their alarmed complaints to the Central Committee, it gradually dawns on them that there’s nobody on the receiving end. It’s just the Machine built on the Machine built on the Machine, all the way down.

Unable to live any longer underground without the help of the broken Machine, the humans make their way to the Earth’s surface, and breathe the air they’ve been deprived of for so long.

Exposed to an environment unmediated by machines, the fresh air kills them.