Chickens lay eggs in a variety of sizes. We like to sell them in uniform boxes of small, medium, and large eggs.

Before machines, imagine how tedious it must have been for poultry-men and poultry-women to sort hundreds of eggs each day. “How lucky is the farmer who uses mules and plows,” they would say, “and how lucky is the miller, whose job is done by the mill.”

“How else would we fill egg boxes with eggs of the correct size,” they’d conclude, nodding at each other, “if we didn’t check them ourselves?”

For some, this nod would come with the sad acceptance of being condemned to a Sisyphean task. For others, it came with the dignity of doing a job that resisted the machines.

As often happens, boredom led to inspiration. I like to think that, one morning, after the 200th egg of the day, a chicken farmer had the following realization.

“Yes, we humans are really good at sorting eggs, but we are not necessary at all! Nature itself can distinguish small, medium, and large eggs. All we need to do is build a machine that allows nature to make the decision, in the same way that it decides an apple should leave the branch it is attached to and fall to the ground.”

While pondering this proposition, the chicken farmer would hold a pair of eggs of different sizes, one in each hand, and rhythmically throw them up in the air. Given his experience, he’d be able to throw and catch them without looking.

Eureka! Suddenly he would realize the size of eggs was directly related to another property machines could deal with more easily: their weight. At last, there was a way to sort eggs without looking at them. He would then assemble an inclined plane, three seesaw swings (each with a different counterweight at one end), and a conveyor belt into the first ever Egg-Sorting Machine. The conveyor belt would bring each egg to the swings, in turn. Whenever an egg’s weight was greater than the counterweight of the seesaw it was resting on, it would be deposited onto the inclined plane and roll into a group of similarly sized eggs.
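The farmer’s mechanism is, at heart, a cascade of weight thresholds. A minimal sketch of that logic in Python, with gram values that are purely illustrative assumptions (not real egg-grading standards):

```python
# Each "seesaw" is a weight threshold, checked in turn, heaviest first.
# The gram values are illustrative assumptions, not grading standards.
THRESHOLDS = [
    ("large", 63.0),   # tips the first seesaw
    ("medium", 53.0),  # tips the second
]

def sort_egg(weight_grams: float) -> str:
    """Return an egg's size class, as the seesaws would decide it."""
    for size, threshold in THRESHOLDS:
        if weight_grams > threshold:
            return size
    return "small"  # too light to tip any seesaw

# The conveyor belt: classify a batch of eggs into boxes.
eggs = [48.2, 55.0, 70.1, 61.5]
boxes = {"small": [], "medium": [], "large": []}
for egg in eggs:
    boxes[sort_egg(egg)].append(egg)
```

The machine never looks at an egg; it only answers a series of yes/no questions about weight, which is the whole trick the farmer discovered.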

Behold: Artificial intelligence!

Not only was the machine doing a task only humans were deemed capable of, it was completing that task with speed and accuracy far greater than a farmer could manage. Some farmers would receive this machine with unrestrained enthusiasm, others with skepticism, disappointment, and fear:

“What a devilish machine!”

“Unless it takes the eggs from the nest to the box, it is of no use to me.”

“You are going to put farmers out of work.”

“Hand-sorted eggs are surely better; people will see the difference.”

“Machine-sorted eggs will alienate farmers from their work.”

“What if the machine goes on a rampage? What if the controls break and the machine sorts eggs faster than you can unload it, the inclined plane collapses, and you are killed by an eggslide?”

You might be amused by the egg-sorting-machine doomsayers. You might even admit the possibility of death by eggslide while still questioning claims of this machine’s intelligence.

A growing number of people are worried about the moment machines will become superior to human beings. This moment is named the “technological singularity.” Elon Musk has said that A.I. is “a fundamental existential risk for human civilization” and that there is a concrete risk of “robots going down the street killing people.”

But an important distinction must be made. The singularity is not menacing because machines could kill people in city streets or anywhere else — that has been happening since the first industrial revolution — but rather because they would have the will to do it, on top of the capacity to do it effortlessly, like bandits “going down the street” in a Western movie, spreading violence and bullets against helpless peasants. Our greatest fear is that the machines, even before subduing us physically, will have beaten us intellectually. That they will look at us as we look at cockroaches today.

I do not dispute the possibility of individuals falling victim to machines in the near future. However, I believe such an event, however scary, would not be so different in nature from death by a falling rock, or by an eggslide. And I believe claims of machines wanting to kill us are as intellectually valid as claims of the egg-sorting machine wanting to kill the chicken farmer.

Every time a machine accomplishes a task we once believed to be inherently human, we reignite the debate around the promise and threat of A.I. The recent victories of AlphaGo — a DeepMind-built machine that has laid waste to the highest-ranking professional Go players in China, Korea, and Japan — bear this out.

The game of Go had resisted machines for two decades after the capitulation of chess. Because of this, it was regarded as an archetypal human game. Go was just too big for conventional machines to tackle, we thought. What human intuition gathers instantly from a board position would take machines entire minutes to discern.

The defeat of Lee Sedol in March 2016 changed the game entirely.

“Now that machines have surpassed us in Go,” I’ve heard, “it won’t be long until they surpass us everywhere.”

Similar reactions probably followed Deep Blue versus Kasparov in 1996, when a computer beat a reigning world champion for the first time under regular tournament conditions. One chess master who watched the game described Kasparov as a cowering loser: “Look at him, shaking his head under the cold, cold attack of the computer. I wish he could pull a rabbit out of his hat, but I’m afraid the rabbit’s dead.”

A computer beating humans at a game that had been one of the battlefields of the Cold War seemed to promise a reckoning for human society. Today, however, we play every day against chess programs on our smartphones that are greatly superior to the IBM machine, and we do so without fear or intimidation. In fact, we do not believe our smartphones to be smart at all, despite them being orders of magnitude more powerful and pervasive than the laboratory computers of the ’90s.

Some may argue that AlphaGo is a much more complicated machine than Deep Blue and that the two are incomparable. While it’s undeniable that the Go-playing machine is much better than the chess-playing one, they are not incomparable. We know exactly how much more complicated AlphaGo is than Deep Blue (and how much more complicated it is than an egg-sorting machine). What we don’t know is the distance between AlphaGo and the human intellect.

Every machine we’ve ever built is just a variation of an egg-sorting machine. Regardless of how many such machines we create or how many layers deep they become, what we get is still an egg-sorting machine — albeit a very complicated one.

As technology progresses, machines will continue to solve problems better than we can. It’s not beyond reason that our generation will see a machine write an award-winning novel (without needing Borges’ infinite library to store the random aggregates of characters produced as a byproduct).

But the greatest remaining divide between us and machines is not the gap in how correctly or efficiently we can solve a given problem. It’s our ability to find the problem in the first place. This distance between machine and human will not get any smaller — not because we’ll be able to preserve our superiority, but because we won’t stop feeling boredom, irritation, stress, anxiety, and anger.

These emotions, commonly regarded as negative, will always be the greatest catalysts for change. They force a feeling of unease and restlessness on us, making us hate the status quo we once loved. An automated writer might be successful, but will it ever stop writing to reflect on itself and decide to pursue a new style? Why should it?

Before concluding, I must make the following admission: Hidden behind my argument is another outcome which, however improbable, I cannot exclude.

I firmly believe that increasing the technical complexity of our machines will not produce anything that separates itself significantly from those machines. However, what if we humans are reachable along the path of this growth? What if a sufficiently complicated machine will be indistinguishable from a human — not because machines will have jumped over the distance that separates us, but because the separation was never there? We ourselves would be self-replicating machines, built and left behind, our idiosyncrasies the results of molecular fluctuations.

If that is indeed the case, a future iteration of the egg-sorting machine may just wake up one day and question the necessity of sorting eggs at all.