
The Algorithm Is Us

The latest Silicon Valley innovations are just people in computers’ clothing.

Wolfgang von Kempelen knew how to make an impression. He was introduced to the Viennese court when he was only twenty-one and talked himself into the job of translating the entire Hungarian civil code from Latin into German, a task he completed within a week. Kempelen quickly became a close advisor to the Habsburg empress Maria Theresa.

The empress took an interest in the cutting edge of scientific discovery, including the conjurers who appeared at her court, men who used rudimentary chemistry and physics to perform “natural magic.” Maria Theresa wanted to know: what did Kempelen think of their performances?

He thought he could do better.

Six months later, in the spring of 1770, Kempelen wheeled in a machine that would dazzle and confound audiences for the next century. Even after Kempelen’s death, his apparatus would travel across Europe and eventually tour the yet-to-be-conceived United States of America. It would give private demonstrations to Napoleon Bonaparte, Benjamin Franklin, and Edgar Allan Poe. And it would take 87 years for the public to learn exactly how the device worked.

Kempelen’s invention consisted of a large cabinet with two doors in the front and a sliding drawer underneath. Seated behind the cabinet was a man, carved from wood, dressed in a turban and ornate Turkish clothing, with one arm resting on a cushion and the other holding a long pipe. In front of the Turk was a chess board.

A natural showman, Kempelen opened the doors to reveal a mass of gears and mechanisms. He pulled out the drawer to reveal more of the same. When he held a candle behind the cabinet, the light shone through the mesh backing, proving there were no hidden compartments. Then he closed everything back up and invited one of Maria Theresa’s counts to play against the Turk.

Kempelen wound the machine using a crank behind the cabinet, then stepped away. The Turk’s head moved slightly, looking straight ahead at its opponent, then down at the chess board. Its arm lifted from the cushion, hovered over a pawn, and at last its wooden fingers clamped down around the tiny figurine. The Turk moved the piece forward, released it, and waited for the count to proceed.

Maria Theresa and the others had seen automata like this before. Even in the Middle Ages, clockmakers had developed elaborate statues that moved on the hour. Since then, their creations had grown more sophisticated. Earlier in the eighteenth century, a Swiss clockmaker used a rotating spindle to essentially program the movements of an automaton that could write, draw, and play the harp. Thirty-five years before Kempelen’s creation, a French inventor built an automaton of a young man that could play the flute. The artificial boy had lungs made of bellows that pumped air through its windpipe and out its mouth. So as Kempelen’s Turk moved chess pieces around the board, Maria Theresa’s court was impressed, but not entirely surprised.

The shock came when the Turk won.

Kempelen’s demonstration was an instant success. Though he tried to distance himself from the machine afterwards and move on to other projects, his fame left him no choice but to continue showing it in other European courts. The Turk was disassembled and rebuilt many times, and passed to other owners over the next several decades. Audiences had seen automata programmed to carry out a routine of playing music or writing, but no one had witnessed a machine that appeared to think the way the Turk did. Was this the next step, or rather a giant leap, in engineering capabilities?

British mathematician Charles Babbage was baffled by the Turk, but inspired to keep pushing the limits of what machines could accomplish. He designed a device he called the Difference Engine, and later the more ambitious Analytical Engine, which would use columns of wheels to carry out complex computations, taking its instructions from punched cards. Although Babbage never completed either machine, his work paved the way for the invention of computers in the following century.

But Kempelen’s Turk did not use any kind of programming or automation. His creation was no less ingenious, but it was built upon deception. The Turk’s doors and drawer were arranged to conceal a person hidden inside. Like a magician sawing a woman in half, Kempelen used showmanship to distract the audience while a chess master shifted into position within the cabinet. This hidden player controlled the Turk’s arm with a precise lever. When the opponent moved a piece, small magnets wiggled beneath the board; the player tracked their movements on a chess board of his own, lit by a candle whose smoke rose out from behind the Turk’s turban, masked by the incense burning in its pipe.

The cutting edge of mechanization and engineering was nothing more than a person hiding inside a tiny, smoke-filled compartment. We would eventually build robots that could move pieces on a board and computers that could triumph over chess experts. But our belief in the ultimate power of machines obscured the contributions of real people along the way.

We know the name of the man who built the Turk. We don’t know the chess master hidden within the cabinet.


Perhaps the Turk provided the inspiration for Shel Silverstein’s “Homework Machine” (from A Light in the Attic). For all its impressive gears and lights, the true source of the contraption’s output is revealed to be an overworked little boy buried within a tangle of wires.

Or maybe the easiest way for parents to explain to their children how electronics work is simply to pretend that there’s a person hidden somewhere inside every gizmo. That was certainly Calvin’s dad’s strategy.

But the grownups know better. We know that everything has a computer in it now. Our cars, our refrigerators. Everything is connected. The Internet of Things. There’s no person hidden inside the box. It’s Alexa. Or Siri. (But probably not Cortana.)

We understand how our news feeds are curated, how our faces are picked out of photos, how self-driving cars navigate the roads. It’s artificial intelligence. It’s an algorithm. The solution to any tech problem is always a more sophisticated algorithm.

Except that’s not really the case. It turns out that Calvin’s dad, Shel Silverstein, and Wolfgang von Kempelen understood a fundamental truth about technology better than anyone.

They knew that humans would always do the dirty work.


In 2008, Facebook began keeping an internal document outlining what types of content should be removed from the site. No posts with nudity. No gore (“nothing on the inside on the outside”). Back then, a handful of employees would review items that had been flagged by other users and take down anything deemed inappropriate. Now that Facebook has 2.23 billion monthly active users, that operation has grown.

Software can block some offensive words or links, but only in specific instances that determined users can too easily circumvent (see “pr0n”). When deciding how explicit content is, common sense and context matter: language used in hate speech, for example, could also appear in a credible news story. For that task, there is still no substitute for real people.
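To see the limits concretely, consider a toy version of such a filter. This is a minimal sketch for illustration only; the blocklist and example posts are invented, and it resembles no platform’s actual code:

```python
# A toy keyword filter: easy to build, and just as easy to defeat.
BLOCKED_WORDS = {"porn"}  # hypothetical blocklist, for illustration only


def is_blocked(post: str) -> bool:
    """Flag a post if any of its words exactly matches the blocklist."""
    return any(word in BLOCKED_WORDS for word in post.lower().split())


print(is_blocked("free porn here"))               # True: exact match caught
print(is_blocked("free pr0n here"))               # False: one swapped character defeats it
print(is_blocked("new study on porn addiction"))  # True: blocked even in a legitimate context
```

The filter fails in both directions: it misses deliberate misspellings, and it cannot tell abuse from a news story quoting it. That gap is where the humans come in.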

Platforms like YouTube and Facebook have a tiered system of content moderation. First, potentially unwanted material is flagged by filters or users. These postings are then passed along to outsourced moderation firms, which typically employ workers in developing countries like the Philippines and India, paying as little as $300 per month to sort through hundreds, if not thousands, of pictures and videos every day. It’s impossible to know how many third-party moderators are at work in total, but estimates run into the hundreds of thousands.

These employees work at long tables of computer banks, hidden within nondescript offices in towns like Bacoor, a half-hour outside of Manila. All day long they sit and stare at the worst things posted on the internet, confirming that yes, in fact, this is inappropriate. Vile. Disturbing. Videos of beheadings from terrorist organizations. Racist rants. Sexual exploitation of minors.

Content moderators are not employed only abroad. Material that is not clearly permissible or prohibited is passed along to a domestic tier of employees who can supply an American context for posts in the gray area. (Not that they necessarily do a very good job of this. Social networks are constantly under scrutiny for banning supposedly “explicit” material, especially from women, like photos of mothers breastfeeding or menstrual blood.) Nonetheless, moderators in the U.S. are subjected to much of the same hateful and disgusting content as their counterparts overseas. These employees, often young, accept work with third-party moderation companies hoping for a connection to future employment at places like Google and Microsoft. Instead, many of them are left with insomnia, anxiety, paranoia, and other P.T.S.D.-like symptoms.

As much as the giants of Silicon Valley would prefer to distance themselves from this waste-removal enterprise, they are well aware of the emotional toll that repeated exposure takes on moderators. Employees meet with counselors and take psychological exams. Yet burnout is extremely high, and some content moderators in America have filed lawsuits alleging that the support they were offered was entirely insufficient. But what choice do these social media platforms have? If they are going to operate as public squares for everyone, and especially if they are to remain appealing destinations for advertisers, then they need to keep their networks clean of content that would turn users away.

What about those algorithms? Surely there is an automated solution to the content moderation problem. Unfortunately, persistent users can still circumvent these filters, and in some cases the algorithms have made the problem even worse. Last year, a Medium article brought attention to the deluge of strange, creepy, and sometimes graphically violent children’s content on YouTube. These videos featured disembodied floating heads, Disney characters mutilated in escalators, animated children put in dryers, lots of crying, nonsensical English, and adults with pacifiers.

Not only did this clearly inappropriate content slip past YouTube’s parental filters; the platform’s own recommendation algorithm actively pushed viewers down an increasingly disturbing rabbit hole. Wired UK traced how sidebar recommendations led its reporters from a popular Bob the Builder alphabet song to a cartoon of Minnie Mouse murdering a zombie version of herself in only a dozen clicks. Some of these troubling videos had view counts in the hundreds of millions. After the barrage of negative press, YouTube quickly removed many of the offenders, shut down channels, and banned some uploaders completely. The crackdown erased billions of accumulated views.

But the problem was only brought to YouTube’s attention by vigilant journalists and concerned users. The algorithms fanned the flames instead of stamping them out. There is simply no substitute for human supervision.

Mark Zuckerberg and Sheryl Sandberg went before Congress on separate occasions earlier this year to answer for Facebook’s role in allowing misinformation and bogus accounts to influence voters leading up to the presidential election. The hearings also touched on the proliferation of hate speech and bigotry throughout the site. Facebook needed to act. Sandberg told Congress that the company had dedicated 20,000 employees to blocking improper users and flagging offensive posts. In response to YouTube’s recent content problem, Google appointed 10,000 workers to video review duties. Both companies also claimed that they were improving internal algorithms to act as stronger filters, but when the problem needed immediate action, they assigned an army of actual people to clean things up.

The hope is that this surge of employees will aid the machine learning process and ultimately create an artificial intelligence that can provide a permanent solution. Maybe the software will eventually become fully autonomous. But until Silicon Valley’s prized contraptions are able to function completely on their own, there will continue to be a cadre of real live people, crammed amongst the gears and wires, propping up the entire apparatus.
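In practice, “aiding the machine learning process” means that every verdict a moderator renders becomes a labeled training example. Here is a minimal sketch of the idea, using scikit-learn and invented data; real systems train on millions of examples and far richer signals than raw text:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each pair is (post text, moderator verdict): 1 = removed, 0 = allowed.
# These examples are invented for illustration.
labeled_posts = [
    ("graphic violence in this clip", 1),
    ("recipe for my grandmother's stew", 0),
    ("spread hate against those people", 1),
    ("photos from our family vacation", 0),
]
texts, verdicts = zip(*labeled_posts)

# TF-IDF features feeding a logistic regression: the model learns to
# imitate the accumulated judgments of the human moderators.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(verdicts))

print(model.predict(["more graphic violence in another clip"]))  # likely flags it for removal
```

Every improvement to such a model is distilled human labor. The people in the cabinet are not just running the machine; they are its training data.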


Google Translate teaches itself new languages by studying previously translated works. Self-driving cars learn from complex modeling of existing traffic patterns. We’re helping to train the AI that will replace jobs and entire industries. But we’re also creating a hidden and undervalued class of workers to support these systems. We’re elevating the status of artificial intelligence, while diminishing the real humans doing the labor. Professional translators are not compensated for teaching their replacements. Truckers and cab drivers won’t be paid by the autonomous vehicles that make them obsolete. The machines probably won’t replace humanity, but they could very well replace the middle class.

Leaving those pulling cash from the ATM, and those toiling away behind it.