Drink Before Your Surgery, Says Google

Jonas Persson
6 min read · Feb 6, 2018


“Death and the Apothecary” by Thomas Rowlandson (1757–1827)

We all Google our symptoms, but it can be difficult to tell the quack from the trailblazing physician, and the religious cult from the cutting-edge clinic.

At first glance, the website for the American College of Pediatricians has all the trappings of a professional and authoritative trade association, just like the similarly named American Academy of Pediatrics. And yet, the former is an anti-LGBT organization that advocates for so-called “conversion therapy” and is designated as a hate group by the Southern Poverty Law Center.

Even if we confine ourselves to reading peer-reviewed medical journal articles, it is easy to get confused or freaked out. Published case studies do not, after all, describe the common course of common illnesses (they would not be very interesting if they did). Instead, you will find all kinds of syndromes — as deadly as they are obscure, hyphenated, and eponymous — masquerading as, say, the common cold or pneumonia.

Teaming up with Medical Experts…

With all the misleading and potentially dangerous information a mouse click away, Google has teamed up with the medical community to offer curated information about common illnesses. If you search for, say, psoriasis, you get an expandable info box with all kinds of useful information on the skin condition, including various treatments. All of this has been reviewed by medical experts.

This is all well and good, but if you search for medical information that is not specifically about diseases, things get dicier. In many cases, Google throws caution to the winds and lets its spiders and algorithms don the white coats — sometimes with disastrous results.

…Only to Wing It

Last year, I asked Google if it was OK to have a glass of wine or two in the days leading up to surgery. Not only was it OK, Google blithely informed me, it was mandatory. I should drink alcohol “at least two days before” my surgery:

The views and opinions expressed here may or may not be those of Google.

Google even provided the source for this unorthodox recommendation: Nebraska Medicine. Did a reputable medical organization really recommend that people booze up before going under the knife? After all, this is not the 1600s; there are safe and painless anesthetics, and there is no need to chug a bottle of gin and chew on a leather strip to numb the pain.

It turned out that Nebraska Medicine actually warned against drinking before surgery. Clicking on the link opened a bullet list prefixed by a large “DO NOT.” No one looking at the original web page would miss the conspicuous negation. No human, I should add. The algorithm did. And while the Google snippets are often remarkably pertinent, the software that extracts them from the web is far from perfect.
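To see how a snippet can shed its negation, here is a minimal sketch of my own. It has nothing to do with Google’s actual pipeline, and the markup is invented, but it shows how an extractor that matches query terms against individual bullet points never even sees the “DO NOT” heading that governs the whole list.

```python
# A toy illustration (not Google's actual system) of how a naive snippet
# extractor can strip the negation that gives a bullet list its meaning.
from bs4 import BeautifulSoup

# Hypothetical markup loosely modeled on a pre-surgery instructions page.
HTML = """
<h2>DO NOT do the following before surgery:</h2>
<ul>
  <li>Drink alcohol at least two days before surgery</li>
  <li>Smoke or use nicotine products</li>
</ul>
"""

def naive_snippet(html, query):
    """Return bullet items that match the query, ignoring surrounding context."""
    soup = BeautifulSoup(html, "html.parser")
    items = [li.get_text(strip=True) for li in soup.find_all("li")]
    # The matching looks only inside each <li>; the "DO NOT" heading that
    # negates the whole list never makes it into the extracted snippet.
    return [item for item in items if query.lower() in item.lower()]

print(naive_snippet(HTML, "alcohol"))
# ['Drink alcohol at least two days before surgery']  <- the negation is gone
```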

There are frequent hiccups, as when Google confused Jerry Lee Lewis with his almost-namesake Jerry Lewis, and assumed that the former was dead. Or when New York Times writer Rachel Abrams found out about her 2013 demise in a Google “Knowledge Graph” (the panel that shows up on the right side of the search window with information culled from various sources). Like Mark Twain before them, they took these snafus in stride. When it comes to medical information, however, things are more serious, and there is a real risk that people will get hurt.

Equally problematic: the analyst at Nebraska Medicine, whom I contacted to point out that Google had mangled their web page, had no way of reaching Google other than the Feedback button, which submits a report that probably will not be read by a human until the number of reports about the same issue reaches a critical mass. In the end, the analyst had to scramble to change the original page in the hope that Google’s spiders would discover the update on their next crawling tour.

Guns kill people; so do algorithms

If there is a silver lining to the developing Russia story about electoral manipulation, it is that we now have a robust and productive discussion about how vulnerable algorithms are — from Facebook’s news feed to YouTube’s recommendations — and how “bad actors” can game the system. We are also becoming increasingly aware of the problem with incomplete or biased (often racially so) data sets used to train the algorithms. When Google launched an automatic system to tag pictures back in 2015, a black programmer was shocked to discover that he and a friend had been tagged as “gorillas.”

In the case of manipulated news feeds or racist tags we tend to put the blame squarely on the people who gamed the system or on those who chose the data set (often blinded by white STEM privilege) used to train it. When Microsoft’s Twitter AI chat-bot turned into a raving Holocaust denier, we collectively shook our heads at the trolls who baited it.

Garbage in, garbage out. The algorithm itself is above the fray. If it is not a force for good, it is — at the very least — a neutral vehicle that can be used for good and bad. Where have we heard this before? It sounds very much like the NRA cop-out: “Guns don’t kill, people do.” Except guns do. Between 2014 and 2016, 1,000 children were killed or injured in freak gun accidents. And there is a “robust correlation” between high levels of gun ownership and high homicide rates. Guns affect our behavior. They do kill people.

Liberals are good at debunking the flimsy NRA narrative, but often refuse to question the idea of algorithmic neutrality. This is understandable. Tech entrepreneurs are prodigious donors to the Democratic Party and its candidates. And besides, who wants to be branded a Luddite for questioning the march of progress?

And yet, there are two major problems with the current paradigm of AI applications. These have to do with the algorithms themselves, and not with the data — biased or not — we use for training purposes.

First, these applications (often so-called neural networks) are not politically agnostic; they are inherently conservative. They predict and shape the future by regurgitating patterns from the past. Or in the words of Cathy O’Neil:

“Automated systems … stay stuck in time until engineers dive in to change them. If a Big Data college application model had established itself in the early 1960s, we still wouldn’t have many women going to college, because it would have been trained largely on successful men … The University of Alabama’s football team, needless to say, would still be lily white. Big Data processes codify the past. They do not invent the future.” (Weapons of Math Destruction, pp. 203–204)
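To make O’Neil’s point concrete, here is a toy sketch of my own, with no connection to any real admissions system: a small classifier trained on invented “historical” decisions in which women with the same test scores as admitted men were turned away. Ask it about two identical applicants today, and it dutifully replays the past.

```python
# A toy illustration, not any real admissions model: a classifier trained on
# made-up "historical" decisions reproduces the bias baked into them.
from sklearn.tree import DecisionTreeClassifier

# Features: [test_score, is_male]; label: 1 = admitted in our invented past.
X = [
    [85, 1], [90, 1], [78, 1], [92, 1],   # men with strong scores: admitted
    [85, 0], [90, 0], [78, 0], [92, 0],   # women with the same scores: rejected
    [60, 1], [55, 0],                     # weak scores: rejected either way
]
y = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two applicants with identical scores, differing only in the gender flag.
print(model.predict([[88, 1], [88, 0]]))  # -> [1 0]: the past, codified
```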

A second problem (and this was clearly the issue here) has to do with the fact that these systems lack common sense. They excel at finding statistical correlations — often with remarkable results — but have no conceptual knowledge of the world. With all the hype surrounding AI, let us not forget how stupid these systems can be.

No human in their right mind would recommend boozing up before surgery; no market consultant would tell would-be advertisers to cater to the demographic of “Jew haters.”

And yet, to Google’s and Facebook’s algorithms, these things made perfect sense.

During the congressional hearings in April, Mark Zuckerberg touted “AI” as a panacea for protecting against foreign interference and moderating content. But he tacitly admitted that the current technology is not up to snuff, and committed to hiring 10,000 human moderators.

When will Google come to the same realization: that their algorithms are woefully inadequate when it comes to medical and other sensitive topics? Lives are at risk.

