Alexa, Pardon Me: The Tension Between Anthropomorphization and Subservience in MI

Hypergiant
5 min read · Feb 19, 2018


Written by Ross A. McIntyre, VP of Strategy at Hypergiant

My son, Rex — he’s 4 — doesn’t actually know where the light switch is in the living room.

We use Alexa to control our Philips Hue lightbulbs, so he is accustomed to saying, “Alexa, lights on.” This amuses him tremendously. It’s usually followed by “Alexa, what’s the weather today?” or “Alexa, play ‘Wiggle.’” (Yes, he’s being weaned on Jason Derulo and I feel slightly queasy about that.) I’m still working on getting him to be polite to Alexa, as I believe that courtesy should be reflexive. I think it’s a miss on Amazon’s part not to leave a window of listening time after Alexa reports back, so that we can say “thanks,” ask follow-up questions, or refine the search. I’m concerned that we are teaching a generation to accept a servile class — that eliminating common courtesy from our queries and requests will have a detrimental effect on society in the long term. Chelsea Berler, CEO of the Solamar Agency, posits in a piece on Huffington Post that a technology-first society risks losing courtesy and respect in exchange for savings in time and money.
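As an aside for the technically inclined: Alexa’s custom-skill response format already lets a developer keep the microphone open after a reply by setting shouldEndSession to false, with an optional reprompt if the user stays silent. The built-in commands my son uses don’t expose that choice, so the handler below is only a hypothetical sketch of what a more polite light-control skill could return, not a description of our actual setup.

```python
import json

def build_lights_on_response(speech_text="Okay, the lights are on."):
    """Return an Alexa Skills Kit response that keeps the session open
    briefly, so a 'thanks' or a follow-up question can still be heard."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            # Played only if the user says nothing during the listening window.
            "reprompt": {
                "outputSpeech": {"type": "PlainText", "text": "Anything else?"}
            },
            # False = leave the microphone open instead of closing immediately.
            "shouldEndSession": False,
        },
    }

if __name__ == "__main__":
    print(json.dumps(build_lights_on_response(), indent=2))
```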

We’ve all seen what has happened to the standards of our country’s discourse — be it political or interpersonal. A move away from the dreaded “political correctness” seems to have dragged down courtesy in its rush to the sociological drain, both of them pulling whiskers, toothpaste, nail-clippings, and loose poppy seeds along with them. (Full disclosure: I’m not sure what those represent metaphorically, so you’ll have to use your imagination.)

I can appreciate expediency. I’m an ex-New Yorker and I’ve always felt that the view of city-dwellers as rude was a misapprehension. Just because a passerby answers your request for directions over her shoulder rather than stopping to pore over a map with you does not mean she is rude. It usually means she has somewhere to be; people in NYC are just busy.

Look: I’m not declaring that we should be forced to say “excuse me” to Siri as if she might be busy doing something else. G.A. Johnston put it thusly: “The moral life is a unity; and a breach of courtesy, like an offense in any other department of morals, is ipso facto a transgression of morality.” Also: “Any decay in manners really amounts to moral degradation.” (5) He held that manners are conscious acts but not self-conscious ones. From that perspective, it is reflective rationality and the ability to provide epistemological, solipsistic meta-commentary that make manners a requirement. Absent these characteristics, however, machine intelligences have no innate requirement toward ethics, much less manners. Should we hold AI to the same philosophical strictures we demand of our children?

One way to sidestep such considerations with technology is to curtail anthropomorphizing, an unintended side effect of Turing’s eponymous test, which has driven our quantification of the generalized intelligence of AI for decades. One critique holds that “(Turing’s) Computing Machinery and Intelligence has led AI into a blind alley. … If we focus future work in AI on the imitation of human abilities, such as might be required to succeed in the imitation game, we are in effect building ‘intellectual statues’ when what we need are ‘intellectual tools.’” Turing was himself guilty of this trend — his P-type unorganized machines were anthropomorphized despite being merely “paper machines.” In the brilliantly titled Artificial Intelligence Meets Natural Stupidity, Drew McDermott wrote of the perils of using “wishful mnemonics” to name data structures and procedures, i.e. terms such as “understand” and “goal.” (2) If a researcher “calls the main loop of his program ‘UNDERSTAND,’ he is merely begging the question. He may mislead a lot of people, most prominently himself, and enrage a lot of others.”
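To make McDermott’s point concrete, here is a small, hypothetical sketch (the function names are mine, purely for illustration): the same keyword-matching routine reads very differently depending on whether it carries an honest name or a wishful one.

```python
def match_keyword_to_intent(utterance: str) -> str:
    """Honest name: this is a crude keyword match, nothing more."""
    text = utterance.lower()
    if "weather" in text:
        return "WeatherIntent"
    if "lights" in text:
        return "LightsIntent"
    return "UnknownIntent"

# The wishful mnemonic McDermott warns about: identical code, grander name.
# Calling it "understand" begs the question of whether anything is understood.
understand = match_keyword_to_intent

print(understand("Alexa, what's the weather today?"))  # -> "WeatherIntent"
```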

Are we, by striving to make machine intelligence “human-like,” introducing bias in the form of Subordinate Service Roles (SSRs)? Or is that a non-issue as long as we don’t create a system that is capable of psychological stress? Should we, in fact, be striving to retain the differentiated skills and capabilities inherent to the machine and the human? In other words, should we accept that there will, for the foreseeable future, be things that a human mind does more capably than a machine intelligence and vice versa, thereby entering into a symbiotic relationship wherein the SSR constantly alternates?

I don’t believe that machine intelligence, by and large, is going to replace humans in every capacity. We are a long way from the corpulent meat-sacks floating about in their hover chairs in WALL-E. I expect that it will be utilized to enhance human performance. My perspective is that a machine intelligence won’t take your job, but a person who leverages the capabilities of an MI will. Bryan Johnson, a TechCrunch contributor, describes the progression thusly:

“The evolution of human tools, from rocks to AI, can be seen as a trajectory of increasingly powerful effort arbitrage, where we exploit our comparative advantage relative to our tools to do things better, and do more new things. Along this trajectory, tools that embody significant levels of intelligence are our most powerful yet.”

Human Intelligence (HI) can build new intelligence and, for the moment, that’s a pretty big differentiator. Conversations about the ethics of MI are becoming commonplace, but rarely do we talk about MI morality. By eliding culturally accepted norms of good manners, are we doing ourselves and the machine intelligences we create a disservice?

From a metaphysical standpoint, anthropomorphizing may be a reflection of our innate self-centeredness. We like to imagine that our own artifice is honest, not driven by a conviction that humans are innately superior. By artificially imbuing machines with emotion and personality, we are, somewhat presumptuously, taking on a paternalistic role. We become attached. We become protective. And we imagine that MI is like HI. If fatherhood has taught me anything, it’s that the best way to lead is by example.

I often tell clients that if you don’t start planning now, in five years you are going to find yourself with a dumb, ugly baby. What I mean by that is that companies need to lay the proper foundation for applying Machine Intelligence today. Educating their MI needs to start today. Feeding and caring for their MI needs to start today. And if we make these systems unflaggingly moral and unassailably mannered, we might have a chance to drive the meat-sacks back toward common courtesy.

Thank you, Alexa.

I mean, “Alexa, thank you.”
