How Silicon Valley optimizes marginalization

Reneé W
Feb 24, 2017

Build fast and optimize. This is a motto for Silicon Valley, & it feels benign, if not inspiring. But a series of recent articles & research into AI (often treated as the holy grail of Silicon Valley) suggests that in trying to automate, build fast, & optimize, we may omit the essential human morality that comes from stepping outside of a system & reflecting.

The aforementioned articles do well to articulate the problem with AI. Having primarily white men select the data that trains AI yields heavily white-biased outcomes. Optimization then runs inside a loop that grows progressively whiter. And unlike a society that includes the occasional social justice warrior willing to raise concern over the perpetuation of racism or sexism that is the inevitable course of a white patriarchy, there is generally no voice at all until or unless that AI winds up being consumer-facing. By marveling at the efficiency of big data, we fail to notice the bias inherent in the hands that direct & orchestrate these (literally) mindless systems.
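To make that loop concrete, here is a toy back-of-the-envelope sketch in Python. The numbers (a 60% starting majority share, 90% vs. 70% acceptance rates) are invented purely for illustration and describe no one's real pipeline; the point is only that if the people curating the data accept examples from the dominant group even slightly more often, and each new model is trained only on what was kept, the imbalance compounds every generation.

```python
# Toy sketch of how a selection bias compounds through retraining.
# All numbers below are made up for illustration; no real system
# or data set is implied.

majority_share = 0.60    # fraction of training data from the dominant group
accept_majority = 0.90   # how often the pipeline "accepts" majority examples
accept_minority = 0.70   # how often it accepts minority examples

for generation in range(1, 6):
    kept_majority = majority_share * accept_majority
    kept_minority = (1 - majority_share) * accept_minority
    # The next generation is trained only on what was kept,
    # so the imbalance grows instead of correcting itself.
    majority_share = kept_majority / (kept_majority + kept_minority)
    print(f"generation {generation}: majority share = {majority_share:.1%}")
```

Run it and the majority share creeps upward every round, with no single step ever looking dramatic. That is the quiet part of the loop: nobody has to intend the drift for the drift to happen.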

AI is just one case study causing buzz, but that conversation is geared more toward prevention (which is great). It does not address the concerns I have about the human cogs operating inside the Silicon Valley machine right now. The term uncanny valley fits Silicon Valley well: much of the human contact you are able to find within this realm feels nearly human, but not quite.

My exhibit A:

The past two days I spoke with customer service reps for Airbnb, a company that in many ways epitomizes Silicon Valley startups. They were incubated at the coveted Y Combinator and quickly took off from there to reach a valuation of more than $30 billion. The irony of my story is that in most ways Airbnb was built around increasing authentic human interaction. On the podcast "How I Built This," Airbnb co-founder Joe Gebbia tells his story of a college kid who just wanted to rent out his living room to folks & show them around the city. A mutual benefit that produces net social gains, the antithesis of the sterile hotel environment. An idea based on core altruism & social trust that my midwestern self applauded & adored. So how did their customer service experience evoke those eerie chills of listening to Siri speak in slang or talk about emotions?

The transcript of pre-programmed & artificial empathy started immediately in the phone call to Airbnb customer service, a number hidden deep in a decision tree of the "Help Center". All sentences began like the self-centered boyfriend/girlfriend's "I'm sorry you feel [insert your emotion here]", a means to acknowledge but mostly to distance themselves from any culpability for that emotion. They spoke in terms such as "how a reasonable person would behave" and "as an objective viewer". Lines that may have fared better if I were not a quantitative researcher in my 6th year (ugh, I know) of my PhD, at least proficient in objectivity and reason. I realized that these were words they'd been directed to use to make customers feel less confident in their feelings, most especially any marginalized population that has been conditioned their entire life to question whether their thoughts fall within the middle bounds of the distribution of reason, or whether they have the god-like skill to think with objectivity (guess what, those folks are well suited to see life through the eyes of an object).

My inner PhD self knew my complaint to Airbnb was valid. & yet it was paired against an android with a prespecified agenda & vocabulary, optimized to not refund me. He kept redirecting the conversation to groundless arguments about whether plywood could be kicked in, rather than hearing the fears that came from finding myself in a space that had incurred multiple forced entries through the bedroom window without any prior notification to me. Much like the biased AI, I was falling into a whirlpool, draining me into a void of mindless loops of optimization intended to exhaust me & make me abandon my position as the victim. The argument that "one cannot control for random situations" came up several times. Displacing blame so firmly from themselves & making the victim feel guilt for ever having made the complaint is a common maneuver in sexual assault/harassment, one brought to light in Silicon Valley recently with Susan Fowler's story about working at Uber. My experience is of course not equivalent to sexual assault, but the methodology of subverting blame has strong similarities & is highly efficient.

Having reached a stalemate with my uncanny valley representative, I decided to call back & voice my complaint to a new person. If this is a machine, more trials is my best bet. This second voice was technically different. A female this time. & yet the words echoed almost identically, albeit with a slight advance in reflecting my points back at me. I poured out my human case about real feelings of danger, & received canned sympathy. The revolving door was highlighted by the additional information that my case would always (no matter what) end up back in the hands of my original automaton representative. This baffled me. In data logic, one would never create a closed loop like this, as it is a flaw for optimization: no matter how much better an additional automaton's training data were, it would never be able to influence the greater system (i.e., the full customer service team & data). There was literally no way forward. No way out. I was always to be brought back to this one biased representative (data set named Cornell). This system was built to weed out those weakened by marginalization. He immediately identified my narrative as that of a victim & followed the designed path of making the victim feel absurd & blame themselves. & here I was, a tired female who had been through this before & felt ready to just give up.

Both AI and this Silicon Valley startup were told optimization is the key to success. & so the customer service rep optimized his way through my case in the same amoral way that we fear AI might. When the encoded goal is to avoid conflict and mitigate a problem, one quickly finds that the reward aligns with further marginalizing those who have been primed to see themselves as wrong (when I pushed back on this definition of "reasonable people," he said it meant the average person. How are we defining the average person? Is that really what we want?). We might not always have access or a voice in the trajectory of AI, but we do sometimes get the voice of the consumer. The voice of our dollar. We need to demand companies that actively seek equity in their treatment of employees and customers. Creating closed loops that embed & reinforce the marginalizing and discrediting of victims is not customer service. The motto "the customer is always right" may be imperfect, but let us borrow from the judicial system & say "the customer is right until PROVEN otherwise". In research jargon, let the customer's correctness be the null hypothesis.

Silicon Valley startups cannot continue to sacrifice human morals in their chase toward the mystique of perfect optimization. Or better yet, perhaps they could realize that one must optimize for the full spectrum of human experience with their apps or services, a task impossible enough to force engineers (or anyone involved in the design or execution) to take moments to reflect, & be mindful, not mindless.