Silicon Valley’s Naiveté: The YouTube Shooter, Culture, and Automation

AnthroPunk, Ph.D.
ITP Alumni
6 min read · Apr 5, 2018
Image: Nevada Test Site, “Subsidence craters” © Federal Government of the United States

The April 3rd shooting at YouTube, Inc. was a terrible tragedy. I preface this text with my deepest sympathy for those who were lost and for their grieving friends and family, and with sincere wishes for the swift recovery of those who were wounded or traumatized. This is not to be taken lightly. As such, I do not wish to refer to the shooter by name or gender. I will refer to them by their initials, “N.A.”

I’ve spent over two decades in Silicon Valley as an employee of various companies, and the last eight of those years studying the region for my doctoral research. I know the area fairly well: physically, socially, and digitally. I have a hypothesis about how YouTube was “disrupted” on April 3 by N.A., where that disruption came from, and the glaring structural holes this tragedy has revealed. No, it isn’t security per se, although that is certainly something to consider. There are several structural holes (cultural, digital, analog) that, when layered together, create the foundation for this mixed-reality crime, likely ignited initially by an algorithmic, automated customer service response system and its inability to deal successfully with human needs.

To a large degree, Silicon Valley companies have historically faced little accountability for the products and services they provide. Many have claimed over the years that “we’re just the carrier” (of people’s messages) or “we just make the devices,” and they speak of customers as “owning” their own content while using back-end mechanisms (as we are now learning) to monetize that content, or the transactions around it, for profit.

To date, the products Silicon Valley produces have had many physical-world repercussions, most prominently in enabling those supporting terrorism or seeking to join terrorist organizations, as well as those organizing for political change under oppressive regimes, or working to change laws that are ineffective at stopping societal harm.

While Silicon Valley’s productions have provided a conduit for others to take action in other locales, the people building these technologies have, to a large extent, been protected by the democracy of the United States, and by the policing and societal behaviors that keep its communities intact. Thus, by claiming only to be the “carrier” of these messages, Silicon Valley companies have not been directly exposed to extreme outcomes from their inventions. They have been able to draw upon a stable legacy system that keeps people (for the most part) from radical acts against the society in their proximity, insulated from the locales affected by their productions. In short, Silicon Valley companies are protected by geography, algorithms, and the cultural legacy of the region, all while enabling action in other locales, which may have no protections at all.

Human agency is manifest to the extent that we can make decisions from the actionable choices available to us as events unfold over time. Hackers, for example, take agency by finding the loopholes in a system (or set of systems) and exploiting them with programming knowledge. As many people working in the security industry know, one of the greatest vulnerabilities is human behavior, which can be unpredictable and hard to parse and frame. Within the cultural norms of Silicon Valley, police can be unintentionally led astray by those whose actions fit within those norms.
For example, for N.A.’s family, N.A.’s leaving home, much less sleeping in a car in Mountain View, was a huge cause for alarm. In Silicon Valley, however, where rents are high and people with great jobs actually do live in their cars, a police officer may see sleeping in a car as more of a “norm,” because there is a level of “peace” kept in Silicon Valley that makes these things seem rational within the culture. (Note: this does not cast blame on the Mountain View police; rather, it is a cultural rationale for their decision not to arrest or detain N.A. when they located them in their vehicle.)

Additionally, over the years, companies in Silicon Valley have developed elaborate algorithms and processes to automate human interaction with their systems. More often than not, there is no way for a customer or subscriber to reach a person who can address online problems as they arise, and if customers can talk with someone, the representative is bound by some form of automation within the system that they cannot control. User experience designers (if companies hire them) or programmers are complicit in creating automated systems in which they believe they have considered every angle and every possibility for what may go wrong, yet they end up deploying rigid automated feedback systems that almost never resolve issues outside the boundaries of what they envisioned. As with all rigid systems, the outlier cases that require human-to-human cooperation and the ability to exercise agency either never make it through the system, or are abandoned or routed to database dead ends, where they are never addressed.
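To make that failure mode concrete, here is a minimal sketch in Python of a hypothetical automated grievance triage pipeline. None of this is drawn from any real company’s code; the categories, keywords, and “dead-end queue” are illustrative assumptions.

```python
# A hypothetical sketch of the triage topology described above, not any
# real company's code. Categories, keywords, and the "dead-end queue"
# are illustrative assumptions.

from dataclasses import dataclass

# Canned responses for the cases the designers envisioned.
CANNED_RESPONSES = {
    "password_reset": "A reset link has been sent to your email.",
    "billing": "Your invoice history is available under Account > Billing.",
    "copyright": "Please submit the standard counter-notification form.",
}

# Keyword matching stands in for whatever classifier a real system uses.
KEYWORDS = {
    "password": "password_reset",
    "charge": "billing",
    "copyright": "copyright",
}

@dataclass
class Ticket:
    user: str
    text: str
    status: str = "open"
    response: str = ""

# The "database dead end": tickets land here and no process ever reads them.
dead_end_queue: list[Ticket] = []

def classify(text: str) -> str | None:
    """Return an envisioned category, or None for anything unanticipated."""
    for word, category in KEYWORDS.items():
        if word in text.lower():
            return category
    return None

def handle(ticket: Ticket) -> Ticket:
    category = classify(ticket.text)
    if category is not None:
        # An anticipated case: "resolved" with a canned response.
        ticket.response = CANNED_RESPONSES[category]
        ticket.status = "closed"
    else:
        # An outlier case: parked forever, with no escalation path to a human.
        dead_end_queue.append(ticket)
        ticket.status = "pending"
    return ticket

# A demonetization grievance matches no envisioned category,
# so it is never actually addressed.
ticket = handle(Ticket("creator", "My ads were removed without explanation"))
print(ticket.status)          # -> pending
print(len(dead_end_queue))    # -> 1
```

The particular heuristics don’t matter; the topology does. Every path terminates either in a canned answer or in a queue that no human ever reads.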

A psychologist I spoke with, who prefers to remain anonymous, told me that anger often comes from feeling that something rightly yours has been taken away without fair explanation or justification.

N.A. had come to rely on an income stream from a YouTube channel. For some reason, YouTube, or more likely YouTube’s algorithms, had decided to remove the advertising that provided N.A. with that income. Depending upon the person and their skill at navigating corporate bureaucracy and/or hacking digital systems, someone in the same predicament as N.A. could try to call YouTube, use YouTube’s grievance system (again, likely mostly automated), complain via other social media channels, or, if they had the skill, apply advanced digital hacking techniques to get attention. N.A. did not have sufficient digital hacking skills (many don’t), and likely tried the usual channels, which didn’t address their issues, or simply panicked, or felt enraged, and took a more radical agency option.

When someone’s livelihood is withdrawn from them, the stakes are higher, and they can panic or radically change their emotional state as a result. People who have other psychological or emotional issues can be triggered by these things as well, and those triggers can be very powerful. Either way, the loss of advertising revenue for N.A.’s channel was powerful enough to cause them to take agency by purchasing a gun, driving 500 miles, sleeping in a car, and shooting and wounding random people before finally committing suicide.

The broad lesson here for Silicon Valley is that its present cultural norms cannot protect it from the agency of actors outside the digital network. A grievance automation algorithm only screens (and helps) people who are willing to work within the online domain and the automated algorithmic system its processes rely upon. If people do not have sufficient digital skills, and are sufficiently triggered, they will work outside the online domain and those associated processes, and the agency they exercise may be unexpected and, as witnessed, potentially fatal.

To some extent, N.A. was onto something with their comment that YouTube’s censorship was worse than that of their home country, but the idea wasn’t fully articulated. Silicon Valley has risen to a level of elitism that parallels countries under authoritarian dictatorships: there is no means of negotiation, cooperation, or understanding when grievance systems are automated. When there is little customer support, and when one cannot negotiate with a provider that one relies on to live, this certainly mimics an authoritarian regime.

What Silicon Valley companies have discussed in the aftermath is securing their physical campuses, raised as a solution to prevent someone unknown from entering the workplace with a firearm. This is a good first step, but it will not solve a problem that arises from impenetrable algorithmic walls built to eliminate the negotiation and human cooperation needed to resolve customer service issues.

What companies can and must do is disrupt their one-sided approach to user engagement. They can do this by working with anthropologists and others who are experts in human sociability to create approaches to automation that are flexible enough to provide better choices for human agency, and that connect people who need a “human touch” to someone who can solve their problems rather than driving them to desperation.
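As a sketch of what that flexibility might look like in practice, here is one hypothetical design, again in Python; all names, thresholds, and routing logic are illustrative assumptions rather than any company’s actual system. The difference from the earlier sketch is that escalation to a person is a first-class outcome rather than a dead end.

```python
# A hypothetical sketch of automation that treats escalation to a person
# as a first-class outcome. All names, thresholds, and routing logic are
# illustrative assumptions, not any company's actual design.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8   # below this, the system defers to a person
MAX_AUTOMATED_ATTEMPTS = 2   # after this many round-trips, stop automating

@dataclass
class Grievance:
    user: str
    text: str
    attempts: int = 0

def automated_answer(g: Grievance) -> tuple[str, float]:
    """Stand-in for a real classifier: an answer plus a confidence score."""
    if "password" in g.text.lower():
        return "A reset link has been sent to your email.", 0.95
    return "We could not identify your issue.", 0.2

def human_queue(g: Grievance) -> str:
    """Route to a staffed queue where a person can act outside the script."""
    return f"Escalated to a human agent for {g.user}: {g.text!r}"

def handle(g: Grievance) -> str:
    answer, confidence = automated_answer(g)
    g.attempts += 1
    # The design choice: low confidence or repeated failure is an explicit
    # signal to hand the case to a person, not a reason to loop or drop it.
    if confidence < CONFIDENCE_THRESHOLD or g.attempts >= MAX_AUTOMATED_ATTEMPTS:
        return human_queue(g)
    return answer

print(handle(Grievance("creator", "My ads were removed without explanation")))
# -> Escalated to a human agent for creator: '...'
```

The design choice worth noting is that uncertainty and repeated failure become signals to hand the case to a human, which is precisely the negotiation path the rigid systems described above eliminate.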

A lack of engagement with people and a hands-off approach will no longer work in an era of so much human dependency on automated systems. Companies that want to prevent these types of tragedies within their own domain would be wise to learn how to be sociable and cooperative with people, some of whose lives and livelihoods depend upon the outcomes of their productions. Silicon Valley companies might just find that, by giving people the means to address unanticipated issues and thus inject new knowledge into the process, the value of their own products will grow ever larger.


(S.A. Applin, Ph.D.) AnthroPunk looks at how people promote, manage, resist, and endure change; how people hack their lives (and others’). http://www.posr.org