Stanford’s newly launched Institute for Human-Centered Artificial Intelligence: Which humans is it for?
On Tuesday, tech and political elites descended on Stanford University to launch a new center dedicated to “human-centered artificial intelligence.” Governor and presidential hopeful Gavin Newsom joined Bill Gates, ex-Yahoo CEO Marissa Mayer, LinkedIn founder Reid Hoffman, former Google CEO Eric Schmidt, Google.ai head Jeff Dean, and DeepMind CEO Demis Hassabis at the high-profile, livestreamed event. All lined up to support the initiative with speeches and panels devoted to the ethical future of artificial intelligence.
Also in attendance: Henry Kissinger and Condoleezza Rice, stewards of the military-industrial complex, architects of wars in Vietnam and Iraq respectively, and long-time Stanford power players. The two joined the leaders of the new center, Dr. Fei-Fei Li and John Etchemendy, for a private, undisclosed dinner.
This coming together of political and tech luminaries warrants a closer look. It’s worth considering why so many of the most influential people in the Valley decided to align with this center and publicly support it, and why this center aims to raise $1 billion to further its efforts. What does this center offer such a powerful group of people?
Peace of mind, in short. Over the past year, efforts by tech workers to organize against their own companies came to a head. Salesforce workers organized to stop the company from working with Customs and Border Protection, and Palantir employees asked their company to stop working with Immigration and Customs Enforcement. Microsoft workers want the company to stop developing virtual reality for the military.
At Google, thousands of workers wrote a letter against artificial intelligence for drones in the military, organized against a censored search engine for China, and walked out en masse to protest generous payoffs for executives accused of sexual assault. Executives have responded to this unrest again and again with appeals to ethics principles and internal review boards. These black-boxed processes substitute broad ideals for democratic accountability and remain out of touch with how these complex technologies already shape our lives.
Enter the Stanford center. By centering “ethical artificial intelligence,” these elites hope society will pay less attention to the military contracts, the development of surveillance technology, and the perverse incentives of capitalism. Instead, ethical AI turns public attention to “a better future for humanity through AI.” Through AI. The center reframes the debate away from existing harms and toward how artificial intelligence might augment and improve human lives. We’ll think less about how new artificial intelligence technology might decimate traditional work; we’ll focus instead on how many trillions it will add to the economy.
And where ethics proves insufficient to placate concerned citizens, Silicon Valley champions nationalism. Its leaders warn about the dangers of China’s rapid progress as a means to continue the uncurtailed development of their products. In reality, the only real danger here is to these individuals’ next billion. What matters to the people running the AI center is not what shapes the lives of Uber drivers sleeping in their cars or DoorDash drivers whose tips are used to fill in their basic pay.
The same executives who have been hammered for creating products that increase the lethality of soldiers or platforms that radicalize viewers will now “guide AI so that it has a positive impact on our planet, our nations, our communities, our families and our lives.” The same executives who refuse to recognize contract workers’ demands for living wages promise that “this new era can bring us closer to our shared dream of creating a better future for all of humanity.”
The people who spoke at the launch event and the staff at the Center are the very people who created the problems we face, problems brought to light only by diligent journalists, tech worker organizers, and the communities affected by these technologies. The composition of the Advisory Council is telling. Nearly half of its sixteen members work at venture and growth capital firms. The rest are mostly current and former CEOs and CTOs, and all but the ex-IBM CEO trained as computer scientists or electrical engineers. Is putting engineers and venture capitalists in charge of AI-and-society initiatives a bit like putting the fox in charge of the henhouse? Where are those who have been consistently blowing the whistle and patiently explaining the problems AI poses for human rights, labor, and criminalized communities? Stanford sports a lackluster record of producing knowledge about ethical AI, and we have no reason to place confidence in it now, even in Silicon Valley, where some people are entitled to fail serially and still attract investment and support. The makeup of the Advisory Council suggests that the Center serves the needs of investors and industrialists assessing risks and opportunities rather than the needs of society.
In competition with this Center and its objectives are the tech workers, who are organizing against the military industrial complex and capitalist structures that leave many workers, including those working for prestigious companies like Facebook, Uber, and Google, in poverty. They won’t be hosting events with Governor Newsom and Bill Gates, but they are legion. If the #TechWontBuildIt movement continues to grow, workers may walk out for longer periods, and disrupt business as usual. The prospect of tech workers deploying traditional labor tactics strikes fear into the minds of the Stanford cabal.
It’s unclear right now who will win, but what is clear is that this is as much about messaging as it is about technology. It’s a war of words between company management and the workers who build the technology. From this week’s display of power, it’s clear which side the Stanford AI center is on.