Google Imagines AI Exists in a Values-neutral World, Promises Great Things as Long as Bad Things Never Happen

Peter Banks
The Trouble with Work
Nov 28, 2023

Last week, Google published a white paper entitled AI Opportunity Agenda, outlining what the company sees as essential steps to ensure that AI is properly used in societies around the world. It is a bland, largely useless paper that doesn’t begin to address anything of significance, while at the same time insisting that AI needs to be universally adopted and guided by “policy.”

The lead, buried in the middle of the paper, is that no one really has any idea how AI will impact human society, and work in particular. The paper expounds on what AI could do at a large scale and offers small-scale evidence of its benefits (with no mention of its failures, of which I'm sure there are plenty).

“We are still in the process of understanding what kinds of new skills AI-enabled work will require… But there are other open questions about AI’s impact on work that will need further study, such as how AI can best be used to support re-skilling, and how to minimize the risk of “skill atrophy” as routine tasks that previously provided training opportunities for novice employees are increasingly automated. Companies, civil society, and policymakers will need to constantly evolve skilling programs to address these questions and manage these transitions.”

There is an assumption, which I did not quote above, that AI will increase the need for “critical thinking, cross-disciplinary problem-solving, effective collaboration, and empathy” among human beings, though I still don’t see how AI will not subsume those efforts. A technology that constantly learns and makes fast, probabilistic decisions will quickly displace the humans doing that work; as anyone who has sat on a multi-disciplinary decision-making board knows, those groups don’t function well.

They do manage to insert empathy into that paragraph, but the fact that it isn’t a foundational part of Google’s vision tells me that empathy sits far down the list of things that matter for this technology.

The paper reaches this point after making absurd comparisons between AI and other technologies, most notably writing, the wheel, and the steam engine. However, Google also notes the importance of heavily regulating AI before it becomes ubiquitous.

This doesn’t sound like the wheel.

Instead, AI sounds more like the pharmaceutical industry: something with a fair number of genuine benefits, huge and dangerous downsides in many other respects, and a need for a mammoth regulatory establishment to limit those negative impacts.

But Google wants everyone to be skilled in AI.

They seem to have an idea of what being skilled in AI means. They refer to “democratizing” certain tasks, notably writing, language, and coding. By democratization, they mean that people with no ability in an area such as writing can use AI to write for them, and thereby gain the same capability as someone who writes well. The paper is very explicit about this.

“AI is already helping democratize access to skills and expertise such as coding, language and writing skills, and promises to enable more people to use productivity strategies that were once the exclusive provenance of workers at the top of the income ladder.”

I don’t know about you, but this doesn’t sound great to me. It sounds like the dumbing down of entire populations so that thinking is no longer necessary. By casting writing and language skills as elitist, Google can frame AI as a true leveler of the playing field.

So, we have a technology that is unpredictable, could be used for nefarious purposes, and yet is being portrayed as the potential great savior.

I am dubious, as you might have been able to guess.

The use of AI would probably be great for things like disease control and understanding weather and traffic patterns. What I don’t want is for AI to usurp thinking. Google wants us to believe that it won’t, that it will in fact allow us to focus our time on different avenues. But I think it’s clear that AI will quickly destroy most thinking work and will be heavily relied upon for almost all decisions.

We must remember that tens, perhaps hundreds, maybe thousands of companies around the world are already trying to create AI systems, some with the backing of state actors. There is really nothing of substance in place to regulate development. In the US, we’ve got a presidential executive order that can be thrown out at any moment.

We’re still at the beginning of this entire process, and we are moving naively. While Google urges us to look at the development and global impact of other technologies, it doesn’t specify which technologies we might study (besides maybe the wheel and writing) to develop “better systems” to cope with AI’s rise.

We don’t need to study the “future,” which we can’t really do anyway. What we can begin to do is capture data on AI’s impact on people’s lives right now, including how they work. We can also study how it impacts our thinking. We don’t need to wait: every data point we can collect is valuable, and we don’t need to cherry-pick the information that makes AI out to be wondrous. We need to move fast, because we will soon be so far behind on policy that we won’t ever be able to catch up.

I decided to ask everybody’s favorite machine learning/AI system, ChatGPT, for its take on AI, or rather, on how AI will destroy humanity. Yes, it’s a loaded question. But I think it did a pretty good job of answering in a way that shows empathy. The question and response are below. And, wow, threats to workers are just the tip of the iceberg.

ChatGPT demonstrated the upside the Google paper spoke about by producing better content than the paper itself. It outlines AI’s threats more succinctly while glossing over what to do about them in exactly the way Google did. So why didn’t Google just have ChatGPT write this white paper?

Q: How will AI Destroy Humanity?

ChatGPT:

AI has the potential to greatly impact society, both positively and negatively, but it’s essential to note that its destruction of human society is not an inevitable outcome. However, there are concerns and risks associated with AI that, if not properly managed, could lead to negative consequences. Some potential risks include:

1. Job Displacement: Automation driven by AI could lead to job displacement in certain sectors, impacting livelihoods and causing economic disruption if adequate measures aren’t taken to retrain the workforce.

2. Bias and Discrimination: AI systems can inherit biases present in the data they are trained on, leading to discriminatory outcomes, particularly in areas like hiring, lending, and law enforcement. If left unaddressed, this could exacerbate social inequalities.

3. Privacy Concerns: AI technologies often rely on vast amounts of data. If not appropriately regulated, the collection and use of personal data by AI systems could result in widespread privacy violations.

4. Autonomous Weapons: The development of autonomous weapons powered by AI raises ethical concerns about the potential for these weapons to operate without adequate human control, leading to unintended harm or escalation of conflicts.

5. Social Manipulation: AI-powered algorithms used in social media and other platforms have the potential to manipulate opinions, spread misinformation, and polarize societies if not regulated or monitored properly.

6. Ethical Dilemmas: As AI becomes more advanced, it raises ethical questions about the implications of creating systems that mimic human intelligence, consciousness, and decision-making capabilities.

7. Dependency and Security Risks: Over-reliance on AI systems without robust fail-safes could make societies vulnerable to cyber threats, hacking, and system failures, leading to significant disruptions.

It’s important to recognize that these risks are not predetermined outcomes but potential consequences that can be mitigated through responsible development, governance, and ethical considerations. Many experts advocate for the development of robust regulations, ethical guidelines, and proactive measures to address these risks and harness the potential benefits of AI for the betterment of society. Ethical considerations, transparency, and accountability in AI development and deployment are crucial in mitigating the risks associated with its advancement.
