Harnessing AI to Transform Society

Economies and corporations can use new technology to improve pluralism, advance education, and create a more diverse workforce

CEO Summit Journal
Nov 9, 2023

By Cristina Aby-Azar | Global Gateway Advisors

Glen Weyl, an economist, chair of the Plurality Institute, and founder of the RadicalxChange Foundation, spoke to the CEO Summit Journal about the threats and opportunities posed by the proliferation of artificial intelligence. The concentration of power gained by model developers such as OpenAI is unprecedented and potentially dangerous, he says. However, moments of threat have led to some of the most important advancements in society, such as the creation of the United Nations after World War II.

“We can’t get through all of this by defending our existing systems. We can only do it by harnessing these technologies to innovate a future with far more advanced, far more rich, far more diverse, far more powerful and interesting forms of democratic participation in pluralism than currently exists,” Weyl said. He believes education systems must be adjusted to prepare a new workforce where the most valuable skills involve cultural intersections and personal passions.

Below is an excerpt from our conversation, edited for length and clarity.

Photo courtesy of Christopher Kulendran Thomas and Laura Weyl

CRISTINA ABY-AZAR: In a book you co-authored with Audrey Tang, you talk about the risks AI poses to democracy. You talk about “anti-social” and “centralizing” threats. Can you elaborate on these threats?

GLEN WEYL: A useful way to think about this is a framework proposed by economists Daron Acemoglu and James A. Robinson. They talk about a narrow corridor in which free societies can exist between chaos and authoritarianism. The real question is what does technology do to the width of that corridor?

What we’ve seen recently is a very challenging time trying to avoid one or the other of those threats. On the one hand, you have the development of large models by labs like OpenAI, with philanthropic funding, et cetera, and a historically unprecedented concentration of power.

OpenAI had a total of about 200 to 400 employees, only about 100 of whom were on the core engineering team of a product that is arguably more important than what came out of the Manhattan Project [the U.S. government research project that produced the first atomic bombs during World War II], which had tens of thousands of engineers and more than 100,000 employees, despite being a top government secret.

Attitudes [today] are being determined by this tiny set of people, and that is very concerning from a legitimacy perspective. It’s concerning from a democracy perspective; it’s concerning from a centralization of power perspective.

The ability of these models to potentially undermine the foundations of social order is really remarkable. They are increasingly passing the Turing test, created in 1950 to test a machine’s ability to show intelligent behavior equivalent to that of a human. That means that we’re going to have digital systems that can undermine pretty much all foundations of trust and authentication that we rely on. They can already create Zoom or Teams filters that can turn me into Sam Altman or Satya Nadella, that can interact realistically in their tone of voice, in their style of speaking. These models are going to be able to produce interactive tailored misinformation. They are going to be able to put in the hands of pretty much everyone around the world a Sherlock Holmes, an agent, who can look at all the public signals available about you and determine what’s most likely your password, what’s most likely the answer to your security questions. We knew that [certain] agencies probably could do that in the past, but now [the data] is going to be distributed into the hands of people around the world, unless you have very centralized controls and checks on who has access to these technologies.

This is the sort of Scylla and Charybdis we are facing. On the one hand there’s incredibly tight and largely democratically illegitimate control and development of these technologies. On the other hand, as you start to relax those things, as you start to create these open-source models that more people have access to and can customize, you could undermine so many of the foundations of trust, social capital and democracy that exist today. The real question is, how can we simultaneously avoid these scenarios?

CAA: So, what is your road map?

GW: It is usually in the moments of greatest threat that democratic systems make their greatest advances. The [American] Civil War brought about arguably the world’s first multiracial democracy. World War II created the United Nations and the greatest decolonization efforts in history. So, in my view, we can’t get through all of this by defending our existing systems. We can only do it by harnessing these technologies to innovate a future with far more advanced, far more rich, far more diverse, far more powerful and interesting forms of democratic participation in pluralism than currently exists. And, what we outlined in the book is a strategy for doing that. There are all these risks, but there’s also a chance of creating democracy that we’ve never imagined before. Audrey Tang is already making this a reality.

CAA: Can you give us an example of what Tang is doing that others should be following?

GW: I’ll give you one very simple illustration, and in fact, it’s very primitive. But that is what’s so cool: how much can be achieved even with a very primitive version of these technologies. [The government] has an online-offline consultation process that operates a bit like X [the platform formerly known as Twitter], on which people give short responses to prompts about some policy question. But rather than serving up to people the content that they’re likely to find most interesting or engaging, it gives them a sense of the overall dynamics of the conversation, the different opinions in the conversation, and what bridges those different perspectives. So, what do I mean by bridging? In any political discussion there are going to be differences of opinion. That’s the essence of politics, right?

Those differences define opinion groups. On political issues, there might be Republicans and Democrats and Libertarians, or whatever; on issues within a company, there might be different divisions that have different perspectives. These different groups will tend to have clustered perspectives on the topics being discussed. What the system does is identify what those clusters are, and then show which things people agree on across those social differences, and which things divide them. So, it gives people a sense of which groups they might work with if they want to advance some cause that not everyone agrees on, but also the points on which there is already general agreement and on which they can move forward in the future. This has been a very effective system that has helped officials to reach resolutions on a range of policy issues, from the regulation of Uber to gay marriage.
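The clustering-and-bridging mechanism Weyl describes can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the actual platform's algorithm: the toy vote data, the hand-assigned clusters (a real system would derive opinion groups by clustering the participant-by-statement vote matrix), and the 60% agreement threshold.

```python
from collections import defaultdict

# Toy model of a consultation: each participant votes +1 (agree),
# -1 (disagree), or 0 (pass) on short statements.
votes = {
    "p1": {"s1": 1, "s2": 1, "s3": -1},
    "p2": {"s1": 1, "s2": 1, "s3": -1},
    "p3": {"s1": 1, "s2": -1, "s3": 1},
    "p4": {"s1": 1, "s2": -1, "s3": 1},
}
# Opinion groups, given here by hand for illustration.
clusters = {"A": ["p1", "p2"], "B": ["p3", "p4"]}

def agreement_by_cluster(votes, clusters):
    """For each statement, the fraction of each cluster that agrees."""
    rates = defaultdict(dict)
    for name, members in clusters.items():
        for s in {s for p in members for s in votes[p]}:
            agree = sum(1 for p in members if votes[p].get(s) == 1)
            rates[s][name] = agree / len(members)
    return rates

def bridging_statements(votes, clusters, threshold=0.6):
    """Statements a majority of *every* cluster agrees with --
    the common ground the system surfaces first."""
    rates = agreement_by_cluster(votes, clusters)
    return sorted(s for s, r in rates.items()
                  if all(v >= threshold for v in r.values()))

print(bridging_statements(votes, clusters))  # s1 unites both clusters
```

Here s2 and s3 are divisive (each cluster splits on them), while s1 crosses the divide, so it would be surfaced as a point to move forward on.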

One quarter of the country’s population are active monthly users [of the platform]. If you think of social media, everyone knows there’s a lot of heat there. But the important thing to remember is that heat is generated from energy. And, if you have an engine rather than just noise, you can turn that energy into work. You can turn it into things that solve public problems rather than into heat.

CAA: Are there other examples of organizations using a similar model?

GW: Absolutely, and, by the way, there’s nothing all that mysterious about this. This is the kind of perspective that built Wikipedia, which all of us rely on every day. This is the kind of perspective that is the foundation of the open-source software movement, on which most of the tools we rely on today depend. We have examples of this kind of thing working all over the place. The problem is that public sectors and corporations haven’t leaned into supporting them.

On GitHub [the platform and cloud-based service for software development], for example, there are hundreds of millions of people donating their time to produce secure technology for the public interest that supports people around the world in the open-source community. Some governments are involved a little bit, but if you think about how public money and philanthropic money is spent, it is not to support these digital public goods that people are offering around the world.

CAA: Shifting to the impact of AI in the labor market, which jobs are going to be most affected by the widespread use of these new technologies? Who will be the winners and losers?

GW: Immediately, the most affected occupations are going to be the ones that are overwhelmingly about document analysis, communication, and coding. A recent OpenAI report mentioned translators, mathematicians, computer engineers. The interesting thing is that it’s a very different slice of the population than those that have been affected by previous waves of automation.

I think that white collar workers are really going to feel the challenges of what’s happening. Technical skills that have been so prized are very likely to become some of the things best achieved by these machines. On the other hand, my guess is that a lot of softer intercultural creative skills are going to be more robust. People with creative and culturally sophisticated perspectives, who are very comfortable fully harnessing these tools to amplify their capabilities, will do better.

I think it’s going to be a very different moment in terms of skills than the moment that we’ve been going through since the advent of the personal computer in the late 1970s, early 1980s. And I think that it is going to be quite interesting. At the same time, at least on the current trajectory, it’s quite likely that it’s going to increase the capital share of income and reduce the labor share of income. Hopefully we can avoid that outcome.

CAA: Does it mean that education systems must be adjusted to this new reality?

GW: I think we need to have a much more sophisticated system of educational credentialing. Right now, everything is based on the Carnegie unit. This is where you spend 120 hours with an instructor, and you get a letter grade. We use this system because it is too hard to keep track of and evaluate people’s credentials based on something that’s actually true to educational psychology. AI is going to change this. It allows us to have a much finer grained understanding of what someone has achieved, what their backgrounds are, what they bring to an occupation.

We are going to need to adapt educational systems to give people the ability to acquire these micro credentials and to understand what packages of those things are likely to make them a useful contributor in this very dynamic environment.

So, how do we keep the focus on access and outcomes in this situation? I mean, suddenly you have to reshape education, and it’s a long process. In the meantime, there’s this gap period. How do we keep people moving ahead, improving their livelihoods and staying in the job market?

Well, many of the skills that are going to be most valuable are going to be the unique cultural intersection that people have, which is not going to come only from their workplace. It’s going to come from their passions. It’s going to come from the way that they’re connecting with different people from different places.

So many of these unique things that people can add are going to come from their social lives and from their organic connections and interests. I don’t think it all has to be driven by a top-down mandate. And I don’t think either that the adoption of these technologies is going to be so incredibly rapid that we are going to see massive job displacement in the very near term. It will happen over a period of time and will happen primarily to people who have some capital built up and a lot of general education behind them.

In that sense, while of course there will be many disruptions and there are many things to do, I don’t think that we are going to have a near-term crisis of desperate, unemployed, lower-income people. A lot of the more service-oriented and physical-presence-driven roles are going to be only minimally affected by what is about to happen.


CEO Summit Journal is a hub of news + views on business, trade and politics. Currently covering the APEC CEO Summit USA 2023 (Nov. 14-16 in San Francisco).