Amy Webb: “Human DNA is in the DNA of our AI systems”
As AI increasingly finds its way into a variety of life and work environments, GEN decided to speak with Amy Webb, founder of the Future Today Institute and author of ‘The Big Nine — How the Tech Titans and Their Thinking Machines Could Warp Humanity’. Amy shares her views on where the development is heading, who the main players are, and what the misconceptions in the industry are.
American futurist Amy Webb, an expert in AI and in identifying future tech trends, shares her thoughts on why it is important to remember that humans are in charge of AI’s development and use: “We are literally entangled with it, because it is our data that are being used to train AI systems, to build future applications, and to make millions of decisions on our behalf, both small and significant.” In other words, we should not fear the robots, but rather be wary of the people in charge of them.
GEN: You just released a new book, ‘The Big Nine — How the Tech Titans and Their Thinking Machines Could Warp Humanity’ and you state that: ‘The Big Nine aren’t the villains in this story. In fact, they are our best hope for our future.’ Can you elaborate on what you mean by that?
Amy Webb: Fundamentally, I believe that AI is a positive force, one that will elevate the next generations of humankind and help us to achieve our most idealistic visions of the future. But I’m a pragmatist. We all know that even the best-intentioned people can inadvertently cause great harm. Within technology, and especially when it comes to AI, we must continually remember to plan for both intended use and unintended misuse. And at the moment, the big tech companies — not our governments — are the ones researching and building our futures. It’s their AI frameworks, their clouds, their devices, their services, and their products that we now all rely on. I don’t believe that the Big Nine are necessarily villains. I believe they are our best hope for the future.
If ‘The Big Nine’ — and therefore a small group of companies — control the future of AI, what are some of the potential negative scenarios you see? Historically, have we ever seen such a concentration of power over the future of our societies?
The real future of AI is hard to see without dedicating time and effort to learning more about what it is, what it isn’t, and how AI relates to human life. It’s dangerous to pre-assign utopian or apocalyptic narratives to what AI will become — that would assume that we have no say or agency in our futures.
Humanity is facing an existential crisis in a very literal sense, because no one is addressing a simple question that has been fundamental to AI since its very inception: What happens to society when we transfer power to a system built by a small group of people that is designed to make decisions for everyone? What happens when those decisions are biased toward market forces or an ambitious political party?
The answer is reflected in the future opportunities we have, the ways in which we are denied access, the social conventions within our societies, the rules by which our economies operate, and even the way we relate to other people.
Do you think that a so-called democratisation of AI is necessary and to what degree would a democratisation be possible given the monopolistic tendencies and competitive nature of the tech market where the majority of development is happening?
Yes, I do. It would be possible if we all made decisions to pursue a different developmental track, which means accepting short-term sacrifices in the name of longer-term gains.
In your book, you also talk about the U.S. and China moving in very different directions when it comes to AI development. Where are the differences and what are some of the future implications of this disparity for the industry?
In China, AI’s developmental track is tethered to the grand ambitions of government. Baidu, Tencent, and Alibaba may be publicly traded giants, but typical of all large Chinese companies, they must bend to the will of Beijing.
We must consider the developmental track of AI within the broader context of China’s grand plans for the future. AI is part of a series of national edicts and laws that aim to control all information generated within China and to monitor the data of its residents as well as the citizens of its various strategic partners. These policies and initiatives are brainchildren of President Xi Jinping’s inner circle, which for the past decade has been singularly focused on rebranding and rebuilding China into the world’s predominant global superpower. China is more authoritarian today than it has been under any leader since Chairman Mao Zedong, and advancing and leveraging AI are fundamental to the cause.
Xi’s endgame is abundantly clear: to create a new world order in which China is the de facto leader. And yet, during this time of Chinese diplomatic expansion, the United States turned its back on longstanding global alliances and agreements as President Trump erected a new bamboo curtain.
AI isn’t a trendy technology. We are literally entangled with it, because it is our data that are being used to train AI systems, to build future applications, and to make millions of decisions on our behalf, both small and significant. Therefore, we cannot treat AI as though it were just any other technology. We are asking machines to make decisions and choices for us, and we are expected to accept the outcome of those decisions and choices, whether they are in the realm of banking, medical care, transportation, national defense, or policing — the list could go on. That list includes the news media. I worry that news organisation leaders just aren’t prepared for what is unfolding.
Where do you see worrisome trends in regards to AI regulations and development when looking at tech giants as well as countries today?
At the moment, my biggest fear is a Balkanization of AI. I worry that different countries and even cities will pass their own local regulations governing AI, which will create countless problems without actually addressing privacy, transparency, and workforce concerns.
At the World Economic Forum’s 2019 conference in Davos, you argued that the exaggerated fears and optimism around the technology come from ignoring the fact that humans are in charge of its development and use. Why is it important to remember the human factor in AI technology?
When it comes to artificial intelligence, we have a lot of misplaced optimism and fear, and that’s the result of a serious misunderstanding fed by decades of exceptional storytelling in books, movies, and TV shows — like Skynet in the Terminator. It’s important to remember that we all have cognitive biases. We’ve been living with the idea of AI for so long that we are missing many of the most important developments happening right now. Human DNA is in the DNA of our AI systems.
As different countries and regions choose different approaches to implementing new technology like AI into daily society, how important is it to agree on best practices and regulations as a global community?
In the book, I propose a Global Alliance on Intelligence Augmentation, or GAIA. The international body would include AI researchers, sociologists, economists, game theorists, futurists, and political scientists from all member countries. GAIA members would reflect socioeconomic, gender, race, religious, political, and sexual diversity. They would agree to facilitate and cooperate on shared AI initiatives and policies, and over time exert enough influence and control that an apocalypse is prevented.
It may seem impossible to unite the governments of the world around a central cause given the political rancor and geopolitical uneasiness we’ve experienced in the past few years. But there is a precedent. In the aftermath of World War II, when tensions were still high, hundreds of delegates from all Allied nations gathered together in Bretton Woods, New Hampshire, to build the financial structures that enabled the global economy to move forward. That collaboration was human-centered — it resulted in a future where people and nations could rebuild and seek out prosperity.
GAIA nations should collaborate on frameworks, standards, and best practices for AI. While it is unlikely that China would join, an invitation should be extended to CCP leaders and to the BAT. I realise this is a big ask.
With AI becoming more prevalent across industries, how can newsrooms and publishers integrate artificial intelligence into the newsroom in a sustainable way? Are they condemned to be dependent on the Big Nine?
Well, newsrooms are already wholly dependent on the G-MAFIA. Newsroom leaders must redefine their unique value propositions in the age of AI. For example, once people are talking to machines rather than simply typing on them, how does that shift search? How does that impact the business model? Leaders must get used to confronting deep uncertainty and continuously finding actions to take.
Given your background in journalism, what are, in your opinion, the key technical skills to acquire for journalists, aiming to familiarise themselves with computational journalism techniques?
I’ve been getting this question a lot for the past 15 years. First it was in response to the internet, then mobile, then data. Now it’s AI. Key skills would include having some familiarity with data science, understanding core business principles, and most importantly being flexible and agile.
Besides the ever-decreasing attention span, have you noticed other changes in user behavior regarding media consumption during your research? Especially ones that newsrooms should be more aware of?
News organisations don’t control distribution, which means they must compete for attention across all areas of content and entertainment. News organisations are now competing with e-sports, YouTube stars, and the latest mobile games. Leaders must define “media” much more broadly and think exponentially about the future of attention.
Regarding this year’s Tech Trends Report, what would you say are the key trends to watch out for from a newsroom’s perspective?
There are lots: automated distribution, generative content, home automation, privacy and transparency, and even adjacently related areas like transportation and genomics.
Amy Webb is a keynote speaker at this year’s GEN Summit in Athens, Greece, from 13 to 15 June, sharing her insights on the latest tech and business trends for newsrooms.
Amy Webb is a professor of strategic foresight at the NYU Stern School of Business and the Founder of the Future Today Institute, a leading foresight and strategy firm. Now in its second decade, the Future Today Institute helps leaders and their organizations prepare for deep uncertainties and complex futures. She is the author of the books The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream and The Big Nine — How the Tech Titans and Their Thinking Machines Could Warp Humanity.