Who benefits from creating a panic about machine learning and the need for regulation?

Enrique Dans
3 min read · Jun 25, 2023


IMAGE: The head and chest of an evil-looking robot (Juan Agustín Correa Torrealba, Pixabay)

More and more of us see the narrative about the supposed dangers of machine learning and generative algorithms as a way of creating alarm among the public, and thus of pressuring governments to regulate in favor of Big Tech, which presents itself as the gatekeeper of these new technologies, with our best interests at heart.

Big Tech knows that politicians respond to the public’s fears: get people to implore regulators to protect them from self-aware machines taking over the world, a narrative that science fiction has explored for decades, and regulation that favors the incumbents will follow.

We find ourselves, once again, being warned about the dangers of technology, this time amplified by virality-hungry social networks, with the aim of pressuring politicians to protect us: not from those who made up this story, who alerted us to those dangers and are therefore presumed responsible, but from “others” who might use that technology against us.

What’s really going on here is that companies like OpenAI, having applied the classic Silicon Valley startup strategy of using strong initial leverage to change the scale of machine learning projects, now need to monetize their development by wooing, or wowing, the markets. For these companies and their investors, the danger is the emergence of open source


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)