Why do we always see new technology as a threat?

Enrique Dans
3 min read · Apr 13, 2023


IMAGE: A cog with the word “regulation” and a finger pointing at it (Gerd Altmann — Pixabay)

Once again, we are hearing calls for the regulation of a new technology, in this case large language models and generative algorithms: a response rooted in a very basic fear of change.

Draft European legislation and Italy’s decision to ban access to ChatGPT on privacy grounds are matched in the United States by a public consultation to shape possible regulation of such algorithms; now Beijing, in its usual “command-and-control” style, is forcing algorithm developers to submit security audits certifying that the content generated by their tools is accurate, does not use copyrighted material, is not discriminatory, and does not pose a security threat.

The main problem with this kind of regulation is, firstly, that it is usually carried out by politicians who have little idea of what they are regulating, and who are driven by narratives of social alarm based on exactly the same lack of knowledge. In the case of LLMs, we are talking about a discipline, machine learning, that has been progressing for decades at the pace of available data-processing technology, and that has simply, on this occasion, been able to multiply its scale to work with language models based on billions of parameters. The result is that everyone who has worked in machine learning so far is amazed at the impact of dumb algorithms that have no idea what they are saying.


Enrique Dans

Professor of Innovation at IE Business School and blogger (in English here and in Spanish at enriquedans.com)