On NYT Magazine on AI: Resist the Urge to be Impressed

Context

tl;dr

  • Yes, there is an urgent need to address the harm being done by so-called “AI” and to set up effective regulation and governance so that those who are impacted by this technology have power over how it is deployed.
  • But no, the harms aren’t going to come from autonomous “AI” that just hasn’t been taught appropriate values.
  • And no, the solution isn’t to try to build “AI” (or “AGI”) faster or “outside” the megacorps. (Scare quotes on “outside” there, because OpenAI isn’t really as independent as they claim — given both the source of their initial funding and their deal with Microsoft.)
  • What’s needed is not something out of science fiction — it’s regulation, empowerment of ordinary people and empowerment of workers.
  • Puff pieces that fawn over what Silicon Valley techbros have done, with amassed capital and computing power, are not helping us get any closer to solutions to problems created by the deployment of so-called “AI”. On the contrary, they make it harder by refocusing attention on strawman problems.
  • If you’d like to learn more about what is going on and what shape meaningful solutions could take, I recommend authors such as Safiya Noble, Meredith Broussard, Ruha Benjamin, Shoshana Zuboff, Abeba Birhane, Joy Buolamwini and her colleagues at the Algorithmic Justice League, and journalists such as Khari Johnson, Edward Ongweso Jr, and Karen Hao (see especially this piece on OpenAI).

On asking the right questions

  • Why are people so quick to be impressed by the output of large language models (LLMs)? (Nota bene: this is not a new observation; it goes back at least to the way people reacted to ELIZA — see Weizenbaum 1976.)
  • In what ways are corporations leveraging that credulousness on the part of users, investors, and regulators?
  • Where is this technology being deployed, and what are the potential consequences? Who is bearing the brunt? (We — Timnit Gebru, Angelina McMillan-Major, Meg Mitchell, our further co-authors and I — talked about various kinds of potential harm in the Stochastic Parrots 🦜 paper, but I would be very interested in journalistic work on actual deployments.)
  • What would effective regulation look like in this space? Who is working on that regulation?
  • How is OpenAI shaping the conversation around so-called “AI”, as developed by them or others?
  • How does OpenAI’s rhetoric around “artificial general intelligence” shape public and regulator understanding of claims of other companies, such as those who purport to “predict” recidivism risk, “recognize” emotion, or “diagnose” mental health conditions?
  • What are the relationships between OpenAI’s staff/board members/founders and other organizations?
  • What are the financial incentives at play, and whose interests do they represent (noting OpenAI’s “exclusive computing partnership” with Microsoft)?

Buying into the hype

Tweet from Ilya Sutskever on Feb 9, 2022 reading “it may be that today’s large neural networks are slightly conscious”

All that glitters is not gold

OpenAI hagiography

LLMs aren’t inevitable and the “wider web” isn’t representative

Not just an academic debate

On being placed into the “skeptics” box

Conclusion

A photo of a macaw, close up on its face in profile.
Source: https://www.maxpixel.net/Yellow-Red-Green-Hybrid-Macaw-Bird-Orange-Parrot-943228
  • Just because that text seems coherent doesn’t mean the model behind it has understood anything or is trustworthy
  • Just because that answer was correct doesn’t mean the next one will be
  • When a computer seems to “speak our language”, we’re actually the ones doing all of the work

Acknowledgments


Professor, Linguistics, University of Washington// Faculty Director, Professional MS Program in Computational Linguistics (CLMS) faculty.washington.edu/ebender
