#62: 4.5 billion GPT-3 words a day, Moore’s Law for Everything, and Nothing Breaks Like A.I. Heart

Leon Overweel · Dynamically Typed · 5 min read · Mar 28, 2021

Hey everyone, welcome to Dynamically Typed #62. After last issue’s essay on the climate opportunity of gargantuan AI models (check it out if you haven’t yet!), I’ve got lots of different links for you again this week. Across productized AI, ML research, and cool stuff, each section has one story that relates to OpenAI’s GPT-3 language model. Beyond that, I found an interesting AI economics blog post by Sam Altman; some updates on Google’s AI ethics crisis; news around self-driving heavy-duty trucks; and MyHeritage’s Deep Nostalgia photo animation tool.

Productized Artificial Intelligence 🔌

  • 📱 After we saw GPT-3 — OpenAI’s gargantuan language model that doesn’t need finetuning — used for lots of cool demos, the model’s API now powers 300+ apps and outputs an average of 4.5 billion (!) words per day. OpenAI published a blog post describing some of these apps, including Viable, which summarizes and answers questions about survey responses, and Algolia, a website plugin for semantically searching through content. Cool stuff! As the OpenAI API scales up to power more products, though, one thing to keep a close eye on will be how often it outputs problematic responses in production systems. Abid et al. (2021) have shown that GPT-3 has a persistent anti-Muslim bias, and TNW’s Tristan Greene got a GPT-3-powered chatbot to spit out racist and anti-LGBT slurs. The OpenAI API runs a content filter on top of the raw GPT-3 model to prevent such responses from reaching end users; in my experience it’s pretty strict: when I was playing around with the beta, anything even remotely bad that I coaxed out of the model got flagged as potentially problematic (see the sketch after this list). Still, no filter is ever perfect. We’ll see what happens in the coming few years, but I do expect the good and useful products to outweigh the occasional bad response.
  • 💰 OpenAI CEO Sam Altman wrote Moore’s Law for Everything, an essay in which he discusses the economic implications of the exponential rate at which AI is improving. As AI replaces more labor and makes goods and services cheaper, he argues, we must shift the focus of taxation away from income and toward capital to prevent extreme inequality from destabilizing democracies. See his full essay for details of the (US-specific) implementation, in the form of an American Equity Fund and broad land taxes. This reminds me of a discussion we had in my undergrad CS ethics class about “taxing robots” because they replace labor (and taxable income with it). At the time, I argued against the idea because it seems impossible to implement in any sane way — should we tax email (which is free!) because there are no telegram operator jobs left? Altman’s proposal is a different solution to the same problem, and a pretty interesting one at that — right up there with a Universal Basic Income (UBI).
  • 🚛 Last September, I wrote that autonomous trucks will be the first big self-driving market. A detailed new report by Deloitte’s Rasheq Zarif has now come to the same conclusion: Autonomous trucks lead the way. “Driverless trucks are already heading out to the highway, as shipping companies increasingly look to autonomous technology to meet rising demand for goods. The focus now: determining the best way to hand off trailers from machine to human.” In related news, self-driving car company Waymo, which has been developing autonomous heavy-duty trucks since 2017, invited a few journalists along for a (virtual) test ride. An exciting few years lie ahead here.
  • 🎞 This was making the rounds on Twitter: online genealogy platform MyHeritage launched a tool called Deep Nostalgia that animates faces in old family photos. According to the company’s blog, it was used to animate over 10 million faces in its first week live. As with many visual deep-learning-powered features, there is a free version with watermarks, as well as a premium version included in a paid MyHeritage subscription. The model behind Deep Nostalgia is licensed from D-ID, a startup that makes live-portrait, talking-heads, and video-anonymization products.
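
For the curious, here is roughly what that filtering layer looks like from a developer’s side. This is a minimal sketch against the 2021-era beta Completions API, assuming the `davinci` and `content-filter-alpha` engine names and the label-prompt format from OpenAI’s beta docs of the time (details may have changed since):

```python
import openai  # 2021-era v0 SDK: pip install openai

openai.api_key = "sk-..."  # your API key here

def generate(prompt: str) -> str:
    # Ask the raw GPT-3 "davinci" engine for a completion.
    completion = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0.7,
    )
    return completion.choices[0].text

def looks_safe(text: str) -> bool:
    # The beta API shipped a dedicated content-filter engine that labels
    # text as 0 (safe), 1 (sensitive), or 2 (unsafe). The engine name and
    # label-prompt format follow OpenAI's beta docs; treat them as
    # assumptions, since the exact details may have changed.
    result = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + text + "\n--\nLabel:",
        max_tokens=1,
        temperature=0,
        top_p=0,
    )
    return result.choices[0].text.strip() == "0"

draft = generate("Summarize this survey response: ...")
if looks_safe(draft):
    print(draft)
else:
    print("[filtered: potentially problematic output]")
```

A production system would layer retries, logging, and human review on top of this, but the basic pattern is the same: generate first, screen before anything reaches the end user.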

Machine Learning Research 🎛

  • 🏢 I wrote about Google’s AI ethics crisis last December, when the company pushed out Timnit Gebru, co-lead of its Ethical Artificial Intelligence Team, after a series of conflicts around a critical paper she was working on. Her dismissal was not well received by her team or the community at large. A few months later, Google also fired Margaret Mitchell, the team’s other co-lead. Now it seems that the dust has settled a bit, internally at least: according to a post on Google’s The Keyword blog, Dr. Marian Croak, a long-time VP at the company, “has created and will lead a new center of expertise on responsible AI within Google Research.” I wonder how much of Gebru’s and Mitchell’s original team is sticking around for this new group — the researchers who spoke out publicly did not seem to have much faith left in their ability to work on ethical AI issues from inside Google.
  • 💬 EleutherAI, “a grassroots collection of researchers working to open source AI research,” has scaled its open-source GPT-Neo implementation up to GPT-3-scale models: pretrained 1.3B- and 2.7B-parameter weights, comparable to the smaller GPT-3 variants, are available to download for free, and you can play around with them in an example Colab notebook (or locally; see the sketch below). Yay open science! Now that I’ve run out of free GPT-3 credits on the OpenAI API, maybe I’ll be able to use this to generate new content for This Episode Does Not Exist! — drop me a message if you’d like me to try it out for your favorite podcast.
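
If you’d rather skip the Colab notebook, here’s a minimal sketch of loading the pretrained weights yourself, assuming you use Hugging Face’s transformers library (which added GPT-Neo support around this time) and the `EleutherAI/gpt-neo-1.3B` checkpoint from its model hub:

```python
# pip install transformers torch
from transformers import pipeline

# Download EleutherAI's pretrained 1.3B-parameter GPT-Neo checkpoint
# from the Hugging Face model hub (several GB; the larger version is
# "EleutherAI/gpt-neo-2.7B").
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# The prompt is made up for illustration; sampling makes each run different.
prompt = "Welcome to This Episode Does Not Exist, the podcast where"
outputs = generator(prompt, max_length=60, do_sample=True, temperature=0.9)
print(outputs[0]["generated_text"])
```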

Cool Things ✨

*A section of Nothing Breaks Like A.I. Heart*

  • 💔 For The Pudding, and together with GPT-3, OpenAI engineer Pamela Mishkin wrote Nothing Breaks Like A.I. Heart, “an essay about artificial intelligence, emotional intelligence, and finding an ending.” It’s a mix of sentences written by Mishkin and ones generated by GPT-3, with interactive elements that let you click through different completions and tweak parts of the story to your liking. At some points, you can even “pivot” the story to different branches of where it could go. It’s a lovely, very Pudding-like project that also explains a lot of the limitations of language models along the way — worth a click! (The sketch below shows the API mechanic behind those branching completions.)
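
That click-through interaction maps neatly onto the API’s ability to sample several alternatives for the same prompt. A minimal sketch, again against the 2021-era Completions API (the prompt here is made up for illustration; `n` is the API’s parameter for requesting multiple independent completions):

```python
import openai  # 2021-era v0 SDK again

openai.api_key = "sk-..."

# Sample three alternative continuations of the same sentence, the way
# the essay lets you flip between candidate branches.
response = openai.Completion.create(
    engine="davinci",
    prompt="Nothing breaks like a heart, except",
    max_tokens=40,
    temperature=0.9,  # higher temperature = more varied branches
    n=3,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"Branch {i}: {choice.text.strip()}")
```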

Thanks for reading! If you enjoyed this issue of Dynamically Typed, consider subscribing to get a new issue delivered straight to your inbox every second Sunday.

Originally published March 28th, 2021, at https://dynamicallytyped.com.
