AI Top-of-Mind for Nov 21

dave ginsburg · Published in AI.society · Nov 21, 2023

Pour your coffee and dive in….

I don’t need to rehash the ongoing intrigue at OpenAI, but the employee letter to the board is worth reading, as is an interview with Altman from a week ago on the NY Times ‘Hard Fork’ podcast.

The letter, with the 702 employee names removed:

Source: OpenAI

And from the podcast:

Kevin Roose: Yeah. So it stands for artificial general intelligence. And you could probably ask a hundred different A.I. researchers, and they would give you a hundred different definitions. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have five levels, ranging from zero, which is no A.I. at all, all the way up to Level 5, which is superhuman. And they suggest that currently ChatGPT, Bard, LLaMA are all at Level 1, which is sort of equal to, or slightly better than, an unskilled human. Would you agree with that?

Sam Altman: I think the thing that matters is the curve and the rate of progress. And there’s not going to be some milestone that we all agree, like, OK, we’ve passed it and now it’s called AGI. I think most of the world just cares whether this thing is useful to them or not. And we currently have systems that are somewhat useful, clearly. And whether we want to say it’s a Level 1 or 2, I don’t know.

Changing channels, IDC has posted a very recent on-demand webinar on AI and automation predictions for 2024. One screenshot is worth digesting:

Source: IDC

And moving to marketing, more on how the technology is quickly working its way into many aspects of demand generation. A recent study published in ‘MediaPost’ looks at uses, omnichannel approaches, challenges, and ethics.

Source: MediaPost

Finally, on policy, an analysis in ‘Digiday’ of the recently proposed ‘Artificial Intelligence Research, Innovation, and Accountability Act of 2023.’ More on this act later.

“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” Klobuchar said in a statement. “This bipartisan legislation is one important step of many necessary towards addressing potential harms.”

None too soon, given the ongoing identification of flaws in many AI/ML tools, as reported by ‘SecurityWeek.’
