I’m Binge Watching Interviews with Sam Altman…

…so you don’t have to… and here are my takeaways. (Article 8)

Drew Wolin
6 min read · Mar 30, 2023

This is a continuation from a series of articles found HERE.

2023: Lex Fridman Interviews Sam Altman (60:00 to 90:00)

Context:

The interview covered in this article was just released a few days ago. And it's a doozy: over two hours. So I broke it up into 30-minute chunks, with one article per chunk.

As always, I’ll pull out interesting tidbits below, adding in my personal commentary or relevant context sparingly.

Unless otherwise stated, all quotes belong to Sam Altman.

60:00 to 90:00

Editor’s Note: We are now one hour into the interview, and Sam starts kind of interviewing Lex for a period of time. This is funny.

Sam thinks that a slow takeoff with short timelines is the best way to develop AGI. Meaning: start the takeoff now, but go slowly.

Sam says that he is afraid of a fast takeoff for A.I.

“I think that GPT-4, while quite impressive, is not an AGI… I think we’re getting into the phase where specific definitions of AGI really matter. Or we just say ‘I know it when I see it.’ But under that pretense, GPT-4 doesn’t feel that close to an AGI to me.”

“If I was reading a sci-fi book, and in that book GPT-4 was the AGI, I’d be like ‘Oh, this is a shitty book.’”

“I think it’s weird when people think it’s a big dunk when they hear me say that I’m a little bit afraid (of A.I.).”

Basically, Sam thinks it’s unreasonable not to be afraid of A.I.

“I think there’s going to be many AGIs in the world, so we don’t have to outcompete them necessarily.”

Sam thinks that having a variety of AGIs available in the world, optimized for different things, is good.

Sam says that in 2015, when they first announced OpenAI and its mission to work on AGI, people thought they were “batshit insane.”

Sam says there was significant pettiness in the field toward them.

Editor’s Note: Sam is taking his own advice here. In previous interviews, Sam talked about the importance of being OK with being wrong for years, until you are eventually proven right. He seems to be saying that OpenAI was started amid significant pushback and lots of naysayers. But now he is being proven right (so far).

Sam talks about why OpenAI went from nonprofit to for-profit (but capped-profit, which is a unique setup).

“We started as a non-profit. We learned early on that we were going to need far more capital than we would be able to raise as a non-profit. Our non-profit is still fully in charge (of our direction and decisions). There is a subsidiary capped profit so that our investors and employees can earn a certain fixed return. Beyond that, everything flows to the non-profit. The non-profit is in voting control.”

“To do what we needed to go do, we had tried and failed enough to raise the money as a non-profit. We didn’t see a path forward there.”

“We needed some of the benefits of capitalism, but not too much.”

Sam remembers someone advising that as a non-profit, not enough would happen. But as a fully for-profit entity, too much could happen. “So we needed this strange intermediary.”

“We can’t control what other people are going to do… We can try to influence others… But (other big tech companies) are going to do what they’re going to do (in relation to A.I.)”

Sam says that nobody wants to destroy the world.

Sam asks Lex: “Do you think we should open-source GPT-4?”

The conversation that follows is about how OpenAI has good people, and so it’s possible to trust the people at OpenAI to develop an A.I. that is closed-source.

Sam says that he already gets personal threats “all the time” for putting A.I. out into the world.

“I really would love any feedback on how to do better… Talking to smart people is how we get better.”

“My Twitter is unreadable.”

Sam says that he and Elon Musk agree on the potential downside of A.I. and the need to get safety right. Sam says they both want people to be better off after A.I. is built than before it was built.

Sam says that Elon is understandably really stressed about A.I. safety, which is part of what is leading him to attack OpenAI on Twitter.

“I saw this video of Elon a long time ago, talking about SpaceX. And that a lot of early pioneers in space were really bashing SpaceX, and maybe Elon too. And he was visibly very hurt by that. And he said that those guys are heroes of his, and this sucks, and I wish they would see how hard we are trying. I definitely grew up with Elon as a hero of mine. And despite him being a jerk on Twitter, I’m happy he exists in the world.”

Editor’s Note: Wow, what a human moment!

For reference, here is the video Sam is talking about:

Sam gives Elon his flowers for getting us to space faster than if he didn’t exist. “As a citizen of the world, I am very appreciative for that.”

“Being a jerk on Twitter aside, in many instances, Elon is a very funny and warm guy.”

Sam says that “hitting back” is not his normal style.

Lex asks Sam if GPT is too woke.

Sam says that he doesn’t know what woke means.

Sam says that there will never be a version of GPT that the whole world feels is unbiased.

“I appreciate critics who display intellectual honesty.”

“We will try to get the default version (of GPT) to be as neutral as possible. But as neutral as possible is still not that neutral.”

Sam again mentions that steerability and the ability to customize the model to your own preferences are the future of AGI.

Lex asks if the biases of OpenAI’s employees can impact the system.

Sam answers decisively: “100%.”

Sam says that when building AGI, OpenAI is mindful of avoiding the “SF groupthink bubble” type of thinking.

“The bias I’m most nervous about is the bias of the human raters.”

Sam says that OpenAI is trying to figure out how to choose the human raters who help train the A.I. Sam says that OpenAI is already great at the pre-training machinery.

“You clearly don’t want all American university students giving you your labels.”

My Biggest Takeaway from this Interview

The beginning of this segment wasn’t that interesting. They mostly debated what consciousness is, and what it takes to call A.I. “conscious.”

Meh.

But what came next, on OpenAI’s profit structure, the commentary on Elon “being a jerk” on Twitter (Sam says it twice!), and so on, was great. Meaningful.

This quote about their business model was great:

“We started as a non-profit. We learned early on that we were going to need far more capital than we would be able to raise as a non-profit. Our non-profit is still fully in charge (of our direction and decisions). There is a subsidiary capped profit so that our investors and employees can earn a certain fixed return. Beyond that, everything flows to the non-profit. The non-profit is in voting control.”

CLIP FROM THE ELON INTERVIEW SAM REFERENCED

More to come!

Articles in Series:

  1. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-29a1f9f07ee1
  2. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-559bea849356
  3. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-44638f1e4eff
  4. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-e1d8ac81ca43
  5. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-588981e6eb2b
  6. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-5ebfe3c79f7e
  7. https://medium.com/@dwolin/im-binge-watching-interviews-with-sam-altman-710bc1447a7c
  8. ← You are here

