Three Misconceptions about A.I. and Consciousness
How pre-modern ideas inform our thinking about intelligent machines
Recently, one of the co-founders of OpenAI claimed that “predict[ing] the next token” in a string of text “means that you understand the underlying reality that led to the creation of that token.”
This is false. Even for human beings, the ability to converse intelligently about an underlying reality doesn’t imply that we understand it, as the whole history of disproved scientific theories shows. But it’s the kind of reductive hot take that’s come to characterize so much of Silicon Valley’s thinking about A.I. As Upton Sinclair once wrote, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
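To make the point concrete, here’s a minimal sketch of next-token prediction, a toy bigram model built from raw co-occurrence counts (this is an illustration, not how any real large language model works). It predicts plausible next tokens purely from surface statistics, with no model whatsoever of the reality the text describes:

```python
from collections import Counter, defaultdict

# Toy corpus; any text would do.
corpus = "the sun rises in the east and the sun sets in the west".split()

# Count how often each token follows each other token (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent token seen after `token` in the corpus."""
    followers = bigrams.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # prints "sun" — the most common follower
```

The model “predicts the next token” tolerably well for this corpus, yet it plainly understands nothing about suns, directions, or the world. Whether scaling this kind of statistical prediction up yields understanding is exactly the question at issue.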
Fortunately, my salary doesn’t depend on my beliefs about A.I.’s capabilities. But I suppose it does, to some degree, depend on whether people understand and trust A.I. enough to use it, and to use it responsibly. So to that end, let’s take a look at three misconceptions that I think many people have about A.I., and ask whether its ability to solve the problems we’ve designed it for ultimately translates into something like what we mean by “consciousness.”