What Plato has taught me about artificial intelligence
This is part of a series on my quest to learn as much as possible about AI. To know why I’m doing this, check out my first post.
In grad school, I primarily studied Ancient Greek philosophy. Mostly Plato. My only published piece of academic research is on his Atlantis myth.
But now I work at a tech company and I’m writing about artificial intelligence — a subject I’m very interested in but know little about.
Still, knowing what I do about Plato — and what Plato wrote about his famous teacher, Socrates — turns out to be the one advantage I have going for me.
Let me explain.
AI’s knowledge problem
There’s plenty of hype surrounding the benefits of AI. A recent Stanford study aimed to quell fears about potential Skynets and Matrix-like dystopias and, instead, highlighted the many advancements AI will make in areas like transportation, healthcare, education, public safety, and security.
But even though evil machines are unlikely, that doesn’t mean there isn’t cause for concern as AI progresses.
It turns out that, for all of AI’s promised intelligence, a machine’s actual knowledge is a very real problem, and could continue to be one.
What’s the difference between intelligence and knowledge, you ask?
This is where Plato comes in.
According to Plato, it’s possible to be intelligent but have little to no knowledge about the world or, more importantly, yourself.
And there’s nothing wrong with this.
In fact, as I’ll talk about in a minute, Plato saw knowing that you are not knowledgeable as the marker of an extremely advanced intellect. This is why he devoted much of his early writings to his famous teacher, Socrates.
(Side note: Socrates did not produce any writings. So, much of what we understand about him comes to us from his student, Plato, who often cast Socrates as the main character in his dialogues. This is why I will be talking about both Plato and Socrates.)
For most people, though, intelligence is assumed to be proof that they are extremely knowledgeable. The problem is that lots of other assumptions about the world follow this major one.
And, what’s worse, when you start questioning these assumptions, you see that a lot of them have almost no basis in reality.
That’s why most of the people in Athens hated Socrates.
He liked to stop his fellow citizens in the street or at parties, and ask them a lot of annoying questions about the nature and meaning of certain ideas, like justice or goodness. In a relatively short period of time, Socrates’s questions revealed that these people — who thought they knew what these concepts meant — had never actually thought about them and, as such, didn’t really know anything about them. Their knowledge of these important concepts was filled with all kinds of unquestioned assumptions.
AI has a similar sort of problem, primarily because machines are taught by human beings, many of whom — like the ancient Athenians and most of us today (including yours truly) — have unquestioned assumptions about the world.
And here’s how these sorts of unquestioned assumptions can lead to unintended problems.
Motherboard reported that, a few months back, an AI called “Beauty.ai” — developed by researchers in Russia and Hong Kong (and backed by Microsoft and Nvidia) — ran an online beauty pageant where 600,000 men and women from around the world sent in selfies. The AI judged the entrants based on facial symmetry, wrinkles, and age. It then picked 44 winners that it deemed “most attractive.” Almost all of them were white.
ProPublica ran a similar story in May of last year. This time it was about bias in a risk assessment software that judges use to make sentencing decisions for criminal cases. African American defendants were 77% more likely to be identified as being a “higher risk” for future offenses than white defendants. This score was based on 137 questions given to the defendants. And, while none of the questions mentioned race, the end result was a racial bias in favor of white defendants.
These recent news stories point to a painfully obvious fact about AI. As Nathan Collins writes, “Just as we learn our biases from the world around us, AI will learn its biases from us.”
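To see how a system can end up biased even when the sensitive attribute is deliberately left out — as in the risk-assessment story above — here is a toy sketch with entirely made-up data. The “neighborhood” feature acts as a proxy for group membership, so a model that never sees the group still splits its predictions along group lines.

```python
# Toy illustration (hypothetical data): a model trained without any
# "group" feature can still reproduce group bias through a proxy.
from collections import defaultdict

# Each record: (neighborhood, group, labeled_high_risk). The labels
# reflect historically biased decisions, not actual risk.
training = (
    [("north", "A", True)] * 70 + [("north", "A", False)] * 30 +
    [("south", "B", True)] * 30 + [("south", "B", False)] * 70
)
# Group A mostly lives in "north", group B in "south" (the proxy).

# "Train": score each neighborhood by its historical high-risk rate.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [high, total]
for hood, _group, high in training:
    counts[hood][0] += int(high)
    counts[hood][1] += 1

def predict_high_risk(neighborhood):
    high, total = counts[neighborhood]
    return high / total > 0.5  # never looks at group membership

# The model never saw "group", yet its outputs split along it:
print(predict_high_risk("north"))  # True  -> flags group A's area
print(predict_high_risk("south"))  # False -> clears group B's area
```

The point is not the (deliberately crude) frequency-count “model,” but that removing the sensitive column from the questionnaire or dataset is not enough: any feature correlated with it can carry the bias through.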
Bloomberg reported this past June that the majority of AI researchers are men and highlighted two sets of statistics. The first was that only 17% of computer science graduates today are women. The second was that, at 2015’s premier AI conference, only 13.7% of the attendees were women. Margaret Mitchell, a Microsoft researcher, is quoted as saying that AI has a “sea of dudes” and it’s a problem. As the article notes: “If everyone teaching computers to act like humans are men, then the machines will have a view of the world that’s narrow by default and, through the curation of data sets, possibly biased.”
How can we avoid this problem going forward?
Well, again, this is where Plato comes back in.
Why Plato wrote that Socrates was the wisest man in Greece
Plato tells us that Socrates’s annoying questions reached their tipping point in 399 BCE when he was tried and then (spoiler alert) subsequently sentenced to death. Although the official charge against him was “corrupting the youth of Athens,” most everyone, including Plato, knew that it was because Socrates had made almost everyone he encountered feel like an idiot.
In the trial, Socrates decided against appeasing his critics, and instead took a different approach.
He declared himself the wisest man in all of Greece.
But this label, he told his fellow Athenians, was not self-applied. It was given to him by the Oracle at Delphi — who was a big deal in Ancient Greece.
Socrates said that, for a long time, he didn’t want to believe he was the wisest man in Greece. He couldn’t. Although he loved to question others about complex topics, Socrates constantly admitted that he, himself, knew nothing.
That’s when the light bulb (or the torch if you want to be historically accurate) went off in his head.
He was the wisest man in Greece because he knew that he knew nothing.
For Plato, Socrates’s acknowledgment of his own limited knowledge was game-changing. It was proof that his intellect was far beyond his contemporaries — primarily because he had what they didn’t have: Perspective.
But how does AI get to this same place?
It starts with the researchers themselves and how they access and distribute knowledge.
Openness and transparency as a way forward
In December 2015, Elon Musk made headlines with his decision to join other high-profile investors in backing OpenAI — a non-profit research initiative focused on conducting AI research in the spirit of open source. Musk’s huge investment was characterized by much of the news media as his effort to prevent Skynet.
But as Greg Brockman, OpenAI’s co-founder and CTO, points out, OpenAI is about much more than preventing villainous machines; it’s about creating an AI that benefits humanity.
And the way to do that is by doing it in the open for others to see and comment on.
In a way — and not to stretch the analogy too far — it’s AI researchers opening themselves up to a potential Socrates: some other researcher or group who can look at what they did and find the unquestioned biases that slipped through the cracks.
Judging from the recent wave of open source AI projects — from both big corporations and small start-ups — it seems that the majority of research projects out there are warming to this approach.
On my end, I still have a long way to go toward understanding what’s going on in these research projects. Still, Plato’s discussion of Socrates has helped me get there a little more easily.
While I don’t know what AI researchers know, I know some of what they don’t know.
And that helps. At least a little.
Note: Please drop your knowledge bombs on me in the comments below. Tell me what I’m getting right. And, more importantly, tell me what I’m getting wrong — but please also tell me why. Thank you.