Pedro Domingos on The Origins of Human Knowledge

Farnam Street (Shane Parrish) · Published in The Startup · 10 min read · Jun 5, 2018

The quality and shape of human decision-making is taking a profound leap forward thanks to new partners: artificial intelligence and machine learning.

Many intelligent people view AI with alarm, but not Pedro Domingos, the University of Washington professor working at the cutting edge of machine learning. He wrote The Master Algorithm, which I swallowed whole and have been digesting ever since.

I was fortunate enough to have a long and fascinating conversation with him over dinner one night, one I hoped would never end. It led to this interview, in which we explore new sources of knowledge, why white-collar jobs are easier to automate than blue-collar jobs, centaur chess players, and much more.

The excerpts below are from my interview with him for The Knowledge Project, a podcast exploring the ideas, methods, and mental models that help you expand your mind, live deliberately, and master the best of what other people have already figured out.

THE ORIGINS OF HUMAN KNOWLEDGE

The knowledge that we human beings have, the knowledge that makes us so intelligent, comes from a number of different sources. The first one, which people often don’t realize, is just evolution. We actually have a lot of knowledge encoded in our DNA that makes us what we are. That is the result of a very long process of weeding out the things that don’t work and building on the things that do work.

Then, there’s knowledge that just comes from experience. That’s knowledge that you and I acquire by living in the world, and it’s encoded in our neurons. Then, equally important, there’s the kind of knowledge that only human beings have: the knowledge that comes from culture, from talking with other people, from reading books, and so on. These are the sources of knowledge in natural intelligence.

The thing that’s exciting today is that there’s actually a new source of knowledge on the planet, and that’s computers: computers discovering knowledge from data. I think this emergence of computers as a source of knowledge is going to be every bit as momentous as the previous three were. Also, notice that each of these sources of knowledge produces far greater quantities of knowledge, far faster, than all the previous ones.

For example, you learn a lot faster from experience than you do from evolution, and so on, and it’s going to be the same thing with computers. In the not-too-distant future, the vast majority of the knowledge on earth will be discovered by computers and stored in computers.

Computers will be both discovering it and applying it. In fact, both of those things will generally be done in collaboration with human beings. In some cases, the computers will be doing it all by themselves: for example, these days there are hedge funds that are completely run by machine learning algorithms. For the most part, a hedge fund will use machine learning as just one of its inputs, but there are some where the machine learning algorithms look at the data, make predictions, and make buy and sell decisions based on those predictions. There’s going to be the full spectrum.

THE (UN)CERTAINTY OF LEARNING FROM MACHINES

It’s certainly quite uncertain. Any knowledge that you induce from data is necessarily uncertain, because you never know if you generalized correctly or not. Sometimes, though, you can machine-learn knowledge that is actually quite certain. If you know well how the data was generated and you’ve seen enough data, you can say that, with very high probability, the knowledge that you’ve extracted is correct.
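
As a minimal sketch of what “enough data” buys you, a Hoeffding-style bound relates a learned rule’s error on held-out examples to its true error. This is my illustration of the idea, not something from the conversation, and the example numbers are invented.

```python
import math

def error_upper_bound(empirical_error: float, n: int, delta: float = 0.05) -> float:
    """Hoeffding-style bound: with probability at least 1 - delta, the true
    error rate of a learned rule is at most its error on n independent
    held-out examples plus this margin."""
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return empirical_error + margin

# Hypothetical example: a rule that errs on 2% of 10,000 held-out cases
print(error_upper_bound(0.02, 10_000))   # ~0.032 at 95% confidence
```

The more held-out data you have seen, the tighter the margin, which is the sense in which knowledge extracted from lots of data can be “quite certain.”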

Conversely, a lot of the knowledge that we have from evolution and from experience and from culture, we often tend to think of as much more certain than it really is. We have this great tendency, well studied by psychologists, to be overconfident in our knowledge. A lot of the things that we take for granted, it turns out, just ain’t so.

Evolution could have evolved into a local optimum when there’s actually a much better one a little further away. You might have learned something from your mom, who told you to do things this way, but it actually turns out that that’s wrong, or it’s outdated and there’s a better way to do it. There’s uncertainty on all sides of this, and it could be more or less depending on the problem.

…I think where machine learning has a big advantage over human intelligence is that it can take in vastly larger quantities of data. As a result, it can learn more, and it can also be more certain if that data is very consistent with a given piece of knowledge. Where it has a disadvantage is that machine learning today is very good at learning about one thing at a time. The thing that humans have is that they can bring to bear knowledge from all sorts of directions.

Think, for example, of the stock market. When people started using neural networks to do this in the ’80s, the algorithms just learned to predict the time series from the stock itself, and maybe other related time series, in a way that human beings couldn’t. But human beings could know that, “Oh, today a war began between Russia and Ukraine,” or that the Fed just said it’s going to raise interest rates, and try to factor that in, whereas the algorithms couldn’t.
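
As a rough sketch of that ’80s-style setup, predicting a series only from its own recent values, the toy example below trains a small neural network on a synthetic random-walk “price” series. The window size, network shape, and data are illustrative assumptions, not anything from the interview.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))    # synthetic stand-in for a price series

window = 5                                         # predict tomorrow from the last 5 days
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                # target: the next day's value

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                        # train on all but the last 50 days
print(model.score(X[-50:], y[-50:]))               # R^2 on the held-out tail
```

Notice that nothing in the inputs could represent a war or a rate announcement, which is exactly the blind spot Domingos is pointing at.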

Human beings can bring a lot of knowledge to bear that the algorithms don’t have. Having said that, what we see, even from the ’80s to now, is that the machine learning algorithms are starting to use a lot of these things.

For example, there are hedge funds that trade based on things like what’s being said on Twitter. If you pick up certain things on Twitter, then maybe this is a sign that something is going to happen or has happened or a recession has become more likely or whatever.

They can learn from things that wouldn’t occur to people. For example, I know there’s one company that uses real-time traffic data and satellite photos (I’m not kidding) of parking lots to see how many people are shopping at Walmart, let’s say, and other stores, to decide whether their business is getting better or worse. I think as time goes forward, machine learning will get better at using a broad spectrum of information. I think for a long time there will still be types of common-sense knowledge that people have, so I don’t think, for most things, the human element is going to become unnecessary very quickly. But maybe ultimately, it will.

WHITE-COLLAR AUTOMATONS

People often think that the easiest jobs to automate are the blue-collar ones, but our experience in AI is that it’s more like the opposite: it’s often white-collar jobs that are easier to automate, things like engineering, law, and medicine. We’ve already talked about medical diagnosis as an example. Something like construction work, by contrast, is very hard to automate, because that type of work takes advantage of abilities that evolution took 500 million years to develop. Those abilities seem easy because we take them for granted, but for things like being a doctor or an engineer or a lawyer, you have to go to college precisely because they do not come naturally to human beings. Machines don’t have that type of difficulty, so in some ways the jobs that are easy to automate are different from the ones people often think they are.

Machines are remarkably better than human doctors at doing all types of medical diagnosis, not just from x-rays but from symptoms. You have a patient, you have their symptoms: what is the diagnosis? Even very simple machine learning algorithms running on fairly small databases of patients, with maybe only hundreds of thousands of patients, typically do better than human doctors. Part of the reason is that algorithms are very consistent, whereas human beings are very inconsistent. They might be given the same patient in the morning or in the afternoon and give different diagnoses just because they’re in a better mood or they forgot something. Human beings are very noisy in that regard. If you are the patient, that’s not a good thing, so I think for things like these, machine learning is a very desirable thing to use.
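
Framed as machine learning, symptom-based diagnosis is just classification over a patient database. The sketch below is a deliberately tiny illustration of that framing; the symptoms, records, and diagnoses are invented, and a real system would need far more data and validation.

```python
from sklearn.tree import DecisionTreeClassifier

# Each record: [fever, cough, rash, fatigue] as 0/1 indicators (all hypothetical)
records = [
    [1, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1],
    [0, 0, 1, 0], [1, 0, 0, 1], [0, 1, 0, 0],
]
diagnoses = ["flu", "flu", "measles", "measles", "flu", "cold"]

model = DecisionTreeClassifier(random_state=0).fit(records, diagnoses)
print(model.predict([[1, 1, 0, 1]]))   # a new patient's symptoms -> ['flu']
```

Whatever the model, it gives the same answer for the same symptoms every time, which is the consistency point Domingos is making.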

In the particular case of medicine, it’s not used more already because, of course, the doctors are also the gatekeepers of the system, and they’re not very interested in replacing themselves, or the jobs that they like best, with machines. Eventually it is going to happen, and it is starting to happen, for example in situations where doctors are not available, so nurses can use it for patients who need constant monitoring, or in low-resource settings where people can’t afford doctors, and so on.

THE INTRICATE MESH OF MAN AND MACHINE

The best chess players in the world today are what are called centaurs in the community. They’re a team of a human and a computer; a human and a computer together can actually beat the computer alone. This is precisely because the human and the computer have complementary strengths and weaknesses. What I think is true of chess is also true of medical diagnosis, and of a lot of other things.

For example, there’s more, of course, to being a doctor than just doing diagnosis.

There’s interacting with the person, there’s reading how they’re feeling from how they interact with you; these are all things computers are not yet able to do today. Maybe they will be in the future, and certainly the boundary between what is best done by the machines and what is best done by humans will keep changing, but I think for the foreseeable future, in most jobs, it will be a combination of human and computer that works best.

I think over time we will see more and more things being done by machines, and as we get comfortable with it, we will have no problem handing control to machines. Airplanes are an example. Every commercial airliner is actually a drone: it’s flying itself, and, in fact, it would be safer if it were completely flown by a computer. Pilots tend to take the controls at landing and take-off, which are actually the more dangerous moments, and they make more errors than the computers do, but people feel comfortable having a pilot in the cockpit. We already have two people in the cockpit instead of three; then we’ll have one, and eventually we’ll have zero. I think there are a lot of decisions that we will gradually become more comfortable with.

It’s partly a matter of just psychologically adapting ourselves to the notion that the machines are making these calls, and trusting that they are making the right calls, that they would do what we would do if we were making the calls ourselves. I think at the end of the day there will be some things that we will always reserve the right to decide for ourselves, and those are the highest-level decisions. The decisions on how to accomplish our goals are another matter: I want to get from here to New York, and I made that decision, but as for how I get flown there, well, sure, I’m perfectly okay with the plane being flown by an algorithm, or with the car that drives me to the airport also being driven by an algorithm. Maybe I decided to go to New York because of something some computer advised me about, where it said, “Oh, there’s this great thing that you should do in New York. There’s going to be this festival that you should attend. There are these people that you need to meet.” That decision, even though it was partly a recommendation from the computer, I probably will always want to make myself. I’m not just going to go to New York because the computer told me to.

I think what we see today is already this very intricate mesh of what’s decided by humans and what’s decided by computers. Somebody wants to find a date; they may have a dating site to help them find one, but then they decide to go to dinner with that person, so that’s their decision. Then maybe they use Yelp to decide where to go to dinner, and then they drive the car to dinner, but it’s the GPS that’s telling them where to turn, although it’s still them driving. This is a very intricate mesh of the human and the machine, and I think it’s only going to get more intricate in the future. Ultimately, I think, most things will be done by machines, except the really key decisions that people will always want to retain, even though they make them with advice from the machines.

The key advantage of machines is that they can take an unlimited number of variables into account, very much unlike humans, who are much more limited. Our brains are very good at things like vision and motion, where we do take millions of variables into account, but for other problems we are very, very limited, and the machines aren’t. What’s going to happen is that the machines are going to be able to learn much more complex models of phenomena than human beings ever could, and this is good, because with those better models we can make better decisions: with a better model of the cell, we can cure cancer, and so on. Having said that, it’ll still be important for people to trust what the computers are saying, and if they don’t understand it, they won’t trust it.

I think what’s going to happen is that, partly, the learning algorithms are going to get better at explaining to people what they’re doing; some of them are already better at that than others, and there’s no reason why they can’t be. Something that you hear a lot today is, “Oh, learning algorithms are black boxes, we’re just going to have to learn to live with them.” Learning algorithms don’t have to be black boxes. There’s actually no reason why we shouldn’t be able to say to the Amazon recommender system, “Why do you recommend that book to me?” or “I just bought a watch, please don’t recommend more watches, because I don’t want to buy a watch now. That’s the last thing I want to buy.” You should be able to have this type of richer interaction with a learning algorithm.
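
As a sketch of what that richer interaction could look like (not Amazon’s actual system), the toy recommender below returns, for each suggestion, the past purchase that triggered it, and it can be told to stop suggesting a whole category. The items, categories, and similarity scores are all made up.

```python
# Hypothetical catalog: item -> category, and item-to-item similarity scores
item_category = {"watch A": "watch", "watch B": "watch",
                 "book on ML": "book", "book on stats": "book"}
similar_items = {
    "watch A": [("watch B", 0.9), ("book on ML", 0.1)],
    "book on ML": [("book on stats", 0.8), ("watch B", 0.2)],
}

def recommend(purchases, blocked_categories=()):
    """Return (item, because_of, score) triples so every pick is explainable."""
    picks = []
    for bought in purchases:
        for candidate, score in similar_items.get(bought, []):
            if candidate in purchases:
                continue
            if item_category[candidate] in blocked_categories:
                continue                # honour "I just bought a watch, stop"
            picks.append((candidate, bought, score))
    return sorted(picks, key=lambda p: -p[2])

# "Why that book?" -> because you bought "book on ML" (similarity 0.8),
# and no more watches, because that category was blocked by the user.
print(recommend({"watch A", "book on ML"}, blocked_categories={"watch"}))
```

The point is only that the reasons behind each recommendation can be surfaced and overridden, rather than hidden inside a black box.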

AND BY THE WAY: COULD A SELF-DRIVING CAR RACE THE INDY 500?

I think we could at this point, and it might actually win. In the past, the technology wasn’t ready, and once the technology is ready, the Indy 500 would have to let a self-driving car compete. I actually wouldn’t be surprised if that happened in the next few years.

Listen to the entire conversation.
