A Conversation with Eric Schmidt and Jared Cohen

By Philip Larrey

Big Data
(Eric Schmidt and Jared Cohen)

Upon completing Futuro ignoto (Unknown Future), my first book of interviews dealing with the impact of the new digital age, I never would have imagined that I would be sitting down with two key people of our time and having a conversation about the digital world. Eric Schmidt is Executive Chairman of Alphabet, Google’s parent company, and Jared Cohen is the head of Jigsaw, previously known as Google Ideas. Together they wrote the book The New Digital Age: Reshaping the Future of People, Nations and Business, published in 2013. Our conversation, which follows, took place in Rome before their meeting with Pope Francis in January 2016 in the Vatican.

Reading their book was what first convinced me to begin interviewing important people in our society in order to elicit their responses concerning the development of digital technology. The two of them are not only expert analysts of the digital age, but in a very real sense are its architects, too. What I learnt from them was both consoling and, at times, unsettling. I leave it up to the reader to find out the difference.


***

Jared: Let’s start with ‘What has changed since we wrote the book?’

Okay.

Jared: I think if you just go to the Introduction and you look at some of the numbers, you can see what has changed. The book was written speculating about what happens when billions of new people come online, and the reality is that this has happened faster than we expected. In this sense, we are seeing some of the hypotheses tested, and some of them taking place. A lot of the things that we talked about in the book have arrived sooner than we anticipated. Some shocking things that we wrote about were, for example, WikiLeaks (Snowden hadn’t even happened yet when we published the book), and the Arab Spring, which was still very much an open question. ISIS had not yet risen; it has since proven that a terrorist group can occupy both physical and digital territory.

I think we didn’t go far enough in the book in talking about some of the ways that technology transforms identity. If you ask Eric and me today about the future of identity, I think we would say that everybody will have multiple personalities because people are basically proliferating identities of themselves, walking around with a virtual entourage. You have your work identity, and you have your family identity; maybe you have your sort of ‘Let’s behave well’ identity. Who you are is an aggregate of all those different personalities. This creates an interesting way to think about identity because we are all increasingly splitting our time between the physical domain and the digital domain. Everything from how healthy we are, to how we think, to who we are and so forth, is an aggregate of all of that.

Eric: Let me answer the same question by saying that our work was published right before Snowden and right before ISIS, and both were significantly worse than we had anticipated. In Snowden’s case, the amount of leaks and surveillance, the tremendous controversy within the United States over National Security Agency behaviour, was far, far greater than we had expected. Of course, we didn’t know the details. America today is still suffering through the consequences of the activities, the leaks, the debates and so forth. That was followed by the rise of this horrific group ISIS, which we argue is the first truly digital terrorist organization. It is inconceivable to any of us (and I’m sure also to our readers) that you would use Twitter, Facebook or YouTube for these videos that are designed to terrify. We have subsequently learnt that these are the recruiting videos for other evil people. In our book, we did not understand that online media could be used to terrorize people and to recruit them to an evil organization. There is no question that the internet has aided the creation of this caliphate and evil idea by taking people who are marginalized or unhappy in other countries and giving them a way to be reached and recruited: something we never anticipated. Both of those — Snowden on the privacy question and ISIS now on the government question — are driving much of the political discussion in the United States and in Europe. You can understand the refugee crisis which is going on today, this humanitarian crisis, as a natural consequence that started with the profound use by ISIS of terror.

Jared: To add a couple of things to what Eric said, let’s talk about the States for a minute. To continue down the list of things that happened after we wrote the book . . . One of the things we talked about in the book was that cyber attacks would be so significant as to warrant physical retaliations. Look at how the North Korean hack against Sony resulted in increased sanctions on North Korea. To my knowledge, that’s the first time in history that we’ve seen additional sanctions imposed upon a country directly in response to something that happened in the cyber domain. A second example is Russia’s annexation of Crimea, which illustrates a point that we speculate about in the book, namely that ‘We will never see a physical war that’s not also accompanied by a cyber war’. If you look, for instance, at the distributed denial of service [DDoS] attacks that were happening around the world during that particular time, a disproportionate number of them were targeted at both Ukraine and Russia. You literally see a cyber war between the two countries and a physical war between the two countries, happening in tandem.

In the book, you mentioned the fact that the future depends on what we do with machines. I think it’s in the Introduction, you say, ‘Forget sci-fi movies and scaring people; what happens in the future will depend on us.’ Would you still reiterate that?

Eric: I think we would, and the reason is that you could imagine the technology architecture being changed to mitigate or encourage the kind of behaviours that we have been discussing. For example, you could make it much harder for the government to spy on its citizens. You could also make it much easier for the government to spy on its citizens. You could, for example, require by law that the companies give the government the keys to encryption. There are people who are proposing that. All of these change this calculation. With respect to ISIS, you can imagine artificial intelligence systems that automatically detect hate speech and take it down before anyone sees it. Let me offer an extreme case. Let’s say there is a voice, an Islamist preacher who preaches death and destruction to the entire world in a way that is so nihilist that his voice needs to be stopped. You could imagine technology emerging, or being built, that would detect it in its many forms and literally delete it. I’m not encouraging that; what I’m saying is that it is possible technologically. If you believe that ISIS is the source of evil and instability in the world (and many people do), that’s an example of something you could build.

Okay. An interesting comment that my students almost always make to me is ‘Professor, why should we have to learn this when we can just google it?’ Do you see technology changing the way we teach and the way we educate, the way students interact with digital technology?

Eric: In a decade — not now but in a decade — there will probably be a tool that you will have in your classroom that will adapt the teaching to the specific strengths and weaknesses of the students. The way it determines this is that it knows the students, and it knows that some students don’t type very well; they may however read very well; they may like to read literature, but don’t like poetry. The tool says, ‘Okay, Father Larrey, so what are you trying to do?’ And you say, ‘This is what I am trying to accomplish.’ It then says, ‘For these students, poetry works, and you need to spend thirty minutes on poetry; and those students hate poetry, but you can get them with a narrative from a movie.’ And then you’ll say: ‘Okay, class, I’m going to talk a little bit, and I’m going to work with the poets over here, and then I’m going to work with the movie people over here.’ The computer will help you understand the unique learning paradigm of each student. The reason we know this is possible is that we know that we can pattern‐match against people’s abilities. Everybody is different: they learn in different ways, so we can train against that.

Jared: The other thing I would add to this, which is not so much a technological answer, is the following. You asked the question, ‘What’s new in the classroom?’ What is new is that students will have more options than at any other time in history. On the one hand, take the comment your students made about Google. As a professor, you might ask yourself, ‘Does having access to Google remove the influence of the classroom? Should I just send students off on a journey of searching for things online?’ If we accept that most of the world’s population still learns through rote memorization, and if we agree (as I would imagine that we do) that this is a problem, then the question from the student goes more like, ‘Why should I sit in the classroom and just learn something that I am being told to memorize when I can go home and find it? Here I finally have an opportunity in my life to engage in critical thinking.’

Absolutely.

Jared: It depends on which student you are asking and in what context. This example captures the essence of technology. There is a good story and a more challenging story with any type of technology that we are talking about.

Many of my colleagues dislike Google because of that. They think that it leads students to be lazy, that kids don’t want to learn anymore. I obviously don’t share that opinion, but I’m sure you have heard it before.

Jared: When you say ‘your colleagues’, I assume you mean other professors?

Yes, other professors. Sorry, Jared, if I can interrupt, there was a discussion several months ago over whether or not to have Wi-Fi in the classrooms. Many professors said no, because they don’t want students using Wi-Fi during class lessons. I said yes, and if students are updating their Facebook profile, then that’s the fault of the professor.

Jared: I think your colleagues who voted against the use of Wi‐Fi are missing the long‐range benefits of the technology. Look at one of the huge benefits that the internet brings: we have more visibility into what’s happening in the world today than at any other time in history. It doesn’t mean we are any better at responding to crises, but there is literally not an atrocity that can happen on earth that the world doesn’t see. I would argue that, long term, this creates a demand for us to act. That’s how I would answer your other colleagues.

Eric: Let’s talk about the core question. There is a learning mode in which you sit and your attention is completely focused on a professor’s knowledge. I did very well at that when I was a student, but I’m not sure everyone does. People learn in different ways, and my observation when I go to dinner (which is social and also work) is that people are constantly spouting off facts, which I check using Google. People say something, and I say, ‘Let me check that’, and then ‘Well, that’s okay’, or ‘Frankly, that’s not true’. We were having a discussion about the supporters of Donald Trump at dinner two nights ago. In America, everyone has lots of opinions. Well, I happen to know the facts. I get out Google and I read how the demographics are distributed for Donald Trump. Then everyone says ‘Okay’, and the conversation goes on. Now, did they learn from that moment? I hope so.

I think so.

Eric: My view of Google is that it is a wonderful source of facts, which can lead humans to think about them: it is a teaching tool, so I disagree with the view that Google is a disservice. There are plenty of services which are time‐wasting, like Twitter or Facebook; these are largely time‐wasting in my view (my own opinion), because they are essentially social (what are other people doing, and so forth), whereas Google is a mechanism for learning and thinking. On the flight yesterday, I watched a television documentary on a famous case in America in which the defendant may or may not be guilty. My conclusion was that he was guilty, so I went to the internet using Google, read more of the story, and concluded that I was still right: the defendant was guilty. That’s learning, supplemented by Google.

[The interview was concluded here but resumed later with Eric alone.]

Can we talk about artificial intelligence? You’ve been thinking a lot about this lately. Can you speak to us about your thoughts concerning AI? What do you think is AI, are you happy with Google’s development in this field, and do you have some concerns about the future?

Eric: A couple of things. In the first place, artificial intelligence can be defined as computers doing things that humans appear to be capable of doing: human‐like activity in a computer. We use AI at Google to do many things: to provide better search results and better advertising, for speech translation, and for video and photo recognition, to name just a few examples. At the moment, it is pretty tactical. Think of it like a machine: a car is an object, a thing, used to transport people. Right now, our use of AI is similarly tactical: it makes something more capable, better and better. We’re not at the point where the real questions about artificial intelligence come into play, and won’t be for a while.

And those questions would have to do with, for example, the job market?

Yes, we’re not there yet; we’re not having a negative impact on jobs, although people are concerned about that. There is no question of asking whether these things have souls or whether they think independently. We’re not anywhere near those science‐fiction questions.

Okay, but that could come up down the line?

You never say never.

But you’re not particularly concerned about that now?

Not in the short term. Not in the next few years.

What is your understanding of what we call ‘machine learning’?

Generally, at the moment machine learning and AI are pretty much the same. In machine learning, instead of programming an outcome, you train to obtain an outcome. The simplest example would be this: I want to recognize a zebra. On the one hand, you could write code that says, ‘Look for an animal that has stripes of this kind’; on the other hand, you could show the system a lot of pictures and indicate, ‘This is a zebra, this is not a zebra, this other is a zebra, this other is not, nor is this.’ The latter is called ‘training’, the former is called ‘programming’. The systems that we are using today are ‘training systems’.
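Eric’s zebra example can be sketched in a few lines of code. The toy classifier below (the feature names and the simple perceptron update rule are illustrative assumptions, not anything drawn from Google’s actual systems) contrasts the two approaches he describes: a hand-programmed rule versus a rule learnt from labelled examples.

```python
# A toy contrast between 'programming' an outcome and 'training' one.
# All features and data here are invented for illustration.

# Programming: the rule defining a zebra is written by hand.
def is_zebra_programmed(animal):
    return animal["has_stripes"] and animal["has_hooves"]

# Training: the rule is never written down; a classifier infers it
# from labelled examples using a simple perceptron update.
def train(examples, passes=20, rate=0.5):
    """Learn per-feature weights from (features, label) pairs."""
    weights = {"has_stripes": 0.0, "has_hooves": 0.0}
    bias = 0.0
    for _ in range(passes):                 # several passes over the data
        for features, label in examples:
            score = bias + sum(weights[f] * features[f] for f in weights)
            predicted = 1.0 if score > 0 else -1.0
            error = (1.0 if label else -1.0) - predicted
            for f in weights:               # nudge weights toward the label
                weights[f] += rate * error * features[f]
            bias += rate * error
    return weights, bias

def is_zebra_trained(model, animal):
    weights, bias = model
    return bias + sum(weights[f] * animal[f] for f in weights) > 0

# 'This is a zebra, this is not a zebra, this other is not, nor is this.'
examples = [
    ({"has_stripes": 1, "has_hooves": 1}, True),   # zebra
    ({"has_stripes": 1, "has_hooves": 0}, False),  # tiger
    ({"has_stripes": 0, "has_hooves": 1}, False),  # horse
    ({"has_stripes": 0, "has_hooves": 0}, False),  # dog
]
model = train(examples)
```

Real training systems work over millions of images and millions of learned parameters rather than two hand-picked features, but the shift is the same: the programmer supplies labelled examples and a learning rule, not the definition of a zebra.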

That was a huge leap for digital technology!

That was a very big gain. That is why these systems are so good at what they do.

Yes. Are you happy with DeepMind?

Very happy. As you know, they defeated the Go world champion, Lee Sedol, four games out of five last March in Seoul with AlphaGo. They are making fundamental strides in better and better ways of implementing the algorithms.

That is an amazing team. I met them in London. Can you address the issue of the ‘right to be forgotten’, which Google has offered users in the European Community as a response to their obligation in terms of privacy? It appears that Google is being held as ‘judge and jury’: if you actively suppress links to compromising information, you might be accused of being like Orwell’s ‘Big Brother’; if you do not, you might be accused of not respecting privacy. There seems to be a dilemma.

We were forced to do this by the European Court of Justice. The court effectively made us both judge and jury over whether private information should be shown or not shown. As a company we have to implement what the court mandated. We don’t have a choice. We have never liked doing this, because it puts us in the position (as you pointed out in your question) of making these decisions, which we think should be made by governments. In other words, Google should not be arbiter of these questions because it is a private company. Nevertheless, we followed the law.

You may not enjoy doing it, but you seem to be doing it effectively, it seems to be working.

We are implementing it exactly as the court prescribed. So, yes, it is effective because we were ordered to do so. We did not have a choice.

Do you think such a ‘right to be forgotten’ is going to be extended to other parts of the world, or just remain in Europe?

That’s really a question of speculation, and my guess is that it will not be extended.

Okay.

I think each country solves this problem differently.

This strikes me as important, because people are beginning to realize the significance of being forgotten. This reminds me of something you said in an interview a couple of years ago: you mentioned that you were glad that you grew up in a time when there was no Google, because you could make mistakes back then without everybody finding out.

Privacy is important, if nothing else because we all make mistakes as human beings. For example, some mistakes I made as a teenager have (thankfully) been forgotten.

What about teenagers today that make mistakes?

I can imagine that a lot of those teenagers are going to be unhappy that their mistakes are now on YouTube and on Facebook and on Twitter twenty‐five years from now when they are running for public office.

Do you think that we will learn to be more forgiving of people that make mistakes?

I hope so, but I’m not sure. At least in the American press cycle, your entire life as a politician is up for inspection, right?

Oh, yes, certainly.

I really don’t know the answer to your question. It’s an important question, but I don’t know how people will react.

Yes, it’s speculative. My own view is that we are going to be more tolerant as it becomes more routine to know more about each other, and about our flaws also.

Yeah. As you know, human beings make mistakes. Any kind of a mistake you make can be used against you by people who are fighting against you using the press. We’ve seen that in the American political system, and I would assume that this happens in Europe as well.

Absolutely. It is already happening. I tell my students to be careful about what they post on Snapchat or Instagram. The twenty-year-olds that I have as students seem to be more cautious and aware of the long-term effects. I think that as technology matures, this is what happens.

Let me give you an example. A sixteen‐year‐old girl ends up really drunk at a party, and there is a video of her misbehaving, drunk, doing things which no young woman should be doing. That video gets posted. How is that fair to that girl? It’s not fair.

But it happens.

Happens all the time.

Are you just going to leave it at that, it’s not fair?

I am certain that it is not fair, but I don’t know how to fix it. But I do think it is not fair. I do have a thought on that, though. Especially when it comes to young people, there should really be some tolerance. Young people’s judgement is usually not as good as that of adults.

Do you think sometimes we blame the technology for things which are really caused by human beings?

Using the example of the sixteen‐year‐old, someone posted that video.

But it’s not the fault of the technology: it depends on the judgement of people.

Yes. It is obvious that humans did it, so on both sides. Therefore, you have to hold humans responsible for those judgements.

Can you talk to us about driverless cars? Google has been a pioneer in this, and now you are teaming up with Chrysler. Do you think it is a safe and mature technology?

Well, let me ask you a question. How many people die on the highways of the world per year?

I have no idea. A million?

The most realistic number I have seen is that about 1.3 million people worldwide are killed every year. Let’s say we could reduce that number by half: roughly 650,000 people would now be alive. I believe very strongly that computer‐driven cars, self‐driving cars, auto‐pilots, all of that technology is very useful for automobiles. We and other companies are trying to get that to happen within the next year or two. They are not going to be perfect, but there will certainly be fewer deaths.

Do you think that people are going to be okay with that, ‘they’re not perfect but there are fewer deaths’?

Again, we have tolerated 1.3 million people dying in car accidents, right? Just here in the United States (I don’t know what the number is in Italy), there are going to be 33,000 people that die on the roads this year.

That’s a lot of people.

That is a lot of people. How many US soldiers died in combat last year?

Far fewer.


Fifty, one hundred, some number like that. Self‐driving cars are a very big deal.

You mentioned ‘a year or two’, so you think that the technology is mature now?

The technology works most of the time. It’s not completely perfect, but it is very close.

You’re saying that we can save a lot of lives through this technology, which seems obvious. Are some people afraid of driverless cars?

I’m sure they will be, but when they understand that they are safer, they will get over their fears. For example, some people go to the bank, and they want to talk to a human and not use the ATM. That’s fine. There will always be cars which you can drive as well. But if you want to get the number of deaths down, you will want to use self‐driving cars.

Are you going to give these cars a kind of ethical system, or a set of rules?

They have a set of rules. In general, their job is to protect all the humans. That means the humans in front of them, humans in the car, etc. We have heard of these scenarios in which the car is faced with a pedestrian on one side, a dog on the other, a child in the middle, and how to make the right choices. But those scenarios don’t really happen.

Okay [laughs]. Because those are really interesting ethical quandaries.

When we get to the point where that is the hardest problem, we will deal with it. But right now, we’re just going to stop the car quickly.

Exactly. The more driverless cars are out there, the safer we will be. The problem is then going to be real drivers who are unsafe.

Yeah.

Where is Google headed in terms of healthcare? Can we expect some interesting breakthroughs in the near future?

We are now working not on drugs, but on devices. We are very interested in medical devices that can help monitor your health; we have built this contact lens that can measure glucose levels and a person’s blood state. We have partnerships of that nature in the group called Verily. It’s a long process. We say, ‘Let’s invent helpful tools but let others commercialize them.’ Our side is the Research and Development shop and then we use computer technology to make medicine more reliable. We have projects, for example, in cancer analysis, trying to use large data and DNA databases to help cancer patients. We have some doctors and a lot of programmers, as opposed to a lot of doctors and a few programmers, which is what everybody else has. Our niche will be that.

You conceive Google as more like an ‘incubator’?

For healthcare, yes, because we don’t have the ability to do large trials and we are not a big healthcare company.

Yeah, that requires a lot of money and time.

Maybe in the future but not in the short term.

People tend to look to technology as a type of ‘saviour’, don’t they? As the technology gets better, we live longer.

There is no question. When you are eighty years old and you have some type of cancer diagnosis, you really do want the drug which will stop the cancer. The way those drugs will be invented will be largely on the basis of DNA analysis, new algorithms, new studies and a lot of risks in the medical profession. In your case in thirty years from now, you will be happy that they did all that work.

Absolutely.

My grandfather had a heart attack at sixty‐five, and today in America heart attacks are relatively rare in the young age group. The mechanics of heart disease and by‐passes and so forth are well understood. All of that is due to the research that has been going on in the last twenty or thirty years.

We are going to be seeing some amazing things come out. You talked about the device for measuring glucose for diabetes patients: I have friends who are diabetic and they would love to be able to use something like that.

Diabetes is a really unpleasant disease to have, so can we help fix that? I hope so.

What about ‘big data’? Is it a friend or a foe? Can the information that companies like Google and Facebook have about us be potentially disruptive?

Big data is a reality. The computers which we use every day naturally collect that data. That raises certain questions: who is using the data, and what are they using it for? I argue that Google uses big data to provide valuable services, and if we were to violate your privacy, you would stop using us. More importantly, we would be sued by privacy activists. There is always this question of large corporations and the data that they assemble. Corporations have many reasons to keep that data secret. I am actually more worried about governments’ big data, because governmental systems tend to be poorly architected and easily broken into. In America, the Office of Personnel Management, which holds a lot of people’s files, was breached, and a number of IRS databases were leaked by hackers (who are criminals). I am sure the incentives for Google, Facebook, Yahoo, etc. are to never allow that to happen. They are not going to be perfect, but they work well.

Yesterday I was reminded that it’s against the law to steal someone’s Social Security Number, or to use it without the owner knowing. But in reality my cell phone number is more important than my SSN. Do you think that is accurate?

That’s very interesting because nowadays your cell phone number is becoming more important in terms of your identity. It is possible that your cell phone number will become your primary means of identification. In the American system, Social Security Numbers are not a legal ID in the sense that they were not issued as a national ID. They are used in the financial industry for banking and also for tax compliance. I don’t use my Social Security Number every day, except for financial transactions, but I use my mobile phone all the time, so I agree with your point about cell phones being more crucial.

Do you find that there is collaboration among the industries based on data sharing? Let me ask you specifically about insurance companies. They have access to a lot of our information, and they make insurance policies on the basis of analysing big data which humans can no longer process (because of the amount of information). Do you think that is a positive trend?

I think it’s probably okay. Again, we are careful about data sharing for precisely this reason: we don’t want our data to be leaked by others. There are some natural limits to data sharing. Most people want the government to share the data, and they do not want companies to map the shared data. I’m sure there will be more and more restrictions on all that.

I think that’s the tendency. You are right. Two personal questions to conclude. Why did you want to talk with Pope Francis? What can you tell me about that meeting?

Well, you were there, so you can recall what we talked about. I travel the world talking to leaders about the internet, and I had no idea what the Pope’s view of the internet was (so I wanted to ask him). He is obviously a very thoughtful man, and my impression was that he cares a lot about how it is affecting the way people interact with each other. He commented that he is worried that people are no longer having dinner‐table conversations because they are all on their smartphones or tablets.

Right.

He’s concerned that this technology be available to everyone, not just the elite. He listened to my view about the spread of technology. If I can help His Holiness and you guys in other ways, let me know.

We are working with the YouTube people and trying to see how we can best use that avenue to transmit the Pope’s message. It takes time, but we will make it happen.

I am a strong proponent of getting the Pope a two‐billion‐viewer TV station, so that everyone who is Catholic can see the Pope, humanizing him and his message. It’s just a new technology to get the Pope’s message out, a message that’s been true for hundreds of years.

The Holy See is very grateful to you, because the Pope has an amazing image and persona in the media, yet very few clicks. So, we need to do something.

There is obviously something wrong, something we can be doing better.

Here is the last question. I think you’ve been asked this before. What motivates you today? By almost any standard, you have achieved so much. What, in your professional life, still makes it meaningful to go to the office every morning?

My own view of life is that you have to make a difference in some way. That is why God put you on earth, and you have to maximize whatever skills you have and you need to have a good time doing it. A combination of impact and enjoyment is the best life, and if you look at health and happiness and income levels, it is all correlated with: ‘Are you doing something you find meaningful, and are you happy doing it?’ I am in the bizarre situation where I can decide how I spend my time, and so I care a great deal about spreading the message of the internet and the empowerment of individuals. There are always issues. To me, at the end of my professional career, to be able to have had the kind of impact that I have is very satisfying. Remember, I started off as a programmer.

You were a computer engineer?

Yeah. It’s very fulfilling. As an example, yesterday (as you know) we met with President‐Elect Donald Trump here in New York, along with other leaders of digital technology. Well, I asked myself, ‘What am I doing in this room?’ I’m just a boy from Virginia [laughs].

I try and remind myself every day that I am very lucky: opportunity came my way and I took advantage of it.

I guess it was a combination of being the right person at the right time. It really comes down to motivation. You are motivated to continue doing what you are doing.

Yeah. Wouldn’t you?

I consider life a vocation. Although you didn’t use the word ‘vocation’, you implied a similar concept. I am also fortunate enough to love what I do. I don’t do it as a chore, because I have given my life to being a priest.

If you think about it, you get up in the morning, and you do what you want to do. In your case, you are serving your religion, you’re serving your students, you are an intellectual, you are writing a book. Sounds like a pretty good deal to me [laughs]. Right?

Well, I think you’re a better writer than I am. You’re absolutely right. I do know many people who grudgingly go to work every day, and they don’t enjoy it but they have to do it to make an income.

Is there some way that we can make a difference? I don’t know any other way to say it. Everybody serves the world and their community in a different way; this is how I do it. There is a strong correlation: when you stop, you die.

Absolutely.

It seems to me that we just say yes.

That’s a great way to end this interview: a significant message that people are going to take away from reading the book.

People will read your book. Let me know if I can help. I want to stay with you guys, I want to stay connected.

We will. Thanks so much. Take care now.