Watch Cognizant’s Ben Pring on Infinia ML’s Machine Meets World.

Cognizant’s Ben Pring Brings a Warning from the Future

Join Machine Meets World, Infinia ML’s ongoing conversation about AI

James Kotecki
Jun 9, 2020

Episode Highlights

This week’s guest is Ben Pring, the Director of Cognizant’s Center for the Future of Work.

“…if we don’t want to be seen as the cigarette executives of 2040 or 2050, I think it behooves us, as people who love technology, who are in positions of some responsibility or a lot of responsibility, to use technology in a way that we won’t be the villains in the movies that are made about what went on now in another generation or two.”

“But we seduced our way into this world through our use of technology and giving up the data that we didn’t understand had any value. . . . I think this is a very, very big issue, and things like GDPR and other regulation in other parts of the world, California, et cetera, are very early, rudimentary attempts to try and get back on top of this.”

“This is the work that people of goodwill in the AI community, in the machine learning community — and clearly there are many people of goodwill trying to grapple with this. But I think these people, the people who want to be on the right side of this in history… I think we need to do a much better job and work much harder to figure out this trade-off. Because the forces, if you like, weighing against this are very, very powerful, commercially motivated, motivated from national security perspectives. It’s going to be a very, very tough genie to put back into the bottle.”

“We sort of spent the last 25 years laying the information superhighway, Al Gore’s information superhighway. We’ve sort of laid the tarmacadam of the Internet. And really, the next 25 years is going to be putting in place the stop signs and the yield signs and the traffic lights and the markings in the middle of the road so we can go 80 or 90 miles an hour or faster and still be safe.”

“Because at the moment, the information superhighway doesn’t have any of that real regulation, material regulation. . . . GDPR and things like that, that’s just the first attempt to do that.”

[Note: The interview took place on June 9th, 2020, one day after the publication anniversary of Orwell’s 1984, not on the anniversary itself as stated below.]

Watch the show above. You can also hear Machine Meets World as a podcast, join the email list, and contact the show.

Photo by Tom Parkes on Unsplash

Audio + Transcript

James Kotecki:

Hey! This is Machine Meets World from Infinia ML. I’m James Kotecki, and we’re talking artificial intelligence live with my guest, the Director of Cognizant’s Center for the Future of Work, Ben Pring. Ben, welcome.

Ben Pring:

Hi, James. Great to see you.

James Kotecki:

It’s great to be chatting with you again. We chatted about four years ago, back when I was at a different AI company, and you’ve been in this space for a while, thinking about the future of work. Your title is cool. Your title is futurist. Maybe we can start with that. What does a futurist do? What does that mean?

Ben Pring:

I’m still trying to work that out, James. I don’t particularly like that title myself. It sounds a bit pretentious, I always think. I’ve been an industry analyst, a tech analyst, for 35 years now. I was at Gartner for many years, and I still think of myself, in a way, as an analyst, even though I’m now working for a vendor, for Cognizant. My team, the Center for the Future of Work, we still really act like independent analysts. But looking at technology, in this particular gig, we’re trying to look a little bit further down the road. A Gartner analyst typically is kind of looking more at the here and now: contract terms, vendor selection, stuff like that. We tend to look a little bit further down the road at things that are coming towards our clients, perhaps on the edge of their radar. So yeah, it’s great fun. And as I say, I don’t particularly like the term, but as you say, it does sound quite cool.

James Kotecki:

It sounds cool. And so we’re here to talk about, in part, your new report, which is about what happens after the virus. But I want to start, actually, by conceptualizing it a little bit further back to a book that you wrote three or four years ago, and I’m going to read the title to make sure I get it specifically right. What To Do When Machines Do Everything: How to Get Ahead in a World of AI, Algorithms, Bots, and Big Data. So this is an AI show, obviously. So what did you mean when you said “everything,” as in, machines can do everything, in the title? What did “everything” mean when you wrote that three or four years ago, and what does “everything” mean today?

Ben Pring:

It’s a great way of phrasing that question. I suppose, to be honest, there was a little bit of clickbait in that. There was a little bit of exaggeration in that. I suppose if we’d titled the book What To Do When Machines Do 10 Percent of Your Work, it wouldn’t have been quite as interesting.

Ben Pring:

Really, what we were trying to do then, James, and remember, this is four years ago, we started researching and writing it about six years ago, was to signal to the clients that we work with, big corporations, big banks, big airlines, big government departments, that something was bubbling up that perhaps people were only just hearing about. And obviously we’d all been talking about AI in the movies and in books for a million years, but we were trying to signal that this was becoming something that corporations should take seriously. That it wasn’t fringe, it wasn’t sci-fi anymore. It was going to be a critical determining factor between companies that could become really digitally enabled, really move into the future of their work, and companies that were perhaps just wandering along, thinking this is something they could continue to ignore.

Ben Pring:

So the title was deliberately provocative. Anybody who read the book, once you got a couple of pages into it, realized that we weren’t actually saying “everything.” In fact, we quite strongly contrasted our view of the impact of automation, in terms of substitution of human labor, with a famous report that many people know, which came out of Oxford University in 2013, saying 47 percent of jobs would go away. We never bought that logic. We were sort of saying that, in a way, with all due respect, from our perspective, that was a bit of a head fake. It implied a very short-term, rapid, historically unprecedented collapse in employment.

Ben Pring:

But in reality, what was going to happen was fundamental change, incremental change, developmental change, bubbling up, becoming more and more important for big businesses and changing every job, but not in that dramatic way. But at the same time, if you thought because it wasn’t going to happen in that dramatic way, you could ignore it, that would be a mistake, too. So we were trying to navigate, if you like, tread that fine line between signaling this was a big deal, not trying to scare the horses, but telling companies they had to take this seriously. It was going to be a material part of competition going forward.

James Kotecki:

Let’s go right to the beginning of the pandemic, but not to the pandemic yet. Up until February 2020, did the evolution of AI in business surprise you? Did it go about as you had predicted?

Ben Pring:

I don’t think it surprised us. The amount of interest that the book generated, the attention and invites to speak at events, in person before this, and now on platforms like this… It’s been an extraordinary ride for us, the three authors of the book, just personally, and then for Cognizant more broadly as we built out capabilities to make real some of the ideas we talked about, helping our clients turn ideas into reality. It’s been an incredible ride, and it still continues to be. People are still fascinated by this, and I think more and more people have got the memo now.

Ben Pring:

So the “everything” is still somewhat provocative, but boy, the amount of activity that’s going on… And I’m sure many people on this session today are either involved in that as practitioners or as consumers or as analysts watching it. I think you’d be hard-pressed to argue convincingly that AI was just hype and we’re in this Gartner-style trough of disillusionment and nothing’s happening. I don’t think anything could be further from the truth.

James Kotecki:

So you’ve mentioned mass unemployment as something that the other report kind of talked about, but you didn’t believe in as much. And you do address this idea in the book of, some people think there’s going to be massive job losses because of AI. You had a different perspective on that. But I want to flash forward to now, where we do see massive unemployment, not because of AI, but because of COVID-19. So many, many people are out of work. Some people think that that will increase the amount of AI and automation even faster as companies adapt to not needing as many people or never having needed those people in the first place, and now they’re gone, so they can just put the machines in. And on the other side of that, we see all these processes like unemployment claims, for example, that are rooted in computer technology that’s decades out of date and obviously needs some kind of massive upgrade, maybe with the help of something like AI. And so I wonder how you think about AI now in the context of COVID-19 and the employment and economic situation.

Ben Pring:

Yeah. It’s a good time to be a COBOL programmer, isn’t it? Coming back out of the woodwork. I think, in fact, a lot of people have come to a similar conclusion: that COVID is almost acting as an accelerant into a future that people like me, other futurists, have been talking about for a while. And I think it’s quite interesting to think again about this notion of man versus machine, the human versus the machine, in the context of what’s going on, because I think you can see the two sides of the argument becoming quite refined and quite sharp.

Ben Pring:

On the one hand, certainly there’s a very strong argument that in a viral age, the viruses that software gets are less harmful than the viruses that people get. You could debate that. So there’s an added impetus, an added catalytic factor, to deploy bots within call centers, within processing centers, and, like you say, to really do scaled digital transformation of stuff that hadn’t been transformed and now, we can see, is really inadequate, really not fit for purpose in the modern world, particularly at a time of crisis like this.

Ben Pring:

The other end of the continuum, the other side of the argument, is that you can already begin to see the optics of big companies crafting their marketing messages around the notion that we’re bringing people back into the workforce. I’m sure everybody saw, within days of COVID becoming real before our eyes, all the ads on TV, car companies, banks: “We’re all in this together.” It was incredible how the messaging coalesced around that.

Ben Pring:

I think very soon, the messaging is going to coalesce around “We’ve brought back 10,000 people to work. We’ve brought back 20,000 people to work.” And there will be competitive upside and differentiation for that. And if the messaging doesn’t tally with the fact, “Well, we’ve just laid off 10,000 people in our data center or our help desk or in a customer service desk through the use of bots, in essence…” I think that’s going to be a tricky message and perhaps may inflame, obviously, what is a very combustible period of time for us all as more people are quite rightly realizing that one of the characteristics of this increasingly technological age that we live in is this sort of notion of platform-based economics, winner-takes-all economics. And we can see the winners who have leveraged data, leveraged AI, leveraged next-generation technology, getting wealthier and wealthier and more and more powerful, and pre-digital people, analog people struggling to keep up with that. And I think that explains a lot of what’s going on at the moment.

Ben Pring:

So that question is a complicated one to answer. And I can imagine there are a lot of boardroom people thinking that through at the moment, and perhaps the sequencing of how that plays out.

James Kotecki:

And beyond just the optics of bringing people back is the question of: Does this stuff work without people in the mix, without a human in the loop, oftentimes? Are we really ready for prime time where we can automate vast swaths of the American workforce? Or does the technology still rely on human input, maybe in different ways than we might have been expecting or having before, but in terms of making sure the data going in and the data coming out is actually accurate? This stuff is still obviously very reliant on people for a good deal of that, right?

Ben Pring:

Oh, yeah. There’s no doubt. Again, it goes back to our arguments in the What To Do When Machines Do Everything book, the notion that this huge-scale, overnight, sudden replacement of people with machines, I don’t think is accurate. However, what you can see… And again, people who are close to this will be seeing this… Just as one example, and it’s a real drag… I can’t mention the name of this company. It’s still kind of a stealth company. But I got a demo of what I would consider probably the most sophisticated machine learning-based call-agent technology that I’ve seen, and I’ve seen a lot. And this really kind of blew my mind, in a way, because the way the vendor, who will be coming out of stealth soon, was pitching this was that the technology was, in essence, a coach to the human.

Ben Pring:

And for people of my vintage, you may remember a movie called Broadcast News from the 1980s with William Hurt. It was set in a TV studio where the presenters, as they do today, have talkback in their ear. So the producer is talking to them, saying, “Go to the advert break now.” This was basically an ML version of that, so the human could have a much more intelligent conversation with the customer.

Ben Pring:

And it was so good and it was so powerful that as I came away from that meeting and put it in my head together with the advancements in CGI and the notion of avatars and, on the dark edge of that, deep fakes, I could begin to see very, very clearly that that coaching would soon overwhelm the person, and that you could replace that person with a CGI, avatar-based version of a call center agent. To be honest with you, James, that really scared me a little bit, because that was perhaps this more dystopian view which we’ve all kind of processed through in our minds, and perhaps damped that theory down, damped that dark vision down a little bit. But it sort of came back, almost like a wave, another wave coming back into my mind. It was really quite powerful.

James Kotecki:

I want to stay on the topic of AI, but first I want to bring in the After The Virus report, which is the most recent thing that you’ve done, and talk a lot more about this in the context of COVID-19, the pandemic. There were, I think, nine sections of this report, covering everything from air travel to the future of working at home. One of the sections that caught my eye the most, especially as it relates to AI, was about privacy and security. And I want… Maybe just have you summarize a little bit what that section was positing.

Ben Pring:

Obviously, for many people on the session today, this will be familiar. We’ve been trying to navigate our way through this data-centric world that we’re increasingly living in, where the power of data is clearly more and more the key determinant of success or failure, but also recognizing there’s a very real dark side to the power of data. The book we wrote before the machines book was a book called Code Halos. This was in 2014, when, again, we were calling out the role of data, the importance of data. And we wrote a chapter in that book called The Dark Side of the Code Halo, where we laid out a thesis and argument that said there would be competitive advantage for businesses to treat data well, to maintain privacy, not to be creepy, in the way that we put it. And the companies that acted in less honorable, ethical ways would be punished by the market.

Ben Pring:

Fast forward to 2020. There’s been very little punishment for any businesses using data, abusing data, misusing data in any particular way. So that thesis, I don’t think, has come true quite in the way that we imagined. Perhaps we’re still working our way into that, and I know we’re going to talk a little bit about ethical AI going forward.

Ben Pring:

But my particular perspective on this has sort of changed quite a bit. I don’t know whether you can see over my shoulder in my office there. There’s a book called Surveillance Capitalism, written by a Harvard Business School professor called Shoshana Zuboff. I think if people haven’t read this, they should. It’s one of the most important books of our time, because Zuboff really changed the way I think about this.

Ben Pring:

And the key metaphor that she laid out in the book, which I think is completely central to what’s going on at the moment, again, with COVID as this accelerant, is that the way that data has been treated and monetized by, you know, no names, but the usual suspects in the last few years is akin to the way that the conquistadors treated lands in South America in the 16th and 17th century. And what she means by that is that the indigenous people of South America didn’t conceptualize of land as having any value. And yet when these foreigners came, these Europeans came and thought of land in a completely different way, recognizing the value of it, they basically had the land rights before the indigenous people realized there was any worth to that land. And her metaphor is, that’s what’s happened with data. Before we as individuals, or before 95 percent of companies have conceptualized of data as having any real value or having any mechanism to be able to monetize that, the game is over. And I think that’s very, very powerful.

Ben Pring:

So relate that to what’s going on with COVID and with contact tracing and things like this. But the rhetorical question I posed in the After The Virus report, in this chapter you mentioned, is: We may be healthy through sharing data on these things, but are we, at the end of this process, going to be free? And I know that sounds melodramatic, but I do think the stakes are this high. I do think the stakes are this existential.

Ben Pring:

And some people may have noticed that today is the anniversary of the publication of 1984, published in 1949 on this day. I read it as an impressionable, angry teenager, and I remember that the overwhelming thought I had coming away from that was, “How did that happen? How did Orwell’s England of 1948 turn into Airstrip One of 1984?” And that always went through my mind. Dave Eggers, some people will know his book The Circle, which came out in 2013. That was kind of Eggers’ answer to… Because I always thought of that book as the prequel to 1984.

Ben Pring:

But we seduced our way into this world through our use of technology and giving up the data that we didn’t understand had any value. And now fast forward to today. We’re sort of in this world, and we, frankly, don’t know how to reverse out of it. And I think this is a very, very big issue, and things like GDPR and other regulation in other parts of the world, California, et cetera, are very, very early, rudimentary attempts to try and get back on top of this. This is the work that people of goodwill in the AI community, in the machine learning community, and clearly there are many people of goodwill trying to grapple with this. But I think these people, the people who want to be on the right side of this in history… I think we need to do a much better job and work much harder to figure out this trade-off. Because the forces, if you like, weighing against this are very, very powerful, commercially motivated, motivated from national security perspectives. It’s going to be a very, very tough genie to put back into the bottle.

James Kotecki:

And you talk about being on the right side of history. But as we know from 1984, if you win, you can basically rewrite history to say whatever you want, right?

Ben Pring:

Every day. [crosstalk 00:20:42]

James Kotecki:

And of course, we see this playing out now in the protests as well, where people are being warned about using their phones, for example, in protests, or showing their faces to cameras in protests, because that footage can be fed through algorithms that could then identify them later. Obviously for the last few years, we’ve seen this kind of technology in Hong Kong being deployed by police there to identify people that the state does not like. And so the challenge, I guess, is, you and I, we work at for-profit businesses, right? We’re in business to make a profit and be successful and move the economy forward like that. So how do people in business, like you and I, think about addressing these things in a way that is going to make a difference or reverse the course that you say that we’re on? Is there anything that we can do, realistically, at this point?

Ben Pring:

It’s a great question, James, and it’s one that keeps me awake at night, frankly. Well, again, topically, people have seen IBM’s announcement overnight, moving away from the development and sale of technology to support facial recognition in policing, and I applaud that enormously. I think that’s a very, very powerful statement the new CEO of IBM has made, and I take my hat off to him. So I think, again, it comes back to this notion… And I am nervous and cautious about sounding melodramatic about it. Because you’re right: The commercial motivations to do these things are extremely strong.

Ben Pring:

But I can’t help thinking… And again, some people, perhaps, like this analogy, this metaphor. Some people, perhaps, won’t. But if we don’t want to be seen as the cigarette executives of 2040 or 2050, I think it behooves us, as people who love technology, who are in positions of some responsibility or a lot of responsibility, to use technology in a way that we won’t be the villains in the movies that are made about what went on now in another generation or two. People who have seen the movie Thank You For Not Smoking. People who, you know-

James Kotecki:

Thank You For Smoking.

Ben Pring:

Yeah, Thank You For Smoking. Or going back even further, my third movie reference: The Graduate, 1967. “I’ve got one word for you, Benjamin: plastics.” I’ve been joking for a while now that you could update that and say, “It’s algorithms.” But the people in the plastics industry are not particularly held in high esteem now, because we can see the damage that has been done, the waste in the oceans; those people weren’t particularly interested in the sort of damage they unleashed, if you like. And that’s clearly the discussion we’re beginning to have around some aspects of technology at the moment.

Ben Pring:

The next book that we’ve got coming out later this year is called Monster: Taming the Technology That Rules Our Lives, Our Future, and Our Jobs. And again, it stems from this notion that we, you, most people on this call, we love technology. We’re the original nerds. We’re the original Trekkies. But we’ve got to make sure that this is used for good, this is used for a utilitarian good. And it isn’t the source of the worst aspects, our worst angels. That’s the concern I have at the moment.

James Kotecki:

Yeah, holding AI accountable is certainly a theme for my company. It sounds like a theme for you as well. I like to think conversations like this are part of it, where we continually bring this up, not just to people in technology, but to people outside the industry as well, to let folks know that it can’t just be a few nerds, as you say, who are making all these decisions. It has to be society as a whole making decisions about this kind of thing. And then, beyond the kind of moral work that we all have to do internally in ourselves, which is probably the hardest work that we have to do, to convince ourselves and others about what is right, and then to go do that: Do you see technological solutions for what is, at least partially, a technological problem of holding AI accountable?

Ben Pring:

Yeah, and this is an argument that the Randians, if you like, will probably suggest: that ultimately, there will be commercial motivations for good solutions to bad problems, and we should let the free market work those things out. And certainly one area where AI is increasingly becoming a central, critical success factor, which is a funny way to think about it, is cyber security. The unwinnable and endless war between the black hats and the white hats. That’s going to be predicated on who can leverage AI in the best fashion, for whichever side of that battle you want to be on. So for every malign act that a bad actor manifests with AI and machine learning technology, the white hats have got to counter and get further ahead.

Ben Pring:

As Aaron Levie, the CEO of Box, famously said: if you want a job for the next few years, work in tech; if you want a job for the rest of your lifetime, work in cyber security. That is the kind of battle that’s going to rage ahead of us. So clearly, each solution creates new problems, and then another solution solves that and creates another problem. We always sort of joke that the real mother of invention is things sucking. That’s the constant evolution of the human condition, and tech, obviously, is going to be completely central to that. But ultimately, it’s up to, as I say, people of good faith to make sure that we come out on the right side of this.

James Kotecki:

And is leaving it up to the market and people of good faith enough, or are there policy and regulatory solutions here, too, that need to be in place?

Ben Pring:

Yeah, I mean, one way I’ve talked about this for a while, James, is that we sort of spent the last 25 years laying the information superhighway, Al Gore’s information superhighway. We’ve sort of laid the tarmacadam of the Internet. And really, the next 25 years is going to be putting in place the stop signs and the yield signs and the traffic lights and the markings in the middle of the road so we can go 80 or 90 miles an hour or faster and still be safe. Because at the moment, the information superhighway doesn’t have any of that real regulation, material regulation. GDPR and things like that, that’s just the first attempt to do that. Again, if you’re an optimist, if you’re positive, and clearly we have to be, ultimately we will work this out.

Ben Pring:

If you’re a bit more cynical and skeptical, and the glass is half empty… I’m just always reminded of the inquisition of Mark Zuckerberg on the Hill, when he was asked what the business model was by one of the leading senators. “We sell ads.” If our legislators don’t really understand, again, in Zuboff’s conception, what something is worth, how do they make the rules for that? Again, that’s a big kind of philosophical question for our times, I think.

James Kotecki:

Well, thank you for posing it, and thank you for ending on at least a quasi-optimistic note. Ben Pring is the Director of Cognizant’s Center for the Future of Work, and I really appreciate you being here, Ben.

Ben Pring:

Great to talk to you, James.

James Kotecki:

And I’d like to thank everybody for watching.


Originally published at https://infiniaml.com on June 9, 2020.

