Fast Forward: A Conversation With AI Expert & Entrepreneur Vivienne Ming
If you are doing the same job you were doing a year ago, Vivienne Ming is going to replace you with AI.
By Dan Costa
Dan Costa: At CES, you gave a presentation. Let’s talk about that, let’s talk about artificial intelligence, and take it from there.
Vivienne Ming: Sure. There seem to be two big conversations at CES, which are AI and VR, and there are a lot of reasons for that. My panel, which had some bigwigs from IBM and Philips and others, all they're talking about [are] the why and the implications. I'll be blunt: there's only so much that the head of Watson or the head of Accenture's technology division can do on a stage besides say, hey, this is great because …
Dan Costa: They’ve all got products.
Exactly. I think they're very honest about it, but I think one of the missing ingredients, which we tried to bring to this panel, is: what are the big-scale implications? Why is everyone so excited? Is it real? It is. Should you be excited? You should, but the changes are probably not coming tomorrow. Also, these changes have real consequences.
When we talk about AI, a lot of people think “Alexa.” But when you talk about AI, what do you mean?
I'm an AI snob. In fact, I'm going to go so far [as to say], 'We are going to redefine AI,' and we did it on stage. But these are just voice interfaces for database search. There's some neat stuff behind the scenes, but it's automation. It's great automation, I'm not knocking it. AI to me, the most basic and tangible example, would be the face recognition in images that Facebook and Google can do. AI is some aspects of self-driving cars. Not every one, but a lot of them.
In a sense, Andrew Ng, who is the Chief Scientist at Baidu, put it really well: AI is anything that feels uniquely human but that we can do in maybe one to five seconds. Now we can build deep neural networks that can do anything you and I can do on that kind of cognitive scale. If I can, for example, look at a resumé and think after about five seconds, 'Ah, maybe I won't hire this person,' I can build an AI to do that differently and better.
Yeah, and you’ve done that.
We have. We’ve done that work at my previous company, Gild, where I was a chief scientist. I am the chief science advisor at a really cool company in Chicago called ShiftGig. I advise a lot of companies in the HR space. There are some amazing potential technologies in this space and understanding what can be done.
Again, think about a complex judgment. Do I know this person? Are they happy, are they sad? Should I hire them? At least those snap judgments. We can really automate that sort of thing nowadays. There are some implications to that, but that's what I'm getting at with AI. What's interesting is that just as it can be hard for me to tell you why I recognize you, or why I would hire this person, it turns out we needed deep neural networks that are almost as hard to understand in order to solve those problems.
Let’s break into that one example of evaluating someone for a job. I think it’d be interesting to see how that works. Does it just scan the resumé and look for keywords? It’s a little more sophisticated than that.
Yeah, there are a lot of different approaches you can take and there are many different companies that have worked hard on this problem, some internally such as Google. In our case we were focused on what’s called sourcing, which is to say before someone ever enters your hiring pipeline should I even pay attention to them? The interesting thing, and what does make AI exciting, is that we are able to turn that on its head. Instead of your sourcers having to actively go out and look for people, we could just send our AIs out and look at everybody. Then say to your recruiter, ‘these are the 100 people you should consider.’ We can’t tell you yet who to hire, though people are working on that.
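The sourcing flow Ming describes, scoring everyone in a huge pool and handing the recruiter only a shortlist, can be sketched roughly as follows. The scoring model, names, and pool size here are illustrative stand-ins, not Gild's actual system, which is not public:

```python
# Hypothetical sketch of AI-assisted sourcing: score every candidate in
# a large pool, then surface only the top 100 for a recruiter to review.
from dataclasses import dataclass
import heapq

@dataclass
class Candidate:
    name: str
    fit_score: float  # in practice, produced by a trained model

def shortlist(candidates, k=100):
    """Return the k highest-scoring candidates from the full pool."""
    return heapq.nlargest(k, candidates, key=lambda c: c.fit_score)

# A made-up pool standing in for the "122 million people" scale.
pool = [Candidate(f"person-{i}", i * 0.001) for i in range(10_000)]
top = shortlist(pool, k=100)
print(len(top), top[0].name)  # 100 person-9999
```

The design point is the asymmetry Ming calls "promiscuous": scoring is cheap, so the AI can consider everyone, while the expensive human judgment is spent only on the shortlist.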
We can narrow this down and say these are the high-probability hires for you, and we brought them to you. That's part of what's interesting. It's not about replacing what people can do, except where it's really repetitive. Who would actually want to look through all the photos on Facebook? I'm a neuroscientist by training, and we have this term 'promiscuous.' Eye movements are promiscuous because they don't cost me very much. I can look around really easily, whereas walking around and touching everything, that's costly. We made sourcing promiscuous. When a company hired Gild, it was essentially considering each of 122 million people for every job it had.
Yeah, it’s applying the scale. It’s human judgment, but applied at scale which opens up all sorts of opportunity.
I think that’s where we came on the panel as a conclusion and where all of my experience in this domain has taken me is we need to redefine AI from artificial intelligence to augmented intelligence. You need to build complementary tools that do what humans really don’t want to or can’t do well vastly better. In some cases these are high-value jobs like radiology, but who wants to be the first person to die from a cancer that an AI could have diagnosed? Instead, you want to keep a human on the job.
Right now AIs can’t manage the whole human experience of healthcare. Having an AI-powered GP who I could go in in one visit, have every test effectively done by that doctor right there and then together we can make a decision on my treatment plan, that would be an amazing experience.
I’m skeptical whether that experience is going to be delivered, but that’s the value proposition of AI.
Dan Costa: One of the interesting stories is from when Deep Blue came out and started competing in chess competitions. It finally beat the human chess masters, but then there was another wave of experimentation where a chess master paired with a computer could actually do even better than either one alone, which I thought was a great story and possibly a pathway forward.
This has been done in healthcare as well. In the specific case of epidemiological diagnostics, what they found is that the best combination was essentially the doctor acting as another input to the AI, with the AI making the final decision. Now, diagnoses, those are cut and dried: does this person have this disease or not? When you expand that to 'how do we treat it?', what does that mean for someone who's young and single versus older with a big family? That's something we don't have AI to deal with right now. That is not a one-second judgment.
There truly is power. I don’t think people should be bent over in fields picking strawberries. I don’t think [miners] should have to go a mile under the ground to mine coal. We can build systems to do that. I don’t mean I can imagine. People are building those technologies right now. They can be deployed and they are better, more efficient, and less costly than a human solution. Then we got to think, what am I going to do with all of those people?
Yeah, that was my next question which is that we’re both techno-optimists, I think, but people ask, ‘aren’t all these people going to lose their jobs?’ I tell them yeah, if you’re driving a vehicle on American roads right now you may not have a job in 25 years, you may not have a job in 10 years.
Yeah, and it goes bigger than that. One of the misconceptions among some very smart people: Jared Bernstein recently had an article saying, now look, there haven't been any productivity gains in the US economy, so it can't be automation, or AI more generally. It was a well-thought-out piece, but his core assumption was that automation will only happen at the low end, that we're talking about automating factory work, automating agriculture. Actually, we're going to see this automation and this displacement top to bottom.
That's part of what I actually … I'm genuinely optimistic about what AI can do. I build it. I have built systems for diabetes and for bipolar disorder, for finding jobs, for education. I'm truly hooked; I am part of the problem. But as much as I believe in its potential, that doesn't make the other problems go away. We tend to work under a very optimistic assumption, but yes, people are going to lose jobs, and they're going to be financial analysts and farm workers and doctors and long-haul truckers.
The majority of articles written today are written by Narrative Science and its spinoffs, not by human writers. It's not very interesting writing, but it does look like someone wrote it.
It communicates the facts and if that’s all that you’re doing then that is not a sustainable career. You have to have value.
I have this notorious quote now that if you’re doing the same job today you were doing a week ago, someone like me is going to come along and automate that job.
It’s making you very popular.
Yes, but the thing is, the reason why I’m neither a pessimist nor a utopianist is because I think there are things we can do to change the direction of this story. Let’s take a very specific example that I brought up during the session. Some friends of mine have a startup called Pacific Labs. It’s really cool, their original vision which they’ve moved slightly away from is hardware. It’s like a wand, a sonogram with built-in AI, deep neural networks right there. You run it over you and it’s doing diagnostics in real time as it’s running over you.
It's a tricorder.
Basically, yeah. It's exactly that space, and I think that is incredibly cool, and there are two possibilities. One is I go see my GP and they do that and they get all the information. They've mastered other tools. They actually have a pretty good understanding of what's going on inside that wand, the tricorder. They know as much data science as they do biological science, and then we together come up with this treatment plan.
The flip side is essentially that I go to Jiffy Lube and a pair of legs carries that wand around. It feeds into a computer which spits out a treatment plan, and then that's it. There's no flexibility and there's no … The way I pose this is, in one case we have 120 percent of the cost but 200 percent of the value. In the other case we have only 80 percent of the value, but it's only 20 percent of the cost. Those kinds of labor cost differences make it hard for me to believe that we won't push towards the Jiffy Lube model. That's a real fear of mine: a massive downscaling. I think people will have jobs, but they are not going to be jobs that people want. It's going to be sophisticated versions of the service industry: opening the door for people, running an AI, and then sending them on their way.
The vision that AI will allow creative people to do amazing things and solve new problems that have never been addressable before is legitimate. But it's not going to magically happen, because what that story misses isn't the AI; it misses the human. The simple truth, if you don't mind the metaphor, is that we aren't building people to be creative problem solvers, to be adaptive. We're building them to pull levers, sometimes very complex cognitive levers, but still, it's lever pulling. Those people are not going to be ready for an AI-enabled job.
Dan Costa: I liken it to the transformations we've had in the economy before, like the Industrial Revolution. We've lost all those farming jobs. We used to need 80 percent of the population to feed ourselves, and now we get by with, I don't know, less than 10.
Yeah, it’s tiny although in some places such as in Sub-Saharan Africa it’s a massive part of the economy.
As we make this transition we’re going to need a similar type of restructuring to deal with this automation revolution.
I'm going to get a little political. I'm going to say there is a big split between people who see the world as fundamentally dynamic and uncertain and people who see the world as static, and both views carry real moral associations. For someone like me, a scientist, a West Coaster and so forth, it's like, wow, if my startup doesn't work it's my fault. I need to adapt, I need to change, I need to fix things. I think in a lot of places they see … Wait, this isn't right. Why are there no coal-mining jobs? The idea that we don't need them anymore is just not part of that moral equation. It is wrong that these have disappeared.
The reason I bring it up in this AI discussion is that I think these are actually very general human phenomena. We can talk about globalization, AI, automation in general. They all have the same story behind them, which is: I don't need you anymore, not what you're currently bringing. But they all brought wonderful things too, in some sense. Even globalization, which people tend to look at askance, gets me a $10 shirt which I could not get out of an American worker. If I'm not willing to pay more than $10, I shouldn't be so upset that my neighbor doesn't have a job anymore.
These are genuine, complex problems that I think have solutions, but they’re solutions we have to actively decide to take a step towards. We need craftsmen, we don’t need tools with just legs carrying them around. We need adaptive, creative problem solvers that can then take these amazing technologies and do something amazingly creative with them.
How do we get to that point? There's going to be a creative class that specializes in this on the coasts, but what steps do we need to take so that we can have a workforce that is gainfully employed and working in ways that support themselves?
One thing we need to do is move education away from the tool side of the equation, tools being all the skills and knowledge where I can give you a test and say, do you know how to do this? What actually matters is my ability to understand you, to control my emotion, to assess how the rest of my day will go. Here's a big 'who knew': those are the things that have always been predictive of life outcomes. AI is just accelerating this process. We will still need, as you call it, this creative class. We will need research engineers and research doctors and artists and all sorts of people whose job is explicitly to push into the unknown, because if they're not pushing into the unknown, why aren't we automating it?
I think the problem … Let me put it this way: if you were thinking about structuring your company for the future, you've got two solutions. Start thinking about how hard it is to compete for top talent right now, and then imagine that that's the only talent you will need. What is that going to do to the scorched-earth talent markets that are out there today? It's bad in Silicon Valley, but wait till this comes onboard, when that minimal skill level, if you will, that minimum craftsmanship, goes up and up and up.
That's one side. If we don't figure out how to increase the size of the creative class, it will become a crazy competition for talent. The flip side is how we actually do this, and the answer is to stop teaching just the tools. Why are we teaching programming in classes now? Believe me, I love math, I love programming. In terms of the cognitive impacts, I think about the world fundamentally differently because I'm the kind of person who uses 'orthogonal' in casual conversation. It changes how I think about the world to know something about these things.
There’s an implicit promise, hey, if you learn how to program in high school 10 years from now there’s going to be a programming job that pays $120,000 a year. No, there won’t be. I don’t think there’ll be such jobs for anybody because again I’ve got some friends that are building a deep neural network that can program apps from scratch. It may not be a mature technology yet, but boy, it will be by the time those people hit the job market.
We need to stop with the focus on tools and think how do we build people that have strong cognitive skills, strong problem solving, metacognition, social and emotional intelligence. These are the things that are actually valuable. Then it turns out once I have those, once I’ve got a bunch of craftsmen I can actually teach you all sorts of tools and then augment it by AIs that can quickly and adaptively change.
Say I'm an engineer and I have an AI that can quickly debug my program; that seems powerful today. Well, three years from now I don't need to be writing programs anymore. But what I've learned about solving problems in programmatic ways is still really useful. Now essentially I'm like a program manager directing a bunch of AIs to create programs. I understand problem solving, I understand how these programs are supposed to make the world better for my customers. I can't do that retraining with someone who isn't a craftsman.
Where do we start because I think that when you look at the educational system in the U.S. it is about tools. We’re not teaching typing anymore, we’re teaching programming and we think that’s a huge step up and it is in a lot of ways, but it’s still tool-driven and I think in part because we can measure that.
Oh, yeah, that's a big part of it. I'll say another thing, and this is a bit of an aside. I think one of the reasons why VR and AI are so big and so popular here, and this is coming from someone who appreciates both of them, is because they're a whole bunch of new products that we can sell. I'm displacing the television and selling a whole new line of product, and with AI, my God, IBM and GE and Amazon and Google want to own that platform. They're not talking about that, but of course they do. It is huge to own the AI behind everything. So that's a little bit of an aside.
What do we do with people? I do a lot of focus on families. There’s this wonderful work, a guy named Heckman at the University of Chicago, a Nobel Prize winner. He just released a report. He looked at what happened if you did high-quality childcare starting at eight weeks through five years of age. You may not think this has anything to do with AI, but remember I want a world of augmented intelligence, not artificial.
What he found was … He then tracked those kids to age 30, a profound change in their life outcomes. These were poor, single moms. It was transformative for them, it was transformative for the kids. His estimate was … The return on the investment was 14 percent every year. I wrote an op-ed actually about a year and a half ago saying if kids were bonds they’d be the backbone of the world economy. They return better than any financial instrument out there.
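Heckman's 14 percent figure is an annual return, so it compounds. A quick arithmetic sketch with a made-up $1,000 stake (the dollar amount is purely illustrative; only the rate and the 30-year tracking window come from the conversation):

```python
# Compounding a hypothetical $1,000 investment at Heckman's estimated
# 14 percent annual return over the 30 years the study tracked the kids.
principal = 1_000.0
rate = 0.14

for year in range(30):
    principal *= 1 + rate  # value grows by 14 percent each year

print(round(principal))  # about 50950, a ~51x return
```

At that rate the stake roughly doubles every five years, which is the force behind the "if kids were bonds" line.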
That started with Head Start experimenting, and now you're going all the way back to eight weeks and you still see returns.
The cognitive side is largely set by, say, ages five to eight. If you don't intervene by then, that's it; all of the raw cognitive ability of a child is pretty much fixed. There are some great recent papers on that; we could go on and on, or at least I could. So your chance to go in and intervene is then. Once you've made a change there, the child can go to the next stage, if you want to think of it this way: focus on their emotional development, then their social development, then the metacognitive. Everything else, everything but the cognitive, we develop throughout the entirety of our lives.
Here’s the exciting point. Not only am I talking about preparing kids to actually be making use of these amazing new technologies, but these amazing new technologies can then close the loop and prepare the kids for them. Of course, that’s a lot of what I do at my company, but we’re not the only ones. To think about what it means for essentially these artificial systems to say let’s create the kind of person that will do amazing things with me and these are long-term …
One of the reasons this doesn't need to be, but is largely, in the hands of government is just that we're talking about a 20- or 30-year investment before it starts paying off. Boy, do I wish companies would start stepping up and saying, you know what, this is in our self-interest. We need this technology, we need this talent, and we've actually found that productivity inside companies increases when you provide these things. Not from the kids; because you make it easy to be a parent, people bring more to the job.
I don’t want to downplay it. I am concerned about AI’s impact in the ways I’m concerned about global warming. I’m not concerned about big, evil AIs taking over the world. I think that’s a bit farcical, but the social stress and uncertainty that comes from this displacement, quite frankly, I think we already are seeing in the elections. Not just here, but around the western world and boy, you think it’s bad here, wait till this starts to play out at real scale here, Africa, Asia, massive numbers of people with nothing but time on their hands which I don’t think anybody wants.
No, it's a formula for … We've seen what happens when especially young men have too much time on their hands and not a lot of prospects. So is there anything that you are particularly afraid of in terms of the future? There are a lot of people who are just afraid of the future in general, but what concerns you most?
I've got this weird dualistic thing: I love the technology. We have to build it. I am involved in a project to predict manic episodes in bipolar sufferers, and it changes their lives. Twenty-five percent of those people go on to kill themselves when they have severe attacks, and you can intervene on that and prevent it from happening. You can imagine the same thing in major depression, or early predictions of Alzheimer's or autism at the two ends of the scale. You have to do those. Those are moral goods.
Why should people die in car accidents, why should people have cancers that go undiagnosed? AI can be amazing, but we can’t lose focus on the social institutions, the cultural institutions that need to evolve along with them. My real concern is we see a world where essentially AI is simply used to reduce labor costs down to nothing. All of the revenue, it’s not so different than some of the things we’re seeing already.
This is a wonderful story for Jeff and Larry and Mark and these other people who stand to benefit quite directly from this. But everyone else is bereft of a purpose, bereft of a sense of control over their life. I don't think a universal living wage solves that problem, nor do I think that 20 years from now a bunch of wealthy people are going to be so excited about paying everyone else to do nothing. I'm genuinely concerned, and not about social costs in some soft sense.
I’ve said this frequently, so I’ll say it again, acknowledging that it’s a bit hyperbolic, but the two social institutions in Africa that will benefit the most from massive displacement are Al-Shabaab and Boko Haram. This isn’t a statement about religion. This is just guys with time on their hands looking for a purpose. They will find it here in the worst possible ways, they’ll find it there, they’ll find it everywhere. We need to build people that can create that purpose for themselves. Those are the kinds of people that are going to flourish in this new world.
You said a universal minimum wage wouldn’t solve all the problems, but could it be part of the solution? Do you think it should be part of the solution?
It certainly could be. Listen, I'm fairly progressive in my politics, although I'm very much a 'prove to me that it works' person. Let's take the idealized version. I don't think people should be unable to become doctors simply because they don't have enough to eat, or need to take a crummy job out of med school because they've got to pay their rent or take care of a sick relative. Yeah, absolutely. That release from want … Part of the moral good of AI is …
I have this friend who built a robot. It can walk through fields and visually distinguish the crops from the weeds, then kill the weeds with a millimeter-precision fertilizer spray: organic farming, faster and more efficient than humans can do it. That is a moral good, but what do you do with all the farm workers? I think I have some pretty legitimate fears, but here's the exciting part. It's unlike the global warming scenario, where this is all about mitigating a terrible catastrophe that might be in our future, and my 'might' there is a very delicate way of putting it.
In this case there's an upside, a massive upside. If we just took the world right now, today, and created a world where, maybe not everyone is part of the creative class, not everyone is a craftsman … right now I'm just going to make up a number: it's 1 percent. What if it were 10 percent? What if it were 20? That would be transformative.
Yeah, and what a lot of people don't realize about all these companies is that while Facebook is huge and Microsoft is very large, they are orders of magnitude smaller than the industrial giants that were actually building things and shipping things. There's great innovation, there's great wealth being generated, but there are not the same levels of employment.
I have to be honest, in my case I prefer to use Hipmunk, but in using Hipmunk I'm effectively putting … This has already happened: putting travel agents out of business. Now, what's really gone on there isn't that Hipmunk provides me a better service. In some ways, if I could have afforded a travel agent, it's performing a worse service. I'm not denigrating them, but it's that exact example. It's 80 percent of the value, but boy, it's not even 20 percent of the cost. It's a tiny fraction of a percent of the cost. Why wouldn't I do that? I'm also wildly antisocial and misanthropic, so then I don't even have to interact with people. Wonderful.
I hate talking on the phone.
Oh, God, please don’t call me.
No, but this is the nature of the tradeoff, which is that they've pushed the margins of that industry down to thin, thin layers where there's really only enough of a market for five or six players to have a meaningful revenue stream. They've done it by forcing all the people out. Everyone likes to say, hey, that's wonderful. Who wants to be a travel agent? Wouldn't you like to do something creative? Well, sure, if that creative thing actually pays my rent. But if I've committed my life to being a pretty decent travel agent, pulling that complex, cognitive lever every day, and now suddenly I'm supposed to retrain and become something completely different and keep a similar salary in life?
No, we're not going to get the same salary. I'm going to get 80 percent of that salary, and the next time that transition happens I'm going to get 80 percent of that salary again. We get a massive downscaling, a tightening of … I'm not pounding my fists like Bernie Sanders here, but a real tightening of the economy with respect to everyone doing the labor. That's where I think we get the true overpromise of AI right now. It isn't 'can it diagnose cancer' or 'can it drive a car.' The true overpromise of AI right now is that no matter what happens, we will all be better off. I think that's really naïve.
It depends who we … defining who we are.
They will be better off. Mark and Larry and all of them will be better off. Will I? Probably. Will my kids? Boy, I don’t know. Will 99 percent of the global population?… Here’s the heart of this story. If people were just waiting to be freed of the burdens of labor, to go be scientists and artists, guess what, they’d already be doing it. You don’t get paid much if anything to be either of those. It’s not like people are passing up on being artists because they couldn’t go to the right school or something like that. It’s because there’s no point in making that effort in your life if you don’t think it’s going to pay off. I’d love for AI to be and this is the augmented kind, to be that transformative technology that truly makes people’s lives pay off, but it won’t happen by magic.
Yeah, and I don’t see any organized response from any of our institutions. I think we’re just starting to wrap our minds around the problem and it’s starting to sink in that this is happening, but there’s really no organized response. I don’t know where that response would come from.
I think the White House noted some concerns in their recent report. Accenture had a report, McKinsey too. They all acknowledged the problems. Then they say, but of course, during the Industrial Revolution, yada, yada, yada. We could go into that story if you want to, but that is a story from 200 years ago. Is that really the best story we've got? My response to that is: yeah, but back then you had a generation to recover and retrain.
Now it's, I don't need your job anymore, go retrain in six weeks, and even by the time you come out six weeks later, the job you retrained for, I don't need you anymore. As I say, people like me are building new AIs on the order of every two years, coming up with new, really conceptually different, disruptive AIs. It takes me 20 years to build a person. That is not math in favor of social stability if we are not actively dealing with the problem.
Vivienne, it’s a great conversation. How can people follow you online, find out more about your work?
I have a couple of books coming out. One is called How to Robot-Proof Your Kids and the other is The Tax On Being Different. Follow me @NeuralTheory, and you’ll see when all those release dates are. If you want to learn more about our educational technology visit SocosLearning.com.
For more Fast Forward with Dan Costa, subscribe to the podcast. On iOS, download Apple’s Podcasts app, search for “Fast Forward” and subscribe. On Android, download the Stitcher Radio for Podcasts app via Google Play.
Originally published at //www.pcmag.com/article/351278/fast-forward-scientist-ai-expert-entrepreneur-vivienne-mi.