The Public Policy Implications of Artificial Intelligence

Kim-Mai Cutler
Initialized Capital
Dec 9, 2016

Jack Clark and I are both lapsed technology journalists, and he writes one of my favorite new newsletters of this year, Import AI, which summarizes major research, hires and products in the space.

He now works for OpenAI, alongside a team of researchers, where he handles policy, communications and partnerships. OpenAI is an AI research lab set up by former Stripe CTO Greg Brockman, Ilya Sutskever, Elon Musk, and Sam Altman. Its mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible.

Q: Before you joined OpenAI, you were a journalist — like me. In fact, you called yourself the world’s first and “only neural network reporter” while you were at Bloomberg. What made you decide to cross over?

I think there are three things that are going to affect the world in incredibly significant ways over the next decade: 1) climate change, 2) CRISPR, and 3) artificial intelligence. I wanted to work in one of those areas and be helpful.

Because of my background, AI made the most sense. Along with conducting fundamental research, OpenAI can also help increase the level of knowledge that’s available on how to use, regulate and evaluate this technology.

Q: So let’s talk about what’s happening in the field. What should we pay attention to at NIPS (the Neural Information Processing Systems conference) this week?

NIPS is probably the single largest conference in the field, and it's happening in Barcelona right now. There's a joke among researchers that NIPS is where people get together to discuss papers that came out four months ago. That's because the paper deadline was back then, and the pace of modern AI research is so fast that much of the industry has since moved on to new techniques and new papers.

The real value is all the researchers talking to each other about their next set of papers. OpenAI is presenting an idea on how you can learn language and reason, for example.

Q: What are the major themes you’re seeing this year?

Learning to learn. Current production AI systems are all about training a specific piece of software to learn a specific task. The new frontier, which includes the RL² paper from OpenAI as well as work from Google DeepMind, is whether we can teach a piece of software to learn how to learn to solve problems. So instead of giving it a specific world and a specific problem to solve in that world, can we give it a set of worlds and get the agent to learn to improvise solutions in all of them? This kind of meta-learning is going to be a huge theme in the coming years.
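To make the distinction concrete, here is a toy sketch in Python. It is not the RL² algorithm, just an illustration of the setup it targets: a single strategy that must improvise within each world drawn from a distribution, rather than being trained for one fixed world. The two-armed bandit "worlds" are a stand-in example, not anything from the paper.

```python
import random

def make_bandit():
    """Sample a new 'world': a two-armed bandit with random payout odds."""
    return [random.random(), random.random()]

def adaptive_agent(bandit, steps=100):
    """One fixed strategy (explore, then exploit) that has to adapt to
    whatever world it is dropped into, instead of memorizing one task."""
    counts, wins = [0, 0], [0, 0]
    total = 0
    for t in range(steps):
        if t < 20:
            arm = t % 2  # explore: alternate between the two arms
        else:
            rates = [wins[i] / max(counts[i], 1) for i in range(2)]
            arm = rates.index(max(rates))  # exploit the better arm so far
        reward = 1 if random.random() < bandit[arm] else 0
        counts[arm] += 1
        wins[arm] += reward
        total += reward
    return total

# Evaluate the same strategy across many sampled worlds. The meta-learning
# question is whether software can *learn* such an adaptive strategy itself.
scores = [adaptive_agent(make_bandit()) for _ in range(1000)]
print("average reward per world:", sum(scores) / len(scores))
```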

Another theme is about whether we can learn with smaller amounts of data, because we know that today’s AI systems are wildly inefficient in terms of their usage of data. For example, a young child can see a chicken once, and if you then ask them to draw a chicken, they’ll typically be able to. The child has a representation of a chicken from one sighting, and is able to abstract that into a drawing.

In AI, we're trying to build systems that are capable of this, and we call these 'one-shot learning' systems. The idea is to develop systems that can learn useful stuff from a handful of examples, or even a single one.
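Here's a minimal sketch of that setup, assuming the simplest possible approach: nearest-neighbor classification in some feature space, with one labeled "support" example per class. Real one-shot systems learn the feature space itself; the vectors below are made up purely for illustration.

```python
import numpy as np

def one_shot_classify(support, query):
    """Classify a query given one labeled example per class, by picking
    the class whose single example is nearest in feature space."""
    distances = {label: np.linalg.norm(query - vec)
                 for label, vec in support.items()}
    return min(distances, key=distances.get)

# One example each of "chicken" and "duck" (made-up 3-d feature vectors).
support = {"chicken": np.array([1.0, 0.2, 0.1]),
           "duck":    np.array([0.1, 1.0, 0.3])}
print(one_shot_classify(support, np.array([0.9, 0.3, 0.2])))  # -> chicken
```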

The third major theme, and this relates to OpenAI and Google DeepMind, is about environments. Can we create worlds in which we train AI, and can we easily introduce our AIs into these worlds? OpenAI Universe, which we launched this week, has 1,000 different environments. Facebook recently revealed TorchCraft, which is an environment related to StarCraft. Then there's DeepMind Lab, which provides a 3D environment with a large number of permutations.

The idea behind all of these environments is that we are no longer asking AI to deal with static categorization problems. We are teaching AI to act in the world, so the data needs to be dynamic rather than static. Can you create worlds that share enough characteristics for the machine to transfer what it learns from one to the other? That's a big question. A second question is whether you can create simulations of the world that are good enough that you can transfer your insights into reality. That's something we and others are working on.
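All of these platforms share the same basic observe-act-reward loop. Here is a minimal sketch of that interface using plain OpenAI Gym (the interface layer Universe builds on), with a random agent standing in for a real one:

```python
import gym  # OpenAI Gym, the interface layer Universe builds on

# The agent-environment loop, 2016-era Gym API: observe the world, act,
# receive a reward, repeat. The "agent" here just acts randomly.
env = gym.make("CartPole-v0")
observation = env.reset()
total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # a real agent would choose here
    observation, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        observation = env.reset()  # episode over: start a fresh world
print("reward collected:", total_reward)
```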

This is important because if you want smart robots that can help you in the world, you have to find a way to train them in simulators: real life runs very slowly, but you can run a simulation thousands of times faster.

Today, modern AI systems help us in the digital world. They help us categorize things like images in digital systems, and projects like World of Bits within Universe will let us build systems to assist us as we use software. But the big, long-term goal is for AI to help us in the physical world. Just recently, Jason Furman, who heads the Council of Economic Advisers, said AI can't get here soon enough because we have an aging population. You need AI not only to increase your productivity, but also to provide assistance with elder care and to do stuff in the physical world.

Q: But there’s the counter-argument — which I almost don’t even need to say because you know what it is — and that’s about labor instability and safety.

Technology increases productivity, but there’s no rule that says it must increase equality. In fact, there’s lots of evidence that technology exacerbates inequality.

There are two points here. One is that OpenAI seeks to distribute this technology as widely and evenly as possible, so more people can access it on a personal level or for their startups. If the technology is widely shared, that might help counter some of the economic stratification effects.

The second is that there needs to be a political or a regulatory response. We need to have a national and an international conversation about redistribution, about safety nets, about measuring this technology and correctly anticipating its arrival. People are aware of this.

The Bank of England governor Mark Carney gave a speech this week called "The Spectre of Monetarism." He talked about how globalization and technology have not made things better for people in certain countries. Certainly, aggregate global wealth has increased. If you're in India or China, your world is better. But if you're middle class in certain countries, you may not have done well in spite of this. Carney suggests this means there may be a need for redistribution. Coming from the governor of a central bank in a capitalist country, that would've been unthinkable a decade ago. These conversations are starting to happen.

There is a lot of persuasive evidence that mobility is decreasing. People are changing jobs less frequently. Technology is creating systems that arguably reduce dynamism. There is a very clear distinction between the people who drive for Uber and the people who build Uber's software. There's a hard divide between your pool of contractors and casual laborers and those designing the platforms you are plugging these people into. These business models are enabled by technology, so whether or not [Branko Milanovic's] Elephant graph is correct, there is this underlying sense of anxiety from people about mobility: how they can shift in a changing economy.

This suggests we do need to invest in lifelong learning. In this country and in many others, we do not provide good funding systems to help people learn if they’re, say, 35 years old. That needs to change. How can we help middle-aged or mid-career people switch jobs?

The question for humans is similar to the question for AI. Can we learn how to learn?

We need to re-think education so that its goal is teaching people to learn how to learn, rather than teaching people to learn specific skills.

Q: You have this rule of thumb about whether a job will be automated or not. And that’s whether you can collect 10,000 to 100,000 times as much data on a job as a human worker would generate during the entire course of their professional life.

I am pulling numbers out of the air, but here's the principle. Say I'm a radiologist and I make decisions on, let's say, 8,000 cases over the course of a professional life. If you can generate hundreds of thousands or millions of examples of decisions on these kinds of cases, you can probably build an AI system with equivalent or better performance than that radiologist. Maybe there will be a small percentage of cases that require very, very fine discriminative abilities.
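A back-of-the-envelope version of the rule of thumb from the question, in Python. All the figures are illustrative, as Jack says, pulled out of the air:

```python
# Rule-of-thumb check: can we collect vastly more decision data on a job
# than one worker generates over an entire career? Figures are illustrative.
career_decisions = 8_000           # one radiologist's lifetime of cases
multipliers = (10_000, 100_000)    # the range quoted in the question

for m in multipliers:
    print(f"{m:,}x career output -> {career_decisions * m:,} examples")
# If instrumented systems (e-discovery logs, claims records, support
# tickets) already capture decisions at anything like this scale, the
# job is a candidate for automation.
```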

But any job that is heavily instrumented, like e-discovery, actuarial work in insurance, or customer service, is also built to generate the data we would need to train an AI system that could do it instead. This is why people are talking about the automation of white-collar jobs: we now have a methodology to automate people in these roles. What this means is that if I have a company, I may not fire people, since companies tend to minimize firings, but I may dramatically slow the rate at which I hire new people and instead invest in automation. Ultimately, this leads to fewer job opportunities in these areas over the long term. The open question is whether AI also creates new occupations and jobs by making some things much cheaper than before.

Q: Can you speak about the power dynamic that advances in artificial intelligence encourage between incumbent companies and startups? You know, that it reinforces or entrenches the advantages that the very biggest companies accumulated when they won the land grab around consumer Internet and mobile in the 1990s and 2000s?

This is why OpenAI exists. We want to push AI forward and ensure the benefits are distributed widely and evenly.

Here’s what AI requires. AI requires data. Data is becoming a commodity, so we can scratch that from the competitive advantage list. AI requires compute power. That is becoming a commodity. Amazon, Microsoft and Google are all competing on cloud services. You need funding to pay for these computers, but that’s usually doable.

The third ingredient is a bit subtle. You need sufficiently sophisticated infrastructure to take data, including live data, turn it into live insights, and apply compute power to it. You need to build systems that run very large, demanding jobs at scale, and to do this in an easy-to-use way so your researchers can conduct as many experiments as they desire. These parts are not commoditized, and when you move into AI systems that require larger and larger models, the expertise required to build the infrastructure grows; it doesn't diminish. We recently published tools to let you run thousands of GPU instances, and that was extremely non-trivial work.

There is a tendency in AI toward centralization and winners getting richer or extracting more benefits from insights. Keeping the research open can help with this, as can making it easier for people to access computational systems equivalent in complexity and capability to those fielded by the major players.

But there are also techniques that could be developed to short-circuit this dynamic, like learning incredibly efficiently, or transferring from simulators that aren't that good.

We also don't know what we don't know. The field is moving very rapidly, and part of the motivation for these companies putting in as many resources as possible is that they see the basic value of research, and how quickly it can be turned into economic value. There are breakthroughs lurking out there, in the possibility space of ideas, that we might stumble on and that could change everything. We just don't know.

Q: What do you think about the loss or hemorrhaging of researchers from universities into the major companies?

It’s because there isn’t enough funding for basic research. To me, it’s a very American phenomenon. If we don’t fund more research, we’ll keep losing people to non-academic places.

It's like what Andrew Moore, the dean of Carnegie Mellon's computer science school, said in Senate testimony last week: you'll eat your seed corn. You aren't going to get as many breakthroughs because you're pulling people out of the breakthrough factories.

Fortunately, good researchers see themselves as scientists, and scientists don't like to lose interaction with other scientists. That has led to this phenomenon of industrial labs publishing vast amounts of research and maintaining good links with the academic community. But this kind of openness is not guaranteed; it's more like an extremely fortunate accident.

Q: How do you see the U.S.’s international competitiveness in this field?

American companies hired a bunch of scientists who worked in some of the Canadian institutions that birthed a lot of this once-obscure technology while it wasn't in vogue over the past couple of decades. The American education system is also chock-full of international talent, and that has benefited US companies as well.

American-owned companies are caught in this competitive dynamic, where openness allows them to be successful in propagating their research, and the more they propagate their research the more they are able to outsource part of their R&D to the wider research community. This dynamic seems to be giving us a boost as a nation in this field. Many of the top techniques are being invented by institutions or companies quite closely aligned to the US.

Internationally, we are seeing people realize how strategic AI can be. So AI funding is going to become a way to exert power internationally and to affect the competitiveness of a nation. Japan is going to build a supercomputer next year dedicated entirely to AI. It's going to be 130 petaflops, versus 90 petaflops for China's current world-leading system. This is going to be a major resource for Japanese businesses. Japan has realized that though it can build phenomenal hardware, it does not have as much of a software competency, and it is trying to correct for that. Similarly, South Korea invested nearly $1 billion in AI after AlphaGo. China has made smarter robotics a key point within its 13th Five-Year Plan. The 12th plan pushed for greater investment in Chinese-designed semiconductors, and the payoff was that the world's fastest supercomputer, the 90-petaflop one, runs partially on Chinese-designed chips. So we know they're serious.

The U.S. government invested $1.1B in unclassified AI research last year. That's it. By comparison, U.S. companies invested $8B in AI research. This suggests we can improve our international competitiveness in this area by injecting more money into the system at the level of basic research. There are also certain areas of AI research, like safety, security, or ethics, where it's likely that the government, rather than industry, should directly fund and propagate techniques.

America has a chance, through its openness, to define and shape the values of this technology.

Q: What are your hopes and fears about the incoming Trump administration and how it will approach artificial intelligence from a policy perspective?

I can express my own hope. We know that AI is strategic. We know that it generates economic value. We can see this in the basic research done by private enterprises. My hope is that we invest more money into the underlying research here, strengthen the American university system, and equip it to do more research, because that seems likely to create tremendous value. I also hope we pay attention to diversity and do more to keep women and other underrepresented people from falling out of the technology skills pipeline.

We also know that AI will lead to some major social changes and has the potential to increase inequality. So if you're not inventing this stuff, you're not going to be in as good a position to understand how it will influence your nation. Instead, it will be invented somewhere else, and foreign companies will start selling services in your country that cause tremendous economic effects. My intuition is that you'd rather invent the whirlwind than have it appear unexpectedly from abroad.

My fear is that if we don't increase our funding, or if we try to be closed about our research, we will probably reduce our strategic advantages, and the center of AI development will move elsewhere, which would be a shame.

Q: What are your thoughts on the various frameworks that companies like Amazon, Facebook and Google have released or sponsored, like MXNet and TensorFlow?

There are frameworks like TensorFlow or Caffe, which are the language in which you create AI, and then there are environments in which AI behaves, like Universe, TorchCraft and DeepMind Lab.
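To make the "language" half concrete, here is a minimal sketch of the 2016-era TensorFlow 1.x workflow: you describe a computation graph, then execute it in a session. It's only an illustration of the style, not a real model:

```python
import tensorflow as tf  # 2016-era TensorFlow 1.x API

# Frameworks like this are a language for computation graphs: you define
# the graph first, then run it in a session.
x = tf.placeholder(tf.float32)  # an input fed in at run time
w = tf.Variable(3.0)            # a trainable parameter
y = w * x                       # the graph: a single multiplication

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: 2.0}))  # -> 6.0
```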

Some are easier to use and some require more specialist knowledge. There are maybe 10 to 20 of them, which is not sustainable. Think of them like operating systems.

Q: So who wins?

There’s a good chance that TensorFlow will succeed. Another contender is Torch, because Facebook and NYU are spending huge amounts of resources on it. An additional one would be Microsoft’s CNTK.

What you’ll notice about all of these is that they have corporate sponsors. If you own the successful framework, you can probably make money from it, not by charging for it, but by earning revenue from services around it.

It’s less clear what role the academic frameworks will play. Over the next year, you’ll see a competition among these entities to see who can get the most developers.

Q: How would you characterize the major companies’ strategies in AI investment over the past year?

IBM and Amazon are thinking very carefully about commercial models around AI. How do we sell it? What do we sell it as? How does it interface with traditional things in IT? They have participated less in some of the research.

Google has opted to be very open and is also thinking about economic models, in addition to conducting quite a bit of research. Google has started to sell AI-infused cloud services, whereas before, the company mostly used AI to improve its internal infrastructure behind search, speech, and Android.

Microsoft sits somewhere between Google's model and the Amazon and IBM model. Satya Nadella appears to be re-orienting the company around cloud services, of which AI will be a huge component.

Q: And Facebook?

Facebook conducts good open research with a significant amount of ambition. It does not appear to be selling anything at all, or have plans to deploy cloud services. It’s all to the benefit of the Facebook infrastructure stack. Its group is a bit smaller than Google’s, as far as I’m aware.

Apple had not interfaced with the research community at all until it hired Carnegie Mellon's Ruslan Salakhutdinov. He gave a talk at NIPS, complete with slides in Apple's fonts, saying the company will publish research. But we have not seen a paper yet.

I’ll believe it when I see it!
