Vicarious Is The AI Company That Includes Zuckerberg, Bezos, Musk, And Thiel As Investors

Peter High
Metis Strategy
18 min read · Apr 11, 2016
  • This is interview #4 in Metis Strategy’s Series on Artificial Intelligence.
  • Links to other interviews in this series are located at the bottom of the article.
  • An unabridged audio version of this interview is available here:

Introduction

Vicarious has the mission to “build the next generation of artificial intelligence algorithms.” That said, its objectives are longer-term in nature. Vicarious has assembled a who’s who of technology legends as investors, including Jeff Bezos, Elon Musk, Peter Thiel, and Mark Zuckerberg. Co-founder Scott Phoenix is clear that the biggest value Vicarious can contribute will be in the long term, in the form of artificial general intelligence (AGI), or human-like intelligence. There will be plenty of value created in the interim in the form of what Phoenix refers to as the “exhaust” of the process.

Phoenix is a veteran entrepreneur, having served as CEO of Frogmetrics, which was a Y Combinator company in the class of 2008. He was also the Entrepreneur-in-Residence at Founders Fund, among other roles. In this interview, Phoenix describes the goals of his 30-person organization, how he weighs the risks versus the rewards of artificial general intelligence, how AI may replace more jobs than it creates, new economic and social constructs that could ease the societal shift, Vicarious’s decision to prioritize social good over investor returns, and why more companies should do the same.

Interview

Peter High: You are the co-founder of Vicarious, a company that is within the artificial intelligence (AI) realm. I thought we could begin with a definition of AI. It is a term that is thrown around in a variety of ways, and I would like to have you unbundle it a little bit.

Scott Phoenix: Artificial intelligence is a really funny thing for a couple of reasons. One is the “moving goal posts” phenomenon, which is that as soon as something that was formerly called artificial intelligence is solved, it is no longer included under the umbrella of what is AI. Since it is such a fuzzy term, you can apply it to almost any business or product or company that is developing anything. You could have a consumer gadget that has AI for making sure your windows are clean, or AI in your spam filter.

At Vicarious, we have a particular and specific definition of what we mean when we say AI, which is artificial general intelligence, or human-like intelligence. To put an even more specific frame around it, we say, “given the same sensory experiences that a human being has from birth to adulthood, we are trying to write a program that learns the same concepts and has the same abilities.” That is a specific thing, whereas artificial narrow intelligence (AI as it is commonly used today) can mean just a computer that does some stuff that is useful.

As we have machines that are able to do a lot of the processing that humans do today, is there any worry that aspects of the way that we think or work are going to change profoundly?

Let’s take a bigger historical lens on this. Look around at the world we live in, and put that in context of what people were doing just 300 or even 100 years ago. It is a totally different world that we live in now. So yes, the types of work that will be done by people in the future are going to be different from types of work done today, and that has always been true. Now the speed of change is certainly increasing, but just the idea that there are new types of jobs that are being created and other ones being destroyed is something that has always been with us.

You note that your organization is building a unified algorithmic architecture to achieve human-level intelligence in vision, language, and motor control. You are focused on visual perception problems like recognition, segmentation, and scene parsing. Where do things stand now, nearly five and a half years into this journey, and what is your strategy for the foreseeable future?

Today, AI is commonly understood as creating a mapping between inputs and outputs. You have lots of technologies — particularly neural networks — where you train them with a lot of labeled data that says, “these are the good examples; these are the bad examples” of whatever you want to classify or make decisions about. You feed that system a ton of data on big clustered computers or cloud servers, and then you create a little black box that, when given some inputs, gives you some prediction of outputs. That is the existing state of the art in machine learning. It is what powered [Google DeepMind’s] AlphaGo, and it is what helps self-driving cars identify pedestrians. It is the mainstay of AI research.
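The “labeled examples in, black-box predictor out” workflow Phoenix describes can be illustrated with a minimal sketch. This is a toy perceptron on hypothetical one-dimensional data, not Vicarious code and not a production-scale system; it only shows the shape of the discriminative approach: fit a decision boundary from labeled examples, then predict.

```python
# Toy sketch of supervised, discriminative learning: labeled examples go in,
# a small "black box" (weights w, b) comes out. Hypothetical data.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn w, b so that sign(w * x + b) matches the labels."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:          # label is +1 ("good") or -1 ("bad")
            if label * (w * x + b) <= 0:   # misclassified: nudge the boundary
                w += lr * label * x
                b += lr * label
    return w, b

def predict(w, b, x):
    return 1 if w * x + b > 0 else -1

# Labeled data: inputs below 0.5 are "bad" (-1), above are "good" (+1).
data = [(0.1, -1), (0.2, -1), (0.4, -1), (0.6, 1), (0.8, 1), (0.9, 1)]
w, b = train_perceptron(data)
print(predict(w, b, 0.3), predict(w, b, 0.7))  # → -1 1
```

The learned model only draws a boundary between the examples it has seen; it carries no notion of *why* an input belongs to a class, which is the limitation Phoenix turns to next.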

That is great technology, but it is also different from what we are building at Vicarious. We are building something referred to in the field as ‘generative models’. We are not just interested in drawing the boundaries between the good things and the bad things, or the cars and the pedestrians. We are interested in knowing what actually makes the car a car. Could something with two wheels also be counted as a car? How do things move in the world? What kinds of cause and effect relationships can you learn, much in the same way that a human is able to learn these more complex relationships?

This is important because existing systems are limited to drawing boundaries between the many examples they have seen in the past. This is the Achilles heel of things like the Google self-driving cars. When there is a new scenario — and there are new scenarios every day when you are driving — it is hard for them to generalize, because they are trying to fit the new scenario into the context of their prior day-to-day experiences. Without something like the generative model we are building at Vicarious, it is hard to solve those problems. That is the frontier of AI research that we are pushing.
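The contrast with a generative model can be sketched in the same toy setting. This is a deliberately simple illustration of the *idea* Phoenix describes, not Vicarious’s actual architecture: instead of only learning a boundary, the model learns a distribution per class, so it can both classify by asking which model best explains an observation and generate new plausible examples. The data and feature ("typical length") are hypothetical.

```python
# Toy generative model: one Gaussian per class, fit from examples.
# A simplification of the concept, not Vicarious's technology.
import math
import random

def fit_gaussian(samples):
    """Model a class by the mean and variance of its examples."""
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, max(var, 1e-6)

def log_likelihood(x, mu, var):
    """How well does this class's model explain observation x?"""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

cars = [2.0, 2.2, 1.9, 2.1]   # hypothetical "typical length" feature
bikes = [0.5, 0.6, 0.4, 0.5]
car_model, bike_model = fit_gaussian(cars), fit_gaussian(bikes)

# Classify by asking which model better explains the observation...
x = 1.8
label = ("car" if log_likelihood(x, *car_model) > log_likelihood(x, *bike_model)
         else "bike")

# ...and, unlike a pure classifier, generate a plausible new example.
sample = random.gauss(car_model[0], math.sqrt(car_model[1]))
```

Because the model captures what a “car” observation looks like, a novel input is judged against that explicit description rather than against a memorized boundary, which is the generalization property the interview is pointing at.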

You mentioned earlier that you are hoping to develop the power through your definition of artificial intelligence to recreate a human-like learning experience from birth to adulthood. How do you map out those stages? How do you choose areas to focus on?

Thankfully, there is tons of research on how the brain develops and the phases that infants go through and how they build their model of the world. So we can leverage fifty to sixty years of past literature in neuroscience, cognitive science and behavior. That has been helpful to us.

If you look at the mainstay techniques that are powering AlphaGo, they draw their inspiration from neuroscience that was done back in the 1970s. You can think about Vicarious as arbitrage between what we have learned about human development and the human brain in the last thirty years, and what exists in mainstream AI.

You have talked about how your company is not focused on launching products in the short term, and that anything you do ship will be a side effect of the research being done. Additionally, you are a Flexible Purpose Corporation, and you are choosing to pursue the maximization of social benefit as opposed to profit. Could you talk about the rationale behind that, and the longer-term focus that it provides?

As a society, we have created a system that optimizes shareholder wealth, when we want a system that optimizes happiness, social good, or a productive society. That is not exactly the same thing as shareholder wealth, and because there is that difference between what we optimize and what we want, we get things like Enron, or things like people dumping horribly toxic chemicals into the water supplies because those are the types of actions that maximize shareholder wealth on some timescale, but are not good for society.

I like the movement towards these alternative entity types, like the social purpose corporation or the B corporation, because it pulls into view, and into alignment and into integrity, the role of the corporation, the role of having a productive market economy, and the creation of the kind of society that all of us would want to live in. The choice to make Vicarious a flexible or social purpose corporation is an obvious one. Every corporation should be serving that purpose. By making products, you make the world a more enjoyable and robust place where people have more leisure time and live longer, healthier lives. So that is the philosophical angle.

We are building a new technology with significant implications for the future of society and for many of the social systems that exist today. Since we are building something that is powerful and can be used in ways both good and bad, it is important that we ethically ground our efforts in a legal framework that captures the guiding moral principles of our work.

You have built a team of thirty people. As you have now seen what motivates people to join your company, and you obviously have a means of attracting great talent, what sort of perspective has it given you?

I would say there is some amount of motivation towards the ethical value of creating a better world for all of humanity. There is some motivation for personal reward, but the thing that motivates most people who work on this problem is the weight and the intricacy of the problem itself: understanding what it is that makes us tick as humans, and what computation is happening inside our conscious mind, and then writing a program that demonstrates that you do understand it. Having that program come to life is a captivating problem, and it is the kind of thing that brilliant people cannot help but think about. Being able to work on that problem with twenty-nine of the other most brilliant people in the field is its own reward. I certainly feel thrilled to get out of bed every day, come to work, and be around everyone else who is driving forward to push the frontier of what is possible in this area.

Obviously, this is a hot segment of technology right now. What do you think about competition within the space versus co-opetition or even just full-on cooperation with various organizations that might be somewhat like yours? On the flip side, what makes Vicarious different relative to other players that are approaching things from a similar point of view?

To take the big view on competition versus cooperation, society as a whole is under-investing in artificial general intelligence. It is the most important technology that we will ever build because it is also the last. If you can build something that can think fully like a human, then it can do all the things that we humans do today, but significantly faster, better, and cheaper. I feel great any time I hear that more thought and more resources are being put into “how do we create a positive Singularity” and “how do we build the first intelligent computers.” That is something that is exciting to me, and it is what I am here to do. It is not “us versus them,” or a “one person wins it all” scenario. It is the kind of thing we need to prioritize more, and I am glad to see more resources being applied to it.

Rather than talk about the AI frontier, we should talk about the AI frontiers. A lot of work is being done on improving the kinds of existing neural network architectures that have been in vogue for the last four years or so. That work is fantastic, and it needs to continue. But there are many different ways of framing the problem “how are humans intelligent?” I would like to see pressure put on all of [the frontiers].

Vicarious’s particular approach, which is drawing inductive biases from neuroscience and cognitive science, framing them in a grounded mathematical framework, and then testing them on datasets to demonstrate their superiority, is something I do not see a lot of other companies doing. I see many people in the for-profit startup area doing vertical applications: we are going to take deep learning and apply it to medical imaging, and then we are going to produce a better diagnosis for patients and charge them money to do that. This is not fundamental research into what makes humans think; it is the application of the technology to a vertical.

On the nonprofit research side, I see organizations like OpenAI and the Allen Institute for Artificial Intelligence, along with a number of the big research labs, trying out different techniques and trying to push out the frontiers, and those are exciting. Our particular take on the right strategy for building human-like AI is unique from a technology standpoint, and probably also from an organizational standpoint.

One of the ways in which you do compete is for talent. You have twenty-nine of the brightest minds in this space. I wonder how you have thought about staffing the organization. What sorts of skills are you particularly looking for? How have you thought about building out your organization as it has grown? Likewise, you talked about the research that you are leveraging from universities or other organizations. How do you think about that ecosystem that you are building?

For us, competition has not been a significant problem so far. We have had 1,100 people apply for those thirty spots, so most of the work has been on sorting out who are the best people for us to work with.

The other thing that is different about Vicarious is that because our strategy for solving these problems is unique among the landscape of other companies, people who want to do the kind of research that we are doing want to do it here. People who want to do the kind of research that some other group is doing — Allen Institute for Artificial Intelligence, for example — they go and do it there. It is not a cutthroat recruiting race as you might expect.

The people who care deeply about solving this problem are on the lookout for others who are also working on the frontiers of what is possible. Which groups have the resources to commit to pursuing that over the long haul? We are the only startup that has the runway and the supporters and the investors and so on to pursue the big vision.

In October of 2013, your organization passed the Turing Test by breaking CAPTCHA, the system intended to distinguish human from machine input on websites, often used for login or checkout. Since then, a number of prominent organizations have had this sort of “technology defeating humans” moment. Examples include IBM Watson beating the Jeopardy champions, or more recently, Google DeepMind’s AlphaGo defeating the world’s best Go player. What role do you feel these events play in focusing the general public and investors on the opportunities here? Are we at the point now where it is enough of a given that more can be happening behind the scenes, without the need for this bigger representation?

I do not think it is necessarily bad to have World Fair-style demonstrations of a new technology that we are proud of and excited about. It is always exciting to see a computer do something that it was not able to do before, whether it is playing Go well, playing Jeopardy, or breaking CAPTCHAs. There is a problem in the media of misunderstanding what the technology actually is, what it is doing, what it is capable of, and what it means. Every time one of these events occurs — and Vicarious has been as much a victim of this as DeepMind — you publish an accomplishment, and the press turns it into an event of grand proportions that far exceeds its technological relevance, or what you intended to present it as.

There can be a downside to these World Fair events, which is when people do not understand what is going on behind the scenes, or do not think clearly about what the event means. They can turn it into “oh we are imminently about to have XYZ happen”, or “these problems are bigger than they are.” That is a risk, but if we can have careful journalism and not get ahead of ourselves here, then it is always fun to see a computer do something new.

You have attracted an extraordinary group of investors, the likes of Elon Musk, Mark Zuckerberg, Peter Thiel, and Jeff Bezos, among many others. How have you done so? What impact are they having on the direction and strategy of the organization?

All of our investors at Vicarious understand the fundamental role of artificial intelligence in transforming society and transforming their business. Whether you are Jeff Bezos and you want to have robotic drone delivery of products, or a robotic arm that puts stuff in boxes; or you are Elon Musk and you want to have Teslas that can drive themselves; or you are Mark Zuckerberg and you are working on all sorts of AI initiatives around personal assistants — all of these rely on fundamental advances in AI technology. Investing in Vicarious is a way for each of those leaders to help push the frontier further and faster, and there are going to be tangible and obvious downstream benefits for each of their organizations.

They do have their own in-house teams, but those teams are focused on things that are more directly applicable to the product lines that exist today or will exist in the near future. Similar to how if there is a new type of optical processor, or a new quantum computer coming out, they do not necessarily need to develop it all by themselves in house when they can buy it from Intel or some newcomer who developed it. It is mostly a question of to what extent is pushing the frontier of brain-like AI important to that company or that CEO as a priority, versus something that they can support through an investment and can develop outside of that organization.

There has been a lot written about the reward versus the risk of artificial intelligence, including by one of your investors — Elon Musk. I wonder what your perspectives are on that in terms of the balance between the reward versus the risk of AI?

The balance is firmly on the reward side. That said, every new technology that humanity has created since the invention of fire can be used for keeping warm or for burning stuff. An essential part of developing a new technology is making sure that it is safe and effective and used in a way that you want it to be used for humanity’s benefit.

AI is no different in that way. It is different in scale, on both the positive and the negative side. That is something that we take seriously and all researchers who are working on it take seriously.

Just as we have researchers who are studying genetically engineered superviruses, nanotechnology, and asteroid impacts, we should have researchers who are studying AI safety and how to make sure we have human-aligned artificial intelligence. But it should be a relatively academic field, just as asteroid collision research is an academic field. Everyone who is in this field and is seriously working on it is aware of the corresponding safety issues that are part of building this technology. It is something that we take seriously, and something that we keep in view as we develop the AI that we are building.

It seems that there are a great number of jobs that are likely to be replaced potentially in the relatively near-term. How should we as a society think about those people whose jobs are likely to be replaced by artificial intelligence and machine learning?

I am a big fan of work programs and universal basic income. Government and society have a role to play in ensuring a smooth transition through technological disruption. We can have a society where there are tons of jobs for everyone. There is always going to be stuff to do. It is a matter of making sure that when someone was doing one thing, and was used to doing that thing, there are useful transitions to being able to do something new.

One of your concentrations as an undergrad at the University of Pennsylvania was entrepreneurship, and obviously you have taken the theoretical that you have learned there and transitioned that quite successfully to the practical of having started multiple organizations, including Vicarious. I am curious how valuable you find the classroom aspects of entrepreneurship?

You have to learn by doing. I was relatively naïve when I was seventeen years old. I decided to go to Penn because it was the only Ivy [League school] with a business school and a computer science program for undergrads. When I got there, I discovered that the business school was primarily for training investment bankers and management consultants. Maybe this has changed since then, but when I was there anyway, it was not a vibrant ecosystem. There were two courses on engineering entrepreneurship.

Going through the Y Combinator program back in 2008 and learning from the partners at YC and my classmates was a great education in how to run a startup. I learned the right practices for building a company, and it has only gotten better. College has a place and is wonderful, but if you want to learn about entrepreneurship, the only way to do that is by actually getting out there and doing it, and doing it in the presence of great mentors, investors, and advisors.

Where do you see Vicarious in the foreseeable future? As you think three, five years down the road, what are some of your hopes for where the organization will be and some of the areas in which you will have innovated?

There is this great quote I love about business plans: the best business plans are the ones that never change. In our case, we are five years in, and I am fortunate to say that we have always had the same plan, which is to build the first human-level AI, and to do it by gathering insights from neuroscience, from mathematics, and from the inductive biases required to create something that learns and thinks like a human, and applying them in a rigorous, scientific way to create fundamental advancements. In the exhaust of that process are things that are commercially useful. In the last twelve months, we have taken on investments from Samsung, ABB Robotics, and Wipro, and we are about to announce some additional corporate investors who all see the role of AI in their futures and see how Vicarious’s technology can help them succeed. The world we are headed towards is a world where Vicarious can be an “Intel Inside” for artificial intelligence inside a lot of products, in a lot of different areas, that we come to rely on in our day-to-day lives.

Do you think of the B2B versus B2C implications, the enterprise versus individual implications, as different or part of the same thing?

In our case, I see them as part of the same thing. We are fundamentally a scientific research and algorithms company. We are about creating enabling technologies for building this AI revolution — shovels in the gold rush, if you will. From our perspective, there is not that much of a difference between the technology we provide for a consumer product and for a B2B product. Our core competency is not designing user interfaces or figuring out how to optimize the virality coefficient of an iPhone app. Our core competency is building the best possible AI software and using it to help humanity thrive. That is what you will see from us, and it does not require a shift in lenses to go between the consumer and the industrial settings.

Series on Artificial Intelligence

This is the ninth interview in our series on Artificial Intelligence. Past interviews are available below:

Left: Antoine Blondeau || Center: Mike Rhodin || Right: Sebastian Thrun

Peter High is President of Metis Strategy, a business and IT advisory firm. His latest book, Implementing World Class IT Strategy, has just been released by Wiley Press/Jossey-Bass. He is also the author of World Class IT: Why Businesses Succeed When IT Triumphs. Peter moderates the Forum on World Class IT podcast series. Follow him on Twitter.
