How to Make Moonshots
Astro Teller says that at Google[x], failure is indeed an option. So is changing the world.
Google has long declared itself an unconventional company. But its division that takes on long-term, risky projects, Google[x], makes the rest of the company look pretty staid. Now led by Astro Teller (born Eric before he adopted a first name that really suited him), Google[x] deliberately takes on challenges that seem to fit more comfortably in the pages of pulp science fiction than on the balance sheet of a public company. Its first project was the self-driving car, and subsequent ones include Google Glass, the smart contact lens, the Google Brain neural network, Project Loon, which delivers Internet service via balloon, and a project that hopes to release nanoparticles in the bloodstream to detect early disease.
But ultimately, the greatest contribution of Google[x] might not lie in its projects but its mindset. Astro Teller in particular understands that to make significant advances in the era of Moore’s Law, a research division must be willing to entertain what sounds crazy, venturing just beyond the zone of reasonable yet keeping one hand on the tether of the possible. It must be willing to fail, yet be realistic enough to understand the limitations on near-term technology. And, since Google is a profit-making company, Teller wants to make sure that his projects have at least a conceivable way of making some money if the planets align and the science works out.
Editor in Chief, Backchannel
Closing keynote of South by Southwest Interactive
given by Astro Teller,
Captain of Moonshots, Google[x]
on March 17, 2015
I started my second company in 1999. BodyMedia was set up to take advantage of the future of wearables — sensors and computing worn on our bodies in any and all ways that could make our lives better.
The first thing we made was a 12-lead electrocardiogram vest — a long-term wearable heart monitor for older people with known heart conditions or risks. At the time no one had ever made something you could just put on like clothing and have it just work without skin shaving, adhesive or gels — all at the time considered required for getting a usable ECG signal. We spent the better part of six months on this and we got it working! We’d built out the business plan. And then, almost as an afterthought, we asked a few people between the ages of 65 and 80 (our target age group) to come into our office to try it on and tell us what they thought of it.
Those interviews didn’t go well. Bottom line: people weren’t going to wear it. “But what if it would save your life?” I don’t know. Maybe. “What if it would make it so you could FLY?!?” I guess. Sometimes, maybe. Shrug. One week later the vest was in our “things that didn’t work” cabinet and the company was going through a restart.
My failure wasn’t having these people come in to tell us what they thought. The real failure was that we’d done that last when we should have done it first. We could have learned the exact same thing in a few days instead of in a few months. We could have discovered the fatal flaw with our work much cheaper and much faster. Lesson learned. The faster you can get your ideas in contact with the real world, the faster you can discover what is broken with your idea. Seeking out contact with the real world means hearing and seeing things you don’t want to hear and see — because they’re discouraging and disheartening when you’re pouring your all into something. But better to learn that after a few days than after a few months. The more work you do before you get the learning, the more painful the learning will be, and the more you’ll unconsciously avoid those learning moments.
And getting those painful negative examples isn’t enough. You have to then turn those negative signals from the world into something you can use: some new fact about the world, or a new way of approaching your problem. In our case at BodyMedia, what we learned was “People are interested in the value wearables can bring but if they can’t put the item on or take it off while in public, it isn’t likely to fit into their lives.” And while the learning was painful in the moment — it paid off. Years later BodyMedia was acquired by Jawbone.
This lesson of doing my failing at the beginning was something I took with me to Google[x], which is just turning 5 years old.
At Google[x] we’ve been pushing ourselves to get out into the real world as much as possible as fast as possible and I’m happy to say that we’ve chalked up a lot of learning and a lot of progress along the way. The bumps and scrapes required to learn and improve are something you and I and everyone here share as life experiences. I’ll share today some of the stories of what we’ve learned, how we learned it, and how that is shaping the evolution of Google[x].
Over the past five years we’ve been hard at work inside Google[x], the lab we affectionately call our “moonshot factory.” People sometimes call it a research lab — but we think of a moonshot factory as something quite distinct and different, and the name reflects that. I was sitting with Larry Page just after Google[x] was birthed and trying to work out how we should talk about X’s mission. I couldn’t get a clear summary from him so I just started throwing out examples for him to shoot down. “Is it a research center?” No. Good, agreed. “Are we trying to be just another business unit for Google?” Nope. “How about an incubator?” Sort of. Not really. Kennedy’s vision statement to the nation in 1961 that we put a man on the moon by the end of the decade was the original moonshot so I was delighted when I got to “Are we taking moonshots?” and Larry said “Yes, that’s what we’re doing.”
By saying we’re taking moonshots, we mean we’re going to go after something that’s 10 times better rather than incremental, 10% kind of progress. And it also captures the risk and long-term nature of what we’re trying to do (e.g. self-driving cars and smart contact lenses). By saying it’s a factory, we’re reminding ourselves that we have to have real impact — we should take on research-level risks but ultimately we’re developing products and services for the real world. And it also means we have to continue to create real value so Google will continue to support us.
From one perspective our approach to taking moonshots can be summed up in this picture. This is our blueprint for whether we should try to do something. But the blueprint we have on how to try to do something has always been, on every aspect of each project, embracing failure — to run at all the hardest parts of the problem first — as fast as possible. What we’ve learned is that the only way to make progress is to make a ton of mistakes — to go out and find and even create negative experiences that help us learn and get better.
We’ve all read the media coverage of various entrepreneurs’ and companies’ ups and downs. But what the nice neat media stories never quite capture or admit to is the feeling in the pit of your stomach when you’re not sure what to do to get from where you are to where you want to be. We all have those feelings. I have those feelings. Our project leads at Google[x] have those feelings. You are not alone. The truth is: no one knows the best perfect right way to solve any problem, especially big meaningful problems.
Many of the failures Google[x] has had over the past five years are ones that we’ve had to live out in broad daylight with everyone telling us we’re crazy. Even for me it’s not always fun, and sometimes we’ve even done a bad job at failing. But it’s always been the right thing to do. And I think a lot of what we’ve learned could be applicable to the challenges you’re taking on.
Let’s ease into our failures with a series of them that were planned, where the failures were actually a feature and not a bug.
One of the Google[x] projects that has made tremendous progress in the past several years is Project Loon. The goal for the project is to bring Internet connectivity to the other 4B people on the planet who currently have little or no connection to the digital world. We hope to be able to do this, in the near future, by putting a network of balloons up into the stratosphere, between 60,000 and 80,000 feet up in the air, well above the weather and well above where airplanes fly. Each of these balloons you can think of like a cell tower in the sky that can talk directly to phones on the ground and to other balloons around it. This is much too high to tie the balloons to the ground and the wind is too strong to stay over a particular part of the earth indefinitely. But we’ve found ways to make the balloons rise and fall enough (about 10,000 feet) so that the balloons can pick different wind speeds and directions and use that to sail the winds and have some influence over where they will be in an hour or in a day.
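The steering idea above — rise or fall to catch the wind layer that carries you where you want to go — can be sketched as a toy altitude planner. Everything here (the function, the wind data, the scoring rule) is invented for illustration; the real Loon controllers also weigh forecasts, battery budgets, and much more:

```python
import math

def best_altitude(position, target, wind_layers):
    """Pick the flight level whose wind pushes the balloon most
    directly toward the target.

    position, target: (x, y) positions in km
    wind_layers: {altitude_ft: (vx, vy)} wind vectors in km/h
    Returns the altitude whose wind vector has the largest
    component along the direction to the target, or None if
    we are already there.
    """
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return None
    ux, uy = dx / dist, dy / dist  # unit vector toward target
    # Greedy choice: maximize progress toward the target right now.
    return max(wind_layers,
               key=lambda alt: wind_layers[alt][0] * ux + wind_layers[alt][1] * uy)

# Hypothetical winds at three flight levels: mostly-east, mostly-north, east-south.
winds = {60000: (40, 5), 70000: (-10, 30), 80000: (25, -20)}
print(best_altitude((0, 0), (100, 10), winds))  # → 60000 (strongest eastward push)
```

The greedy single-step choice is the simplest possible policy; planning a whole trajectory through shifting wind fields is the genuinely hard part the team solved over years of experiments.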
When we started though, we couldn’t yet control where they went and we couldn’t yet make them come down when we wanted to (which we can also do now). We were just working out a lot of the basic avionics issues of making a cell tower in the sky that was 1% the weight of what you’d put on a cell tower, using 1% of the power, at about 1% of the cost, and making sure it worked at 2% of normal air pressure and at temperatures down to 90 degrees below zero. Since we couldn’t steer them yet and since we couldn’t tell them to come down when we wanted, and since we really didn’t want them wandering off into other countries whose permission we hadn’t yet asked, we built the balloons to fail. We do it differently now, but we used latex for those early balloons. Latex stretches, so if you put some helium in a latex balloon and let it go, it expands as it rises because the air higher up is less dense. That expansion keeps the balloon less dense than the surrounding air, so it rises some more. And this continues until about 100,000 feet, when the latex gets so thin (and so brittle from the cold) that it explodes. You can see such an explosion right here. So failure was, for the early Loon testing, a critical safety valve for the project. No balloon would stay up in the air more than a few hours.
Sometimes though, failure isn’t a feature. In the worst cases, it isn’t even something you can learn much from. Sometimes it is just a cost you pay for the learning you’re doing. Even then, getting out into the real world is the right thing to do. Our simulators and spreadsheets said, yes, sure you can hypothetically provide continuous coverage with a fleet of balloons sailing based on stratospheric wind patterns. But nothing beats actually getting balloons into the sky for months on end that need to ride all these winds around the globe so we can test these hypotheses. We’ve been doing just that for the past 2 years and we have it working great now. We can routinely let go of a balloon on one side of the world and guide it to within a few hundred meters of where we want it to go on the other side of the world, 10,000km away. But it wasn’t always that way. It took many hundreds of tries and experiments and failures to get them working that well — and every failure meant a balloon headed somewhere we didn’t want it. And that meant taking it down and going to collect it. Sending teams north into the Arctic Circle to stuff a balloon into the back of a helicopter, and out into the South Pacific by boat to collect balloons. Not how we want to be spending our time, obviously, but it was worth it to get the practice we’ve gotten steering the balloons by teaching them how to sail.
One of our projects is focused on building a fully self-driving car. If the technology could be made so that a car could drive all the places a person can drive with greater safety than when people drive in those same places, there are over a million lives a year that could be saved worldwide. Plus there’s over a trillion dollars of wasted time per year we could collectively get back if we didn’t have to pay attention while the car took us from one place to another.
When we started, we couldn’t make a list of the 10,000 things we’d have to do to make a car drive itself. We knew the top 100 things, of course. But pretty good, pretty safe, most of the time isn’t good enough. We had to go out and just find a way to learn what should be on that list of 10,000 things. We had to see what all of the unusual real world situations our cars would face were. There is a real sense in which the making of that list, the gathering of that data, is fully half of what is hard about solving the self driving car problem.
A few months ago, for example, our self-driving car encountered an unusual sight in the middle of a suburban side street. It was a woman in an electric wheelchair wielding a broom and working to shoo a duck out of the middle of the road. You can see in this picture what our car could see. I’m happy to say, by the way, that while this was a surprising moment for the safety drivers in the car — and for the car itself, I imagine — the car did the right thing. It came autonomously to a stop, waited until the woman had shooed the duck off the road and left the street herself, and then the car moved down the street again. That definitely wasn’t on any list of things we thought we’d have to teach a car to handle! But now, when we produce a new version of our software, before that software ends up on our actual cars, it has to prove itself in tens of thousands of situations just like this in our simulator, but using real world data. We show the new software moments like this and say “and what would you do now?” Then, if the software fails to make a good choice, we can fail in simulation rather than in the physical world. In this way, what one car learns or is challenged by in the real world can be transferred to all the other cars and to all future versions of the software we’ll make so we only have to learn each lesson once and every rider we have forever after can get the benefit from that one learning moment.
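The replay process described above — grading a new software version against logged real-world moments before it ever touches a car — can be sketched as a tiny regression harness. The scenario format, function names, and toy policy here are all invented for illustration, not the team’s actual system:

```python
def replay_scenarios(decide, scenarios):
    """Run a candidate decision function against logged scenarios.

    decide: maps an observation dict to an action string
    scenarios: list of (observation, acceptable_actions) pairs
    Returns the scenarios the candidate got wrong, so a regression
    fails in simulation rather than on the road.
    """
    return [(obs, expected) for obs, expected in scenarios
            if decide(obs) not in expected]

# A toy candidate policy: stop for anything unusual blocking the lane.
def cautious_policy(obs):
    return "stop" if obs.get("obstacle_in_lane") else "proceed"

# Logged moments with the set of acceptable responses for each.
logged = [
    ({"obstacle_in_lane": "woman in wheelchair chasing a duck"}, {"stop"}),
    ({"obstacle_in_lane": None}, {"proceed"}),
]
print(replay_scenarios(cautious_policy, logged))  # → [] (no failures)
```

The key property is the one the talk highlights: every surprising moment any car ever encounters becomes a permanent test case, so each lesson only has to be learned once.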
So most of you have probably heard of Glass. This is an example of an [x] product that we knew we had to get out into the real world at a very early stage to see how it might work. People have been envisioning how our physical and digital lives will merge through the use of smart glasses in sci-fi TV shows and movies for more than 30 years now. Knowing how to convert that into a product that can be made today and will really work for people is a very different matter. This is exactly why we created the Glass Explorer program.
The program allowed us to get an early version of the device into the hands of a lot of different people. The Explorer edition of Glass wasn’t for everyone, but the Explorer program pushed us to find a wide range of near term applications and uses for something like Glass. From firefighting to surgery, from cooking to learning to play the guitar, interacting with information hands free clearly has a lot of use cases. We also quickly saw areas for technical improvements — the battery life was a major obstacle and an area where we had to invest — but the program was designed just as much for social testing as it was for technical testing. We needed fearless pioneers, and we’re grateful to everyone — probably many of you in this room — who came on this adventure with us.
In retrospect, we made one good decision and one bad decision around the Glass Explorer program. The good decision was that we did it. The bad decision was that we allowed and sometimes even encouraged too much attention for the program. Instead of people seeing the Explorer devices as learning devices, Glass began to be talked about as if it were a fully baked consumer product. The device was being judged and evaluated in a very different context than we intended — Glass was being held to standards that launched consumer products are held to, but the Explorer edition of Glass was really just an early prototype. While we were hoping to learn more about how to make it better, people just wanted the product to be better straight away — and that led to some understandably disappointed Explorers.
But of course, we learned a lot from the very loud public conversations about Glass and will put those learnings to use in the future. I can say that having experimented out in the open was painful at points, but it was still the right thing to do. We never would have learned all that we’ve learned without the Explorer program and we needed that to inform the future of Glass and wearables in general.
Glass graduated from [x] earlier this year, so stay tuned for that future. And in the meantime, those of you weighing up your own execution risks and trying to figure out a plan for testing market readiness for a new product or technology, my advice is — go out and talk to people, and prototype, and talk some more, and prototype some more, and create as many opportunities to learn as you can. You’re never going to figure out the right answer sitting in a conference room.
One of our earliest projects at [x] was called Genie. We worked on it for about 18 months and then spun it out into a standalone business where it has been growing and thriving for the past two and a half years. The original goal of the Genie project was to fix the way buildings are designed and built by building, basically, an expert system, a software Genie if you will, that could take your needs for the building and design the building for you. The problem is there and very real. The built environment is an $8 trillion per year industry that is still basically artisanal. It produces almost half the world’s solid waste and nearly a third of the world’s CO2 emissions. Over the first 18 months of the project, though, we found out that the system we envisioned couldn’t connect into the infrastructure and ecosystems for building the built environment because that software infrastructure is piecemeal and often not software at all but just knowledge trapped in the heads of the experts in the field.
Having learned this, the company, now called Flux, took a huge step back. The goal for the company is the same but it had realized through these extended rounds of interaction with structural engineering firms, architecture firms, developers, and contractors that before such a software Genie could even be contemplated, a software foundation and data layer had to be laid, much as you would do with a building.
The picture here shows, in blue, the zoning areas for downtown Austin. You see that lighthouse-like spray-out from the center of the map? Those are sight lines — you can’t build a building in Austin that blocks the view of the state capitol building dome along these lines. And every one of the other circles and squares on that map is another zone with its own special rules. There are many areas where a half dozen or more zoning regions apply to the same plot of land. Imagine for a single plot of land trying to figure out from all those rules (many of which change from year to year) what exactly you’d be allowed to build there. Even worse, imagine trying to ask, across the whole city, “I want to build a building like this. Where are places where the zoning would allow me to build it?” In the lower right hand corner here you can see Flux now answering this question automatically. This is an example of the groundwork the company is laying: creating an automated way to keep track of various cities’ building codes and their ramifications for building design.
Flux is one of the successful graduations from Google[x], and the only one to date that we’ve moved out into an independent company. We don’t have a playbook for how these graduations “ought” to work, and that has allowed us to remain flexible, to run experiments on the graduation process itself, and to learn how to get the best possible graduation style and timing for each project given its unique needs and opportunities.
Project Wing is our project for delivering things via self-flying vehicle. There is a huge amount of friction left in how we move things around the world. If much of the remaining cost, safety issues, noise, and emissions could be removed from deliveries while making them take minutes instead of hours, we see great positives that can come from this. Sergey pushed that team out the door last summer…literally out the door to the Australian bush, telling them to go try to deliver something in the real world to someone who wasn’t a Googler. This actually managed to both prolong a failure of ours and help us to end it and how that worked out will be useful learning for us for other [x] projects.
When Project Wing started, the first and most obvious question was “Can we use an off-the-shelf vehicle to do this service?” It would be fantastic if we could because then we could focus on the software and sensor issues and move through the learning a lot faster. Sadly, we satisfied ourselves pretty quickly that for speed, payload size, and efficiency reasons, no existing vehicle was even close enough to start from. That raised the question of which sort of vertical takeoff and landing vehicle style we would gravitate to, and in the end we picked the tail-sitter style. A tail-sitter sits up on its haunches when it’s on the ground, lifts off straight up into the air using rotors like a helicopter, and then falls forward into a plane-like position for forward flight, becoming a flying wing like an airplane. Then at the destination it leans back into hover mode again. Basically, this vehicle morphology is mechanically simple but harder than many other vehicle forms from a control systems perspective. But since the original Wing team was stronger on control systems than on system engineering of new airborne vehicles, this seemed like a good trade-off. Plus software is getting better faster than hardware in most domains so shifting the hard part to software was a reasonable thing to try.
Unfortunately, the tail-sitter was not the right choice. It doesn’t hover well in higher winds and it sloshes the cargo around each time it leans back and forth. I would say that 50% of the team felt this after 8 months and 80% of the team was confident about this 1.5 years into the project. But we were resistant to giving up on it because we were conflicted. We hate sticking with things once it looks likely they are the wrong path. On the other hand, we wanted to get out into the world as fast as we could and if we went back to the drawing board, it felt like it would delay doing what is one of the central mantras at [x], “Get out into the world and start racking up high quality real-world experiences and learning.” It was in this context, and the team debating this issue heavily, that Sergey just decided for the team by giving them a deadline of 5 months to get out into the world and do some real deliveries to non-Googlers. This had two effects. The first was that it caused the team to double down on the tail-sitter design since there was no way to make anything else work well enough in 5 months. Given that we already knew this vehicle design was probably wrong, this seems bad on the surface and maybe was in some ways not the right thing to do. On the other hand, we did get out into the world, we did do those deliveries to non-Googlers (in Queensland, Australia last August), and we did learn a ton from doing it. While it prolonged the wrong path for 5 months until we had done the deliveries, as soon as the team came back from Australia, they were freed up, with no impending deadline, to do what many of them had wanted to do for more than a year by that time, which was to move away from the tail-sitter design. And so perhaps Sergey’s pushing the team out the door, even if it prolonged the tail-sitter design by 5 months, also made it possible for us to move on after that. Without that deadline, maybe it would have taken even longer to move on from the tail-sitter design.
The team had, even before they went to Australia, taken another hard look at whether there was any off-the-shelf vehicle which could work for our purposes and, having decided again that such a vehicle still didn’t exist, they’d been prototyping a new kind of vehicle for a few months in the background. Since returning from Australia they’ve been hard at work on this new vehicle, the control systems that go with it, the sensors that go on it, and the ways it will provide the service and we look forward to telling you about that sometime later this year.
Now I have a story about failing to fail. One of the Google[x] projects making great progress over the past year or so is Makani. The goal of the Makani project is to build an airborne wind turbine, an “energy kite,” that can harness the power of the wind at a fraction of the cost per kilowatt of traditional onshore and offshore wind turbines. Such a system if it worked as designed would meaningfully speed up the global move to renewable energy.
The basic opportunity with wind turbines is that the higher up you go, the faster (and more consistent) the wind is. And that is very attractive since the power of the wind goes up with the cube of the wind speed. But large turbines today, the kind that have the hub for their blades at about 100 meters, already weigh 200 to 400 tons. That is a huge amount of weight to manufacture, move to the site, and install. And roughly the weight of the turbine goes up at nearly the cube of the height of the tower, so the net benefit to making these turbines taller is not as big as you might think.
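The cube relationship mentioned above is the standard result for the kinetic energy flux of air through the area swept by the blades (not anything specific to Makani):

```latex
% Mass of air crossing swept area A per second, at air density \rho
% and wind speed v:
\dot{m} = \rho A v
% Each unit of mass carries kinetic energy v^2/2, so the available power is
P = \tfrac{1}{2}\,\dot{m}\,v^{2} = \tfrac{1}{2}\,\rho A v^{3}
% Hence wind twice as fast carries 2^3 = 8 times the power.
```

A real turbine captures only a fraction of this (at most about 59%, the Betz limit), but the cubic scaling is why reaching the faster winds at 250 meters is so attractive.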
But the version of the Makani energy kite that we’ll start flying next month weighs 1% as much and the center of the virtual circle it draws in the sky is not at 100m but at 250m, up where the winds tend to be both stronger and more consistent. It lifts off its perch and draws power up a tether, running its propellers quite like the tail-sitter I just mentioned. But once it gets out to a tether length of about 450 meters, it goes into crosswind flight — these big circles you see here. And as the wind blows through this circle it describes in the sky, instead of pulling power up the tether to run its propellers, it puts drag on its propellers, making them into 8 flying turbines and passing 600 kilowatts back down the tether.
The version of our energy kite that is about to start flying next month is 84 feet across. But to learn about all the different flight modes this sort of system would have to deal with elegantly, a 28 foot version (which is what you see flying here) was built first. Larry Page told me, a little over two years ago, that he wanted to see us crash at least five of these scale versions of the energy kite. Obviously he wants us to be safe and we work very hard to be safe in everything that we do. What he meant by that was that he wanted to see us push ourselves to learn as fast as possible and though the learning from the crashing itself would be close to zero, he was pointing out that if you aren’t failing, if you aren’t breaking your experimental equipment at least occasionally, you could be learning faster. In the spirit of that request, we did a lot of our flying at one of the windiest and gustiest places in North America, Pigeon Point in Pescadero, California. This pushed our system as hard as it could be pushed, with winds changing by 20 mph in seconds or high winds changing directions by 90 degrees within a few seconds. And yet, we failed to fail. We learned a huge amount from the hundred plus hours of flight time we accumulated with this scaled version of the energy kite, but we never crashed it. Not once. And it says something about Google[x] that we’re all a little conflicted about that.
One interesting form of failure is the kind that you don’t see coming. When the part of the project you assume will be easy turns out to be one of the hardest parts. That happened to Project Loon in a big way. Loon massively underestimated the difficulty of keeping balloons aloft for an extended period of time — like, we missed by a factor of 10 or 100. In June 2013 when we first tested Loon in New Zealand, we were keeping some balloons up for a few days at a time, but often just for a few hours. At first we simply assumed it shouldn’t be that hard to make super-pressure (that is non-stretchy) balloons that would stay up for more than 3 months at a time and it was only after we’d been trying and failing to make much progress on this for 2 or 3 quarters that it became clear this was going to be a much bigger learning process than we’d planned around. After that, the process became one of creating repeated opportunities for making the balloons fail in ways that taught us something, for learning more and more about what was causing them to fail so that we could fix those things.
The problem is that we would typically look the balloon over on the ground and everything seemed fine. Then we’d send it up to 60K to 80K feet and then it would spring a slow leak. These balloons, when inflated, are the size of this stage, and the leak could be the size of a pin prick. And the leaks would only appear once the balloon was at 2% atmospheric pressure, only once they were going through temperature swings between day and night of around 150 degrees celsius, only once it was in high shear winds, and so on. So how do we discover how those leaks appear? How can we reliably recreate the problems on the ground? There is no box you can put something 20m across inside and subject it to those kinds of conditions.
We tried testing in South Dakota during a polar vortex last winter to simulate stratospheric conditions on the temperature front. We’ve over-inflated them on the ground until they begin to leak just to see what that can teach us. We literally ran an experiment in our factory to see whether the fluffiness of the socks worn by the techs building the balloons affected the likelihood that the balloons later had a leak. And yes, it turned out that fluffy socks help since the techs have to walk around on the balloon material as they’re building it. In fact, to control for how they walked around on the material, we had them do a line dance together first all wearing thin socks and then all wearing the fluffy ones! And often, because there is no good way to recreate the problem on the ground, we had to laboriously form hypotheses about why the leaks were happening, make design changes to the balloon, and then fly balloons with and without that design change to run controlled experiments and see what happened. But since the leaks don’t always happen this was a very painful, slow way to find out if the design changes had helped or not.
We can laugh about this now because we’ve mostly fixed this problem but at the time it was quite stressful. Now, thankfully, balloons stay up for 6 months at a time, well beyond the 3 months we think we need for a viable service.
Back to the self-driving cars. The team drives a thousand miles of city streets every single day, in pursuit of moments that stump the car. We could have taken a MUCH easier path than the one we’ve chosen. Two years ago we had a perfectly good freeway commute helper. Freeway driving was easy for our cars at that point. You stay in your lane, change lanes occasionally, and don’t hit the guy in front of you — there’s the occasional poor driver who makes things a little interesting, but the car had basically mastered freeways.
In the fall of 2012, we wanted to get feedback from Googlers who weren’t on the self-driving car team. We asked people to volunteer to use our Lexus vehicles running our self-driving software during their commutes to work. We were that far along, two and a half years ago, that we gave people who weren’t part of [x] cars to take home and use. They could drive the Lexus to the freeway, push a button, and let the car drive, until their exit approached and they’d take back control of the car for the rest of their trip. We probably could have made a bunch of money just selling that.
But this real-world testing taught us something that steered us off that path we’d been on. Even though everyone who signed up for our test swore up and down that they wouldn’t do anything other than pay 100% attention to the road, and knew that they’d be on camera the entire time…people do really stupid things when they’re behind the wheel. They already do stupid things like texting when they’re supposed to be 100% in control…so imagine what happens when they think “the car’s got it covered.” It isn’t pretty. Expecting a person to be a reliable backup for the system was a fallacy. Once people trust the system, they trust it. Our success was itself a failure. We came quickly to the conclusion that we needed to make it clear to ourselves that the human was not a reliable backup — the car had to always be able to handle the situation. And the best way to make that clear was to design a car with no steering wheel — a car that could drive itself all of the time, from point A to point B, at the push of a button.
What’s funny is that over time, the self-driving car team’s success is becoming one of its biggest problems. The better you do at your job, the longer you have to wait for the next negative example that you can learn from — our cars are driving a thousand miles a day in Mountain View trying to find that next situation that we can learn from.
Failure doesn’t have to be “not succeeding.” Failure can be “We tried that and it didn’t work. Now we know more than we did yesterday and can go forward smarter.” It can also be “We’ve now tried this enough times and in enough different ways that we now think we should redirect our energies towards one of our more promising projects.”
As Google[x] is turning 5 years old and I look back over the past five years, I see plenty of mistakes we made. Cultural mistakes, engineering mistakes, product mistakes, and more. And when I see that parade of mistakes in my mind’s eye what I wish most is not that we could have avoided them. I don’t think it’s possible to have mistake-free learning and progress. I just wish we could have made all those mistakes faster.
Google[x] has come a long way and I’m proud of what our teams have accomplished. I would like to think we’ve made good progress in large part because of the experiments we’ve run, the negative results we’ve earned along the way, and by how we’ve paid attention to and responded to those negative results. We have graduated more than 10 projects from [x] at this point, some of which are more mature (like the Google Deep Learning Network we graduated 2 years ago) while others (like Google Glass or Flux) have a lot of direction but they are hardly done.
The projects at Google[x] still have very hard work and significant learning ahead of them. By design! They wouldn’t still be with us if that weren’t true. And I’m very grateful Google has the long term vision and commitment to allow us to run this process.
There is a temptation to think we’ve done all this despite our failures. The truth is exactly the opposite. We’ve accomplished this progress by harnessing our failures.
I’ve always wanted [x] to do more than work on its own moonshots. I would love to see Google[x] play a role in inspiring more moonshot thinking in other groups. So even if you’re not building a self-driving car, I hope you can take away something from our approach and set yourself up for creative, productive failure!