How AI Startups Must Compete with Google: Reply to Fei-Fei Li

Mostapha Benhenda
HackerNoon.com
May 17, 2017


Google is a giant in artificial intelligence. Every day, their exploits in AI make the news. As a result, AI startups can feel overshadowed by this mega-competitor, and their vision can be cloudy.

Fortunately, to navigate through those murky waters, they can rely on Dr Fei-Fei Li, Director of Stanford’s AI Lab (SAIL). She is also known as the teacher of an online course on neural networks for computer vision. Recently, she became the Chief Scientist of AI & Machine Learning at Google Cloud, but since Google is not evil, she still benevolently offered her expert advice about how to compete with her new employer.

At the Startup Grind Global conference (sponsored by Google), she declared:

Journalist: “How can small startups compete against big players?”

Fei-Fei Li: “Part of it is that the big companies with the tagged data have advantages but they won’t go into deep verticals. And, if you’re a startup you have to be creative.

How do you snowball your data? So design the snowballing of that data. Importantly, develop an empathetic understanding of the problem you’re solving for the customer.”

(…)

Journalist: “Why [did you join] Google?”

Fei-Fei Li (laughs): “I want to democratize AI.”

Yes, AI startups must focus on their customers

Fei-Fei Li’s advice is right: AI startups must focus on solving their customers’ problems. This requires empathy and domain expertise; it’s not enough to know how to train machine learning models. Each industry vertical, like manufacturing, healthcare, or retail, requires a specific approach and skill set.

AI startups should design their products in a way that encourages users to input data. By doing so, startups can fuel the virtuous circle of AI: collect more data, build better models and products, attract more users, and so on. Designing this snowballing of data is essential.

The virtuous circle of AI

Each specific industry has a relatively small market size for AI, so going deep into such a market is not attractive to tech giants like Google. They prefer to build ‘horizontal’ AI: general-purpose models and Cloud infrastructure, which are used by the startups working in the various verticals. In this context, AI startups need less in-house AI expertise and data storage capacity, because they can find both on Google Cloud. That’s how AI is democratized, according to Fei-Fei Li.
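To make this division of labor concrete, here is a minimal sketch (mine, not the article’s) of how a startup in a vertical might reuse a general-purpose pretrained vision model and train only a small task-specific head on its own data. PyTorch/torchvision and the five-class manufacturing example are assumptions for illustration.

# Minimal sketch, not from the article: a 'vertical' startup reusing a
# general-purpose ('horizontal') pretrained vision model instead of
# training one from scratch. Assumes PyTorch/torchvision and a small
# domain-specific labeled dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_VERTICAL_CLASSES = 5  # hypothetical example: 5 defect types in manufacturing

# Load a general-purpose model pretrained on ImageNet.
backbone = models.resnet18(pretrained=True)

# Freeze the generic features; only the small vertical-specific head is trained.
for param in backbone.parameters():
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_VERTICAL_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One supervised update on the startup's own (small) vertical dataset."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

The heavy lifting (the generic visual features) comes from the ‘horizontal’ model; the startup supplies only its domain data and the thin vertical layer on top.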

AI ‘democratization’ is a mask for Google domination

With this viewpoint, startups respect the hegemony of big players in big ‘horizontal’ AI markets. In this AI democracy, startup-citizens remain quietly within their assigned vertical, subject to the discipline imposed from above by President Google Cloud.

Here again, democracy is used as a rhetorical mask for imperialism. Fei-Fei Li’s philanthropic speech about democratizing AI is reminiscent of the way US President George W. Bush spoke about ‘democratizing’ Iraq in 2003, while he invaded it and took over its oil. Data is the new oil, and Google Cloud is the Bush of AI.

American soldier guarding an oil field as a way to ‘democratize’ Iraq. Likewise, Google Cloud guards data as a way to ‘democratize’ AI.

However, things may not go as planned. Like Bush in Iraq, Google Cloud may meet fierce resistance on the ground. A quagmire lies ahead. More precisely, Fei-Fei Li should have a look at “The Structure of Scientific Revolutions” by the American philosopher Thomas Kuhn. In this book, she will be reminded that science is subject to frequent and disruptive paradigm shifts, which shape its evolution. Therefore, Google’s presidency over AI can be overthrown by the next scientific revolution. Google Cloud can be swept away by the next desert storm. Let’s see how.

The next AI will not need Google data that much

Google owns a lot of proprietary data, but maybe it is not so important in the long term. Current AI technology is based on supervised learning: in order to learn the desired behavior, the AI must be shown a large number of similar instances. A more advanced AI would need much less training data. After all, in order to learn to recognize cars, babies do not need to see thousands of them. Babies don’t need Google data to become intelligent, and neither should an advanced AI.

For example, generative models are a promising step towards this next AI. After being shown a large amount of data in some domain (e.g., millions of images, sentences, or sounds), a generative model can produce even more data like it. Therefore, a generative model can produce synthetic data to train a second model based on supervised learning. If startups can rely on this free synthetic data, they won’t need Google Cloud.

Synthetic birds
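To make this pipeline concrete, here is a minimal sketch (mine, not the article’s) of the two-step idea: fit a simple generative model per class, sample synthetic data from it, then train a separate supervised classifier on that synthetic data alone. scikit-learn and the iris dataset (standing in for a startup’s small domain dataset) are assumptions for illustration.

# Minimal sketch, not the article's code: generative model -> synthetic data
# -> supervised model. Assumes scikit-learn; iris stands in for a small
# domain-specific dataset.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Generative step: one Gaussian mixture per class, fitted on the real data.
synthetic_X, synthetic_y = [], []
for label in np.unique(y_train):
    gm = GaussianMixture(n_components=2, random_state=0)
    gm.fit(X_train[y_train == label])
    samples, _ = gm.sample(500)          # produce 'more data like it'
    synthetic_X.append(samples)
    synthetic_y.append(np.full(500, label))

synthetic_X = np.vstack(synthetic_X)
synthetic_y = np.concatenate(synthetic_y)

# 2. Supervised step: train a second model on the synthetic data only.
clf = LogisticRegression(max_iter=1000)
clf.fit(synthetic_X, synthetic_y)

# Sanity check: a classifier trained purely on synthetic data, evaluated on real data.
print("accuracy on real held-out data:", clf.score(X_test, y_test))

The same pattern scales up: replace the Gaussian mixture with a modern deep generative model, and the logistic regression with whatever supervised model the vertical requires.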

That’s why generative models are a strategic direction of research for the AI startup community. We are investigating generative models on the Startcrowd open platform, and everyone is welcome to join this effort.

Google is struggling to incorporate fundamental research

Most of Google’s research culture is unfit for disruptive science. Google has a ‘hybrid approach to research’: research is product-oriented, and it proceeds by incremental changes to existing solutions. Google can still have long-term research goals, but such large goals must be ‘factorized’ into a sequence of short-term goals, each of which needs to impact Google products.

This research culture is unattractive to some talent. For example, many AI academics, like Yoshua Bengio or Jürgen Schmidhuber, do not want to join a tech giant: they prefer to keep their independent research agenda, without pressure to impact company products.

Moreover, this Google science policy is vulnerable to paradigm shifts. A lot of major technological progress comes from peripheral topics, which had little impact before, and which suddenly move to center stage.

One such example is the meteoric rise of AI and neural networks. Within a couple of years, this academic curiosity became the backbone of a new industrial revolution. Mainstream technologists did not see AI coming, and Google founder Sergey Brin was no different.

Google knows the limits of their short-term approach to research. That’s why they establish separate divisions for long-term projects. In AI, their main one is the London-based DeepMind. However, the problem with this policy is that ideas do not circulate well enough between DeepMind and the rest of Google. DeepMind remains quite insulated from the Google mothership. How many Googlers are really following DeepMind research? I guess not many, and it would be interesting to survey Googlers about it.

To remedy this lack of knowledge transfer within Google itself, DeepMind opened a dedicated team at Google HQ in California, and the future will tell whether it’s effective.

AI startups can better combine fundamental and applied research

The boldest startups can find inspiration in this situation. They can dedicate part of their resources to fundamental research. This perk can attract the right kind of talent and keep teams stimulated. Startups can go beyond the usual 20% projects allowed to Googlers, and establish a policy of 50% fundamental research projects.

Google has 20% projects; AI startups should raise that to 50%

When fundamental and applied research are conducted by the same people, with dual specializations, the bandwidth between the two is increased, which helps provoke improbable but high-impact breakthroughs. This takes Google’s ‘hybrid approach to research’ one step further: now there is no pressure to impact company products.

This science policy is easier to implement in a small startup with a flat hierarchy. In order to climb the corporate ladder of a large organization like Google, performing research off the beaten path is a bad strategy. In a startup, on the other hand, there is no ladder to climb, and far fewer incentives for conformism. Some conservative venture capitalists might dislike this approach to research, but for bold startuppers, it will certainly be a lot of fun, and they may even change the world.

AI startups can use the Cloud to subvert Google

Finally, AI startups can use the Cloud to ‘democratize’ AI within Google itself. AI can be ‘democratically’ spread the other way round. By doing their R&D in the open, AI startups can attract curious Googlers with their AI challenges, and distract them away from their daily jobs. For example, like OpenAI, they can propose their own ‘Requests for Research’. They can even go one step further, and use a dedicated Cloud platform: Startcrowd.

With Startcrowd, Googlers just sit one click away from an exciting AI startup adventure. They can start awesome AI side-projects, while staying comfortably within the walls of the Googleplex, with their monthly paycheck from Google. On Startcrowd, Googlers can enjoy a frictionless startup experience, until they decide to work on their AI project full-time.

What if angry managers find out? No problem, they won’t. Participation can be anonymous, and for additional discretion, the user interface will soon be customizable for camouflage purposes. Forget about Facebook and YouTube: Startcrowd is the place to go for smart procrastination. Life is short. Time to liberate your creativity. Work on problems you love.

The Googleplex: a good place for startcrowdination.

Between Google Cloud and Startup Crowd, the battle will be epic. They can acquire us with their money and data, but we can infiltrate them with our side-projects.

Googlers can start their side-projects in AI on www.startcrowd.club
