
What is the Dominant Strategy in AI?

Published in Angular Ventures Weekly by Angular Ventures · 9 min read · May 9, 2023


Angular Ventures Weekly Issue #184: For the week ended May 9, 2023

Subscribe here to receive new Angular blogs, data reports, and newsletters directly to your inbox.

What is the dominant strategy in AI?
David Peterson

Much has been written about who will capture value in this new AI era. Startups or incumbents? Builders of foundational models, or builders of vertical software that leverages those models in their products?

For my part, I always thought that a vertically integrated player — that is, a firm that owns everything from the foundational model all the way up the stack to the software itself — would dominate.

This conclusion stems from Clayton Christensen’s theory of low-end disruption, which he detailed in Disruption, disintegration and the dissipation of differentiability (and expanded upon in his book The Innovator’s Solution). The theory is simple. In markets that are technically demanding (or nascent), an integrated approach will initially win, because integration enables firms to produce superior products.

Over time, modularized products get to be “good enough” and the dominant strategy shifts from vertical to horizontal. But it can take a while! Just look at Intel and its decades of vertically integrated dominance.

This theory is famously flawed — Christensen predicted Apple’s demise again and again to no avail — but it still has a lot of explanatory power in the B2B world. And given the speed at which OpenAI achieved dominance, I thought perhaps we were witnessing the next great vertically integrated behemoth taking shape.

But it seems the dominant strategy may have already shifted.

Just about two months ago, Meta open sourced the code to LLaMA, its large language model. One week later, the weights were leaked as well, effectively open sourcing the inner workings of the model itself. And since then, the technical advances achieved by the open source community have been nothing short of astounding, putting both OpenAI and Google on their heels.

Last week, an internal Google memo titled “We Have No Moat, And Neither Does OpenAI” was leaked. And according to the author, the technical advances from the open source movement have fundamentally changed the competitive landscape.

Specifically, the benefits of building your own foundational models have all but disappeared, due to two shifts:

1) Small models can now outperform large models for a fraction of the cost. “LoRA updates are very cheap to produce (~$100) for the most popular model sizes. This means that almost anyone with an idea can generate one and distribute it. Training times under a day are the norm. At that pace, it doesn’t take long before the cumulative effect of all of these fine-tunings overcomes starting off at a size disadvantage. Indeed, in terms of engineer-hours, the pace of improvement from these models vastly outstrips what we can do with our largest variants, and the best are already largely indistinguishable from ChatGPT.”

2) Data quality scales better than data quantity. “Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn’t Do What You Think, and they are rapidly becoming the standard way to do training outside Google. These datasets are built using synthetic methods (e.g. filtering the best responses from an existing model) and scavenging from other projects, neither of which is dominant at Google.”

It seems the era of vertical integration in AI may already be over.

Here’s another analogy for you. If OpenAI is Intel, the open source community is the Taiwan Semiconductor Manufacturing Co (TSMC). Intel benefited from vertical integration for decades. But TSMC was able to wedge into the market by realizing that there were many chip designers targeting niche markets who couldn’t afford to build their own fab. In working with all sorts of chip designers and suppliers, and focusing exclusively on manufacturing excellence, TSMC eventually surpassed Intel’s manufacturing capabilities, fundamentally disrupting Intel’s business.

The open source community, in this somewhat tortured analogy, achieved in weeks what TSMC achieved in years. What just happened is as if TSMC’s manufacturing capabilities had been open sourced and made available to everyone for free. (The benefit of playing with bits instead of atoms!)

In other words, it’s as if we’re all chip designers now with our own state-of-the-art fabs running in the cloud.

So what are you going to build?



May 31 / US Immigration Best Practices
Jennifer Schear, Founding Partner, Schear Immigration Law Firm


Looking Back to Move Forward
How to survive this extraordinarily exciting and wildly disconcerting age of generative AI.

LLMs and the Future of Customer-built Software Design
How will LLMs change software development and design?

Navigating AI’s iPhone Moment
A venture perspective on LLMs and what’s next…

Principles for AI Product Design
Or how we could all learn a little from Google’s conversion optimizer.



Google: “We Have No Moat, And Neither Does OpenAI.” An incredibly thoughtful leaked memo from Google laid out a vision for the future of AI in which open-source rapidly comes to dominate the category. While the conclusion can be disputed, we agree with it. However, one of the most interesting sections of the memo was the comparison of AI to other technology waves that Google dominated by hitching its wagon to open source: “The value of owning the ecosystem cannot be overstated. Google itself has successfully used this paradigm in its open source offerings, like Chrome and Android. By owning the platform where innovation happens, Google cements itself as a thought leader and direction-setter, earning the ability to shape the narrative on ideas that are larger than itself. The more tightly we control our models, the more attractive we make open alternatives. Google and OpenAI have both gravitated defensively toward release patterns that allow them to retain tight control over how their models are used. But this control is a fiction. Anyone seeking to use LLMs for unsanctioned purposes can simply take their pick of the freely available models. Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.”

Specialized AI for the win? Emergence Capital has long been one of the evangelists of vertical SaaS, and they are now making the case for vertical genAI. “As we settle into the generative AI era, we believe that the key step for founders will be to apply the lessons learned from the platform shifts that came before. What began as broad, horizontal software narrowed and became specific, and later became industry-focused. In this next age of technology, specialized software and proprietary data are even more important than they were in the cloud era. With the rise in popularity of AI, companies are now trying to answer any question a consumer could possibly ask. But in business settings, similar to all cloud software, AI’s value will be most powerful when tightly focused. This is because generic AI models were trained on large swaths of data instead of the narrow, specific use cases required to provide value to these B2B organizations.”

The defense establishment is nervous about AI. Craig Martell, the Pentagon’s chief digital and AI officer, is sounding the alarm over AI: “My fear is that we trust it too much without the providers of that service building into it the right safeguards and the ability for us to validate” the information, Martell said. That could mean people rely on answers and content that such engines provide, even if it’s inaccurate. Moreover, he said, adversaries seeking to run influence campaigns targeting Americans could use such tools to great effect for disinformation. In fact, the content such tools produce is so expertly written that it lends itself to that purpose, he said. “This information triggers our own psychology to think ‘of course this thing is authoritative.’”

Microsoft is playing hardball over Edge. The software giant, which by all accounts is ahead in the race to capitalize on AI through its smart partnership with OpenAI, appears to be making aggressive moves to push its browser, Edge, into the enterprise. “Microsoft has now started notifying IT admins that it will force Outlook and Teams to ignore the default web browser on Windows and open links in Microsoft Edge instead…This relentless push of Edge, including through Windows Update, could all backfire for Microsoft and end up alienating Edge users instead of tempting them over from Chrome.”

GenAI fears in the enterprise. Samsung banned employees from accessing ChatGPT after some corporate data leaked onto the platform. “The company conducted a survey last month about the use of AI tools internally and said that 65% of respondents believe that such services pose a security risk. Earlier in April, Samsung engineers accidentally leaked internal source code by uploading it to ChatGPT, according to the memo. It’s unclear what the information encompassed. A Samsung representative confirmed a memo was sent last week banning the use of generative AI services.”


Enterprise sales keep getting tougher. Jason Lemkin, VC and founder of the SaaStr conference, has always had a strong Twitter game, but he’s really ramped it up lately. This week, he tweeted a breakdown of Cloudflare’s quarterly report — complete with some fantastic stats on their sales efficiency and the impact of the slowing economy on their sales performance. “Their top reps are still closing, and closing well — at 129% of quota. And their productivity is only down 1%. ‘The top 15% of our sellers have achieved 129% of quota over the last four quarters.’ Newer reps are doing just fine, too. ‘27% of the top reps started in the past 18 months.’ This dispels a bit of the myth that the seasoned reps are keeping all the good deals and business. But … 100 reps essentially sold nothing. They only contributed 4% of reps. Yikes. But I’m seeing that at many SaaS companies today, as the best keep on selling … but many of the rest struggle in a world that is no longer order taking.”

Back to the office. Sam Altman, former head of YC and currently the CEO of OpenAI, called the remote work “experiment” a mistake: “I think definitely one of the tech industry’s worst mistakes in a long time was that everybody (thought they) could go full remote forever, and startups didn’t need to be together. There was going to be no loss of creativity. I would say that the experiment on that is over, and the technology is not yet good enough that people can be fully remote forever, particularly on startups.”


Hanging up the hat. One of my favorite VC writers, Charlie O’Donnell from Brooklyn Bridge Ventures, wrote a heartfelt post on his decision to stop actively investing as a VC. Charlie’s writing was always sincere and real, and this post — while the conclusion is controversial — is no exception: “What I’ve realized since getting married and having a kid is that the way I’ve done this job just isn’t sustainable over the next twenty years the way I’ve been doing it for the last twenty — not with a spouse and a fantastic little kid who I want to spend as much time with as possible before she goes to school. There’s a lot more competition than ever from scouts, syndicates, and a whole generation of Gen-Z folks with tiny pools of capital and gobs of free time that spend all their time specializing in the narrowest of verticals trying to predict the next big thing — and the speed of how fast the next big thing blows up these days… I’ve never seen anything like it.” For a taste of his analytical side, check out this post on why the AI investment opportunity may not be as large as many think.


Lulav Space has partnered with Sidus Space to offer a solution for guidance, navigation, and control on lunar missions.

Reco, a leader in SaaS security, announced a partnership with UiPath to increase security across the company’s SaaS collaboration platforms.