Mistral AI Strategy To Beat OpenAI
Mistral AI, the Parisian rival of OpenAI, just closed a $415 million funding round at a $2B valuation, after releasing an open-source MoE (Mixture of Experts) model that beats GPT-3.5 and other SOTA 70B models like Llama-70B on most benchmarks. This is an impressive achievement for an eight-month-old startup. I have to admit that I was a bit skeptical about Mistral when they first raised hundreds of millions in VC funding without any product. But now I am impressed. That's why I decided to take a closer look at their strategy.
I managed to get my hands on the strategy memo Mistral wrote before raising VC money for the first time. In this article, I will analyze and comment on it. You can find the link to the original memo at the end of this article.
Limiting factors and limitations
The limiting factor in generative AI (genAI) is the small number of researchers able to train and deploy foundation models. They are the scarce resource at the moment, and I agree with Mistral about how crucial it is to assemble a team of such researchers.