Thank you, OpenAI

Ann Miura-Ko
6 min read · Feb 28, 2023


Ann (@annimaniac) was one of the first investors in Lyft, Starkware, Studio, and Mem.ai. She has been on the Midas List for the last 5 years and was named to The New York Times’ list of the Top 20 Venture Capitalists worldwide. After earning her PhD in cybersecurity risk modeling from Stanford University, Ann co-founded Floodgate, one of the first seed-stage VC funds in Silicon Valley. Over the last 12 years, Floodgate has been investing in AI, with current investments including Applied Intuition, Mem.ai, Hebbia, Blooma, Arch, and more.

There are very few moments in the history of technology when we have witnessed something truly legendary: opening a personal computer for the first time, entering a query into a browser. For today’s legendary moment, we have OpenAI to thank. None of us will soon forget the first time we generated a realist painting with DALL-E or asked ChatGPT to write us a poem; it was a life-altering, believe-in-magic kind of experience.

What has felt like a decades-long, steady drip of progress in AI has hit a surprising inflection point with the launch of ChatGPT. This powerful interface has led the world to believe that language models and AI are capable of interacting with humans in a new way, representing a societal inflection just as much as a technical one. As one entrepreneur said to us recently, language is the API to humans, and OpenAI has shown us that compute can now access humans in a new way. In a chat-based interface, we see how AI empowers us to collaborate with compute rather than merely instructing it.

We are fortunate to have had the opportunity to spend time with members of the OpenAI team at their headquarters in San Francisco. A week ago, they kindly hosted Floodgate founders and friends to explore the power and future of OpenAI’s technologies. Later, over lunch, we harnessed the collective knowledge and imagination of industry leaders, practitioners, and experts in a discussion about the future landscape and applications of AI at large.

From that inspiring conversation, here are some observations and the areas we think are most interesting for innovation:

AI hasn’t eaten the world… yet.

What can history teach us about the future of AI and LLMs? On a comparative timeline, ChatGPT is closer to the launch of the iPhone than to the release of killer apps like Twitter or Uber/Lyft. It’s a powerful platform that harnesses the potential of AI, but it doesn’t necessarily capture ALL of the value it unlocks. With the public’s eagerness to adopt new tech, ChatGPT has gained impressive traction, reaching 100M users in just two months. But products based on LLMs have, for the most part, yet to be embedded into the regular rhythm of human life, unless you’re a student or a marketer using a tool like Jasper. The glimpse we have seen shows that there is still so much untapped potential.

We will transition to a self-driving enterprise experience

Our investment thesis of a self-driving enterprise, one that integrates AI into the software stack, didn’t come to fruition years ago. But now AI has entered the social consciousness, and it’s changing the AI stack entirely. Whether it is implementing a vector database and the associated workflows that make retrieval possible, or ChatGPT demonstrating the importance of UI in making AI accessible, the AI stack will be substantially different from our current enterprise stack, creating new opportunities for startups. In addition, because of the larger societal awareness, we are finding that large enterprise customers are not only curious, they are willing to pay for solutions that better automate their workflows, creating more consistent and intelligent experiences.
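
To make that stack shift a bit more concrete, here is a minimal sketch of the retrieval pattern that sits behind many of these new workflows: documents are embedded as vectors, the vectors are stored, and the closest matches are pulled back as context for the model. The `embed()` function and the sample documents below are placeholders of ours, not any particular vendor’s API; in production the embeddings would come from a real model and the in-memory list would be a vector database.

```python
# Toy retrieval workflow: embed documents, store the vectors, and
# find the most relevant context for a question.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic toy embedding; a real system would call an embedding model."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

# "Index" a handful of internal documents (stand-ins for enterprise data).
documents = [
    "Q3 revenue grew 18% driven by enterprise contracts.",
    "Support tickets about billing doubled after the pricing change.",
    "The onboarding flow was redesigned to cut setup time in half.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed(query)
    scored = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

# The retrieved passages would then be placed into the LLM prompt as context.
print(retrieve("Why did billing complaints increase?"))
```

The structure, not the toy embedding, is the point: the embedding step, the store, and the retrieval-into-prompt workflow are each places where the new stack diverges from the old enterprise stack.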

Data and workflow are the key to building a durable advantage

In a room full of founders and engineers, a big discussion point was how to deploy LLMs. Does a company run its own testing with its own data, or leverage existing data and models? The group concurred that it is critical to own the data and testing if possible, since many use cases are viable even with a small training set. Still, as you build, you have to figure out: what’s the feedback loop that you own and can optimize for? Companies that can demonstrate an improvement in model performance based on a fine-tuned workflow will reach escape velocity. The power of AI, especially generative AI, is unlocked in verticalized, use-case-specific applications. We also believe this makes innovation in the software stack around AI particularly interesting. When developers are flooding into a space with real enterprise customer interest, there are product opportunities to simplify existing, complicated workflows for customers with significant budgets.
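
One way to picture owning that feedback loop, purely as a sketch of our own (the event fields and file names are illustrative, not any vendor’s API): log what the model suggested next to what the user actually kept, then periodically distill the accepted examples into a fine-tuning set. Whoever owns this log owns the improvement curve.

```python
# Sketch of a feedback loop: record each model suggestion alongside the
# user's final action, then turn accepted interactions into training data.
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackEvent:
    prompt: str          # what the workflow sent to the model
    suggestion: str      # what the model produced
    final_output: str    # what the user actually shipped
    accepted: bool       # did the user keep (or lightly edit) the suggestion?

def log_event(event: FeedbackEvent, path: str = "feedback_log.jsonl") -> None:
    """Append one workflow interaction to the event log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

def build_training_set(log_path: str = "feedback_log.jsonl",
                       out_path: str = "training_set.jsonl") -> int:
    """Keep only accepted interactions and write prompt/completion pairs."""
    kept = 0
    with open(log_path) as log, open(out_path, "w") as out:
        for line in log:
            event = json.loads(line)
            if event["accepted"]:
                out.write(json.dumps({
                    "prompt": event["prompt"],
                    "completion": event["final_output"],
                }) + "\n")
                kept += 1
    return kept
```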

Where we see opportunity…

  1. Discovery of the unknown unknowns through ML: Our original investment thesis in AI and ML involved giving organizations the ability to live in a data-first world. In a data-scarce world, we apply the scientific method to discovery by starting with a hypothesis. In a data-rich world, the data tells you where to look. We believe that further tooling in this space, along with verticalized applications of such analytical tools, will enable organizations to leverage the mountains of data they are collecting. This investment thesis only feels more relevant today.
  2. Applied AI and ML apps: AI and ML apps that are tightly integrated into existing infrastructure and the workflow of an organization enable enterprises to reduce their dependence on brittle rules engines to address every corner case. The ability to answer business and product questions like, “which loan application should I look at first based on my underwriting” or “how should I respond to customer service requests without resorting to refunds while maintaining satisfaction ratings” or “what medical codes are missing from this health record” will create leverage for the companies that automate these workflows. We’re interested in applied AI/ML applications that take advantage of domain-specific data sets to exhibit better performance & accuracy, faster training and inference, and greater customizability.
  3. Privacy and control over original data and security of foundation models: As companies leverage consumer data to improve their models, consumers are likely to want ways of protecting their data. For some, like artists, their livelihood may depend on their ability to protect their creations or monetize variations that build upon their work. Many will want the ability to take advantage of AI and ML capabilities while maintaining control over their data. It feels fairly clear that bad actors will want to manipulate these algorithms as they become more widespread. Security in this context will be incredibly complex and critical. Finally, as we improve or adjust our work based on AI, we may require an accounting of how something was created. Providing tooling for privacy, control of data, and auditing capabilities will be critical in the near future.

As the underlying technology continues its steady march of exponential improvement, we welcome the opportunity to learn in public about where the opportunities lie. We regularly host dinners and small group discussions, so if you have thoughts to share and would like to chat with our team, let us know HERE.

And once again, thank you to OpenAI for including and empowering our founders to leverage this rare moment to innovate. As we sit at the precipice of such a significant paradigm shift, we believe startups hold the power to write AI’s next chapter. So, which piece of the stack are you taking on?
