AI in 2023: When six months feels like six years…

It’s been an exciting six months, not just for AI but for technology as a whole. Many hypothesised about what the impact of LLMs could be, and week by week the reality unfolded, moving much faster than anyone imagined. We’re really witnessing the power of compounded exponential technology development.

Sivesh Sukumar
Balderton
5 min read · Jul 5, 2023


Last year we published a three-part series on ideas we’d built conviction in.

Although a lot of these ideas are more mainstream now, they were fairly contrarian at the time due to a number of unanswered questions. Even the idea of building a product on third-party ML models was contrarian back in October, but we’ve since seen time and time again how powerful this can be.

At Balderton, we like to hold ourselves accountable, so we thought it’d be fun to break down what we got right (and not so right) below…

Marking our 2023 predictions…

AI-native SaaS

Thesis — Opportunity in building productivity tools from the ground up with LLMs. UX becomes the bottleneck for AI, and value accrues in helping users extract value from models rather than in the models themselves.

✅ The barrier to entry for NLP tasks has significantly lowered. Incumbents built around proprietary NLP are at risk. A wide range of AI-native apps are building on APIs and solving problems in UX rather than ML, creating a huge amount of value. Many businesses have seen explosive commercial traction by leveraging the “wow” moment to convert users into paying customers.

❓ Incumbents are moving much quicker into the space than expected. The route to defensibility via a “data flywheel” is still largely unproven, but we are seeing the best teams find profitable niches.

Examples: PhotoRoom, Tome, Supernormal, Jasper, ChatGPT, Casetext, AdCreative, AutogenAI

Neural search

Thesis — Vector search matures and becomes the go-to method to store and manage unstructured data. There is a rise in vector databases, neural frameworks, and an “unbundling of Google” via verticalized search applications.

✅ Adoption of vector databases has exploded, along with large fundraises and valuations. Frameworks such as LangChain and Haystack have hit an inflection point. Retrieval Augmented Generation (RAG) is the most widespread use case for LLMs in production. Success is seen in vertical search applications with LLM-powered search at the core.

❓ The defensibility of vector databases is unclear (does it just become a feature?). Vector search isn’t perfect: it is blind to terms and concepts outside the embedding model’s training data unless fine-tuned, and it struggles with exact matches.
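To make the exact-match weakness concrete, here is a toy sketch of dense retrieval: documents are ranked by cosine similarity between embedding vectors. The documents and vectors below are invented for illustration; a real system would use an embedding model and a vector database. A query whose embedding lands near a document retrieves it, but a literal identifier such as a SKU is only found if the embedding space happens to encode it, which is why many teams pair dense retrieval with keyword search (hybrid search).

```python
import numpy as np

# Invented documents with made-up 3-d "embeddings"; real systems would
# compute these with an embedding model.
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.1]),
    "SKU-12345 spec sheet": np.array([0.2, 0.3, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: dot product of unit-normalised vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, k=2):
    # Rank all documents by similarity to the query embedding.
    scored = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

# A query embedded near "refund policy" retrieves it first.
print(search(np.array([0.8, 0.2, 0.1])))
```

In practice the brute-force scan above is replaced by an approximate nearest-neighbour index, which is exactly what vector databases provide.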

Examples: Haystack (Deepset), LangChain, Weaviate, Harvey, Qdrant, Pinecone, Causaly, Bloop, RobinAI

Actionable LLMs

Thesis — Focus on models to generate actions rather than models which generate content i.e. Deep Reinforcement Learning. AI-powered actionable assistants — “Copilot for X.”

✅ Autonomous Agents are the hottest topic in AI (more on this later). Enterprise focus is largely on using LLMs for automation.

❓ We thought Reinforcement Learning was the answer; in practice, off-the-shelf language models combined with tools have been sufficient. These use cases are still lacking in production.

Examples: Adept, Harvey, AutoGPT, Haystack (Deepset)

AI-native Infrastructure

Thesis — A new wave of tooling and infra emerges to help developers build and maintain applications leveraging LLMs and other foundation models.

✅ Most enterprise value is still accruing in the infrastructure layer. Neural Search infra and connecting LLMs with external data sources is the most exciting area of infra. The real winners as of today are compute/cloud providers.

❓ Prompt evaluation and optimization are more interesting than initially thought due to the shift from GUI to chat interfaces. The shift to a DIY stack and leveraging RLHF is more difficult than initially thought (QLoRA is making this easier).

Examples: HuggingFace, Haystack (Deepset), Humanloop, Context, Qdrant, Pinecone, TitanML

Diffusion models

Thesis — Businesses leveraging diffusion models are more interesting than transformer-centric businesses because they are less proven (more contrarian) and offer a greater ability to “own the stack” thanks to smaller, ~1B-parameter models. The models are great for image generation, and there are signs of success in other modalities such as video, audio and physics.

✅ Diffusion models in audio and video are seeing success. Lots of startups are “owning the stack” and fine-tuning models thanks to smaller ~1B-parameter models. The pace of research in this vertical has stepped up.

❓ We overestimated the TAM for diffusion model use cases; it will always be a fraction of language use cases (B2B runs on language).

Examples: Elevenlabs, Layer, PhotoRoom, RunwayML, Scenario, Synthesia

The above theses are far from proven and even further from mature. However, as investors, we need to keep iterating on what we think might happen next. Some other areas in AI we’re excited by are below…

What’s next?

Autonomous Agents

Agents use LLMs as the “reasoning engine”: rather than executing a deterministic sequence of actions, they combine LLMs, external tools (e.g. search, APIs, databases, calculators and running code) and a recursive feedback loop. There are various limitations, including high compute costs, difficulty choosing tools, and a lack of reusable memory. However, the main challenge for widespread adoption, especially in enterprise, is the transition from deterministic to probabilistic software.
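A minimal sketch of that loop, assuming nothing beyond the description above: the model picks a tool, observes the result, and repeats until it decides to answer. The “LLM” here is a scripted stand-in (and the tool names and replies are invented) so the loop runs offline; a real agent would prompt a model with the history and parse its reply.

```python
def calculator(expr: str) -> str:
    # Toy tool: evaluate a fixed arithmetic expression.
    # eval is for illustration only; never run it on untrusted input.
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_llm(history):
    # Stand-in for the reasoning engine: returns (action, argument).
    if not history:
        return ("calculator", "2 * 21")  # first step: call a tool
    # Once an observation exists, produce a final answer.
    return ("final", f"The answer is {history[-1][1]}")

def run_agent(max_steps=5):
    history = []
    for _ in range(max_steps):  # recursive feedback loop, bounded
        action, arg = scripted_llm(history)
        if action == "final":
            return arg
        observation = TOOLS[action](arg)  # act, then observe
        history.append((action, observation))
    return "gave up"

print(run_agent())  # -> "The answer is 42"
```

The `max_steps` bound and the tool registry are the two places where the guardrails and tool-selection infrastructure discussed below would plug in.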

We see some white space opportunities in “Infra for Agents”, such as indexing engines for tool selection, guardrails on what agents can do, and simulation engines to help weigh up the risk/reward of probabilistic software. There are also some obvious use cases for Agents, such as B2B brokerage, marketplaces and writing code.

Rise of Open Source Models

It’s hard to ignore what’s going on in the open source AI ecosystem right now. The velocity of development far outpaces what’s happening in closed research labs. “Google has no moat, and neither does OpenAI”. It’s clear that progress in AI is more about research and hyper-optimisation (a.k.a. trial and error) than sheer scale, and the open source community is much better at this than corporate research labs.

The barrier to entry for fine-tuning has also been vastly lowered by LoRA, opening up a world of opportunity.
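The core LoRA idea can be sketched in a few lines: freeze the pretrained weight matrix W and train only a low-rank update B @ A, so the number of trainable parameters scales with the rank r rather than the full matrix size. The dimensions below are illustrative, not taken from any particular model.

```python
import numpy as np

d_in, d_out, r = 768, 768, 8  # illustrative sizes; rank r << d
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # B starts at zero: no change at init

def adapted_forward(x):
    # Effective weight is W + B @ A, applied without materialising it.
    return W @ x + B @ (A @ x)

full_params = d_out * d_in           # what full fine-tuning would train
lora_params = r * (d_in + d_out)     # what LoRA trains instead
print(f"trainable params: {lora_params} vs {full_params}")  # ~2% of full
```

Because B is initialised to zero, the adapted layer reproduces the frozen model exactly at the start of training, and only the small A and B matrices receive gradient updates.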

Computer Vision is having its “GPT-3” moment

FAIR recently released SAM (Segment Anything Model), a pre-trained foundation model for computer vision. The model’s ability to solve generalised tasks off the shelf is a glimpse that computer vision might be undergoing a shift similar to the one NLP went through.

Chat Interface With Data

Vector search has reinvented how we interact with unstructured data. Text-to-SQL has reinvented how we interact with structured data. Toolformer has reinvented how we interact with APIs. “The Last Mile of Analytics” has been one of the largest opportunities in the modern data stack and AI could be the answer we’ve all been waiting for.
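A toy sketch of the text-to-SQL pattern described above: translate a natural-language question into SQL, run it, and return the result. The table, question and hardcoded translation below are invented for illustration; a real system would prompt an LLM with the schema and the question instead of using a lookup.

```python
import sqlite3

# In-memory toy database standing in for a real analytics warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EU", 120.0), (2, "US", 80.0), (3, "EU", 60.0)])

# Hardcoded question -> SQL mapping standing in for the LLM translation step.
FAKE_LLM = {
    "total EU revenue": "SELECT SUM(amount) FROM orders WHERE region = 'EU'",
}

def ask(question: str):
    sql = FAKE_LLM[question]          # a real system would call a model here
    return conn.execute(sql).fetchone()[0]

print(ask("total EU revenue"))  # -> 180.0
```

The hard parts in production are the ones elided here: giving the model the schema, validating the generated SQL before execution, and handling questions the model translates incorrectly.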

Final thoughts

At Balderton, we believe that AI is one of the most exciting areas in technology today. We’ve invested in AI companies in areas as diverse as bioinformatics and autonomous vehicles, and we’ve backed many GenAI-native companies in the past 12 months, such as Levity, PhotoRoom, Supernormal and more that are unannounced. If you’re building in the space, feel free to reach out to us at jwise@balderton.com and ssukumar@balderton.com
