From high-throughput screening to AI: the quest to transform pharmaceutical R&D

Benjamin Belot
Kurma Partners
4 min read · Jul 2, 2024

In a data-driven world obsessed with productivity and efficiency, it makes sense for the pharmaceutical industry to consider the use of artificial intelligence, particularly in R&D.

Having been in the venture capital industry for a couple of decades, though, I have lived through several innovation waves, each bringing the same kind of excitement that AI brings today. In an attempt to use that experience to invest wisely with our innate (and certainly not artificial) intelligence, I decided to look back to the early days of my career, when genomics, combinatorial chemistry, and high-throughput screening were expected to revolutionize new molecule discovery through rationalization.

Until the 1980s, most drug candidates were small, chemistry-based molecules, and the discovery effort remained manageable for a single pharmaceutical group. Advances in biology then enriched the therapeutic arsenal with more complex approaches and molecules, such as monoclonal antibodies, gene therapy, cell therapy, and nucleic acids.

Each of these categories can be further divided into sub-categories. Monoclonal antibodies, for instance, can be conjugated to toxic molecules or engineered to be bispecific. The same applies to gene therapy, which can be permanent, transient, and so on.

Furthermore, production methods are specific to each type of molecule. What was exclusively the domain of chemistry in the 20th century grew to also involve biochemistry and genetic engineering, each requiring substantial dedicated investment.

Reactors used for the production of monoclonal antibodies

Meanwhile, the threat of blockbuster patent cliffs (i.e. the expiration of patents on molecules generating more than $1 billion in annual revenue) has been looming over the industry.

With the above in mind, pharmaceutical groups were forced to make choices to avoid diluting their R&D efforts. This led the industry to gradually focus on development and move further away from pure research. A popular option was to acquire innovative companies that had discovered molecules of interest, especially those at relatively advanced stages of clinical development.

At the end of the last millennium, many emerging biotech companies promised to speed up the discovery of drug candidates using technologies such as high-throughput screening, combinatorial chemistry, or gene editing. Three business models coexisted: (i) discovering drug candidates for their own account, then developing them in-house or licensing them to the pharmaceutical industry for staggered, milestone-based remuneration; (ii) providing high-value drug discovery and development services; (iii) a hybrid of (i) and (ii).

Robot used for high-throughput screening

I preferred the first model back then. My first-ever venture was a seed investment in Alantos in 2000; the company was eventually acquired by Amgen in 2007 for $300M. Using dynamic combinatorial chemistry, invented by Prof. Jean-Marie Lehn (Nobel Prize in Chemistry, 1987), Alantos discovered an inhibitor of DPP-IV (an enzyme playing a key role in glucose metabolism) that was in preclinical development at the time of the acquisition. To this day, though, I struggle to recall a company of this type that transformed into a fully integrated pharmaceutical research and development company (there may be a few, of course).

Jean-Marie Lehn, recipient of the Nobel Prize in Chemistry in 1987 together with Donald Cram and Charles Pedersen

AI-first companies specializing in molecule discovery face a similar choice: either provide services to the industry or develop a proprietary pipeline. The risk profiles and potential returns for an investor are very different in each case.

The service model is less likely to generate high returns but does not require extensive capital. Developing a pipeline of proprietary products, even at an early stage, while perfecting the technology is, on the other hand, extremely capital-intensive. And the chances of discovering a blockbuster are slim; hopefully less so with the use of AI.

AI-driven drug discovery companies seem to be leaning towards the second option, taking on the challenge! Not a week goes by without the release of a new (multimodal) LLM for biology, from AlphaFold 3 (Google DeepMind and Isomorphic Labs) to ESM3 (EvolutionaryScale) to Tx-LLM (Google DeepMind). But the sheer complexity of biology is unmatched, and the data available to train these models are scarce compared with what natural language offers.

AlphaFold 3 predicts the structure of proteins, DNA, RNA, ligands… and how they interact
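To get a feel for what these biology LLMs actually do, here is a minimal sketch using ESM-2, ESM3's open-source predecessor, via the fair-esm package; the amino-acid sequence below is purely illustrative. It turns a protein sequence into a numerical embedding, the kind of representation on which downstream discovery models are built.

```python
# Minimal sketch: embedding a protein sequence with ESM-2 via fair-esm.
# Assumes: pip install fair-esm torch
import torch
import esm

# Load a pretrained protein language model and its tokenizer ("alphabet")
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # inference only

# A purely illustrative amino-acid sequence
data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
labels, strs, tokens = batch_converter(data)

with torch.no_grad():
    # Per-residue representations from the final (33rd) transformer layer
    out = model(tokens, repr_layers=[33])
per_residue = out["representations"][33]  # shape: (1, seq_len + 2, 1280)

# Average over residues (skipping the BOS/EOS tokens) to get a single
# vector per protein, usable as input to downstream property predictors
protein_embedding = per_residue[0, 1 : len(strs[0]) + 1].mean(dim=0)
print(protein_embedding.shape)  # torch.Size([1280])
```

Embeddings like this let a platform score candidate sequences in silico before anything is synthesized, which is where much of the promised efficiency gain is supposed to come from.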

Keeping this complexity in mind, it is difficult to believe that artificial intelligence can address all the challenges of drug discovery and development at once. Perhaps the question we should ask ourselves is where in the R&D pipeline (target identification, hit-to-lead, toxicology…) we expect AI to have the biggest impact, and then focus on extracting the most value possible there.

Written by Philippe Peltier with contributions from Benjamin Belot


Benjamin Belot

Partner at Kurma Partners, investing in early-stage healthtech & techbio across Europe. Passionate about healthcare, geeky about music, emotional about football.