Systematised ‘quant’ venture in the sciences.

Mark Hammond
Deep Science Ventures
8 min read · Nov 8, 2018


Imagine an approach to venture in which there are models that can predict how likely a given strategy, or venture, is to achieve the desired outcome. This isn't magic; it is how a lot of PE and hedge funds work. In venture it's the holy grail, yet it sits in complete contrast to the real world of thousands of pitch decks, power-law returns, outcomes varying from lower to upper quartile by vintage, persistent bias (female founders still make up only 2% of deals despite better deal performance) and the fact that a large part of success appears to be a function of initial luck and compounding returns.
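
To make that concrete: at its simplest, such a model is a classifier mapping venture features to an outcome probability. Here is a minimal sketch in Python; the features, dataset and numbers are entirely hypothetical and illustrate only the shape of the approach, not any fund's actual model.

```python
# Minimal sketch: a classifier mapping venture features to an outcome
# probability. Dataset, features and labels are entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a historical deal: [founder prior exits, monthly growth rate,
# market-size score, technical-precedent score]
X = np.array([
    [1, 0.15, 0.8, 0.9],
    [0, 0.02, 0.3, 0.4],
    [2, 0.10, 0.6, 0.7],
    [0, 0.20, 0.9, 0.2],
])
y = np.array([1, 0, 1, 0])  # 1 = achieved the desired outcome (e.g. exit)

model = LogisticRegression().fit(X, y)

# Score a prospective venture on the same features.
prospect = np.array([[1, 0.12, 0.7, 0.8]])
print(model.predict_proba(prospect)[0, 1])  # estimated success probability
```

The hard part, as the rest of this piece argues, is not the model but getting features and outcome labels that mean anything.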

A couple of weeks ago, Ryan Caldbeck published an excellent thread on this on Twitter, and Max Niederhofer followed up with insight on what has and hasn't worked at Sunstone so far. This is something we've been thinking about for nearly 10 years. The reason is that, in science venture, it's so incredibly evident that most people are not working on the optimum approach to a given problem, nor on the most pressing problems in their applied fields, nor in an optimum team.

As such, I thought it might be interesting to kick off a multi-part series on the state of the art in quant-venture within tech and the journey we’re on at DSV towards systematising science venturing.

Early attempts at predictive models in VC

Early efforts in this area fall broadly into two categories.

  1. Firstly, what we'll call "company metrics approaches": usually investing at seed or Series A, aggregating signals from publicly available metrics such as website popularity, social media engagement and funding trajectory (see the sketch after this list).
  2. Secondly, what we'll call "talent-focused metrics approaches": usually investing from pre-seed to growth stage, focusing primarily on track record and other predictive features of individual founders.
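
As a sketch of the first category, public signals might be normalised and combined into a single composite score per company. All signal names, numbers and weights below are hypothetical.

```python
# Hypothetical "company metrics" signal: normalise a few public metrics
# and combine them into one composite score per company.
import numpy as np

# One row per company; columns (all illustrative): monthly website visits,
# social media engagement, funding trajectory (rounds per year).
signals = np.array([
    [120_000, 4500, 2.0],
    [ 15_000,  800, 0.5],
    [ 60_000, 3000, 1.2],
], dtype=float)

# Z-score each metric so they are comparable, then take a weighted average.
# The weights are arbitrary here; in practice they would be fitted.
z = (signals - signals.mean(axis=0)) / signals.std(axis=0)
weights = np.array([0.4, 0.3, 0.3])
scores = z @ weights
print(scores.argsort()[::-1])  # companies ranked by composite signal
```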

Company metrics approaches

Talent-focused metrics approaches

What’s next in quant VC?

Can we look to hedge funds for inspiration?

Compared to the hedge fund world, all of these approaches seem somewhat tame. Hedge funds look for "alpha": proprietary insight drawn from data that technically isn't proprietary (regulations require that everyone has access to the same information). This so-called 'alternative data' ranges from satellite imagery of cars in a shop's car park, geolocation data (yep, that's your data from those free products), sentiment and voice patterns in earnings calls, credit card transactions, product price fluctuations and political sentiment to company insider interviews and hundreds of other factors. Numerai, in particular, is doing some really interesting work crowdsourcing these insights and algorithms.
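
For a flavour of how alternative data gets used, here is a toy "nowcast" in the spirit of the car-park example: fit reported revenue against satellite car-park counts, then project the quarter that hasn't been reported yet. All numbers are invented for illustration.

```python
# Toy alternative-data "nowcast": fit reported quarterly revenue against
# satellite car-park counts, then project the quarter not yet reported.
# All numbers are invented for illustration.
import numpy as np

car_counts = np.array([1200, 1350, 1100, 1500, 1420])  # avg cars/day observed
revenue = np.array([31.0, 34.2, 28.5, 38.1, 36.0])     # reported revenue, $m

slope, intercept = np.polyfit(car_counts, revenue, 1)

latest_count = 1600  # this quarter's satellite observation
print(f"revenue nowcast: ${slope * latest_count + intercept:.1f}m")
```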

The data scarcity challenge

However, there’s one big gotcha. To identify any sort of relationship you need inputs (features of the company) and outputs (financial results). The inputs are relatively hidden in public markets but the outputs (stock price) are visible and occur thousands of times a day.

In private markets, both are obscured. On the input side, similarly to public markets, the question becomes one of either identifying secondary signals such as packaging design, employee retention and social engagement, or providing sufficient value that this data comes to you. On the output side, realisation events occur so infrequently that it becomes essential to correlate against meaningful intermediary events such as deals, growth and potentially funding rounds (although I'm sceptical of the auto-correlation there).
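
One way to sketch that output-side workaround: since exits are rare, train against an intermediary label instead. The features, label definition and data below are all hypothetical.

```python
# Sketch of training against intermediary events rather than rare exits.
# Features, the label definition and the data are all hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Features observed at investment time (e.g. employee retention, packaging
# score, social engagement); label = hit a meaningful interim milestone
# (major deal signed, growth threshold crossed) within 24 months.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict_proba(X[:5])[:, 1])  # probability of an interim milestone
```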

Increasingly sector focused approaches

There’s little question that there are still ample ways to discover great companies ahead of the pack and to predict and improve performance. The challenge is that to work this has to come from a perspective of deep understanding within a given vertical to quantify both the heuristics that matter, and which interim (pre-exit) end-points to correlate against.

Ryan at CircleUp makes the point that this works well within consumer food products because the business model is largely the same, the data is abundant (literally on every corner) and the outcomes are fairly regular (PE buys early; shop purchasing decisions are visible). But what if you want to focus on enterprise SaaS broadly? Those buyers vary considerably, and in 'applied AI' even more so.

Other nascent markets, e.g. crypto or computational biology, don't really have any benchmarks yet, although they definitely have correlates. One approach could be to look at markets that are highly quantifiable at the public level and extend those features back. Examples with well-understood dynamics include energy, fashion, music, real estate, commodities, financial services and travel. I'm not saying that any of those are great for venture, however.

Science venture just might be one of those niches

Science venture, whether life science, computational hardware, materials or other areas, exhibits many of the features that could support a quantified approach.

Quantifiable market risk

Business models are largely the same across a buyer group and limited in number (e.g. pharma, chemical companies, electronic goods companies); they are also unusually stable over extended periods, and it is possible to unpick repeatable drivers from first-hand conversations with buyers and from historic outcomes.

More importantly, there are very clear, almost binary, boundaries: if certain performance criteria are met, most customers will adopt according to well-characterised early/late adopter profiles. This is vastly different from the competitive and dynamic markets in consumer or enterprise. For example, the power density required for an automotive manufacturer to adopt a new battery chemistry, the price point at which a chemical company will switch processes, or the in vivo data that would convince a pharma company of a drug's efficacy.
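
That kind of boundary lends itself to a simple model: an adoption curve centred on the buyer group's threshold. Here is a sketch for the battery example; the threshold, spread, units and functional form are all assumptions, not measured values.

```python
# Adoption modelled as a logistic curve centred on the buyer group's
# performance threshold. Threshold, spread and numbers are assumptions.
import numpy as np

def adoption_fraction(performance, threshold, spread):
    """Fraction of the buyer group expected to adopt at a given
    performance level, spread over an early/late adopter curve."""
    return 1.0 / (1.0 + np.exp(-(performance - threshold) / spread))

# e.g. automotive buyers adopting a new battery chemistry by power density
power_density = np.array([400, 600, 800, 1000])  # W/kg, illustrative
print(adoption_fraction(power_density, threshold=800, spread=80))
```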

Technical risk can also be quantified

Whilst technology risk in 'tech' is close to zero, in science venture it is a principal cause of failure. Firstly, we must separate fundamental discovery from engineering science (in any domain, not just engineering). The majority of fundamental discoveries arise serendipitously, not from programmes that set out to find them; if the science, or any part of it, hasn't been achieved before, the risk is almost impossible to quantify or predict, and such discoveries often take more than 30 years to reach market readiness (scalability, reliability, safety).

Unfortunately, many science ventures are still in the discovery phase post-funding because research with a commercial direction is so poorly catered for within the research funding ecosystem. However, science venture done right is closer to reconstituting existing, proven science into a usable product.

Technical engineering risk, by contrast, is largely quantifiable right through from the first principles of a system (physics, chemistry) to high-level experiments. Note quantifiable, not non-existent. Take, for example, the probability of being able to modify a cell in a certain way: one can consider whether it has been achieved in a similar cell, how frequently this has been demonstrated, how trusted and respected the labs which developed the precedents are, which factors made a difference to repeatability in those experiments, and whether it has been demonstrated in a realistic model or, better, in humans.
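
Those questions can be turned into a crude precedent score. The factors and weights below are illustrative assumptions, not calibrated values; in practice they would be fitted against historical programme outcomes.

```python
# Crude precedent score built from the questions above. Factors and
# weights are illustrative assumptions, not calibrated values.
def precedent_score(similar_cell_demonstrated: bool, n_replications: int,
                    lab_credibility: float, shown_in_vivo: bool) -> float:
    """Return a 0-1 confidence that a cell modification will transfer."""
    score = 0.3 if similar_cell_demonstrated else 0.0
    score += min(n_replications, 5) / 5 * 0.3  # saturates at 5 replications
    score += 0.2 * lab_credibility             # 0-1 peer-assessed credibility
    score += 0.2 if shown_in_vivo else 0.0
    return score

print(precedent_score(True, n_replications=3, lab_credibility=0.8,
                      shown_in_vivo=False))
```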

Similarly, for a materials scale-up play leveraging some sort of automation, we can assess the similarities to other materials that have and haven't scaled, the fundamental chemical and physical principles that affect the system, and whether introducing automation fundamentally changes these or simply increases speed. The confidence horizon is quite clearly delineated by the presence of existing information.

The really exciting thing about a more quantified approach is that it points to the possibility of a definable and addressable 'holy grail' in a given area: a particular market and technical approach which, if achieved, should dominate a sector for a long time. This potential to find the optimum team, product and approach is incredibly exciting to us, as we believe that a key hindrance to progress is that far too many brilliant people are working incredibly hard in local optima rather than taking the most effective route to impact.

If science venture is so quantifiable, why isn't it the standard approach to building and investing?

All of the factors described above should feed into forming the venture in the first place and into a solid DD process for investors. In reality, that's rarely the case: both processes are typically a journey of seeking confirmation of an idea or technology rather than identifying the global optimum amongst the space of potential market and technical approaches.

My guess would be that this is because:

  a) Science venture has evolved from a tech-push approach, i.e. not a focus on the opportunity but commercialisation of a given, very specific, research-derived technology. This is changing, but it's still more about the ideas of venture partners than a systematic discovery process.
  b) The key market criteria are surprisingly hard to identify before you already carry the bias of a given approach, because R&D-heavy industries are difficult to access.
  c) Technical risks are also tough to identify, given the low reproducibility of papers and deliberately obscure methods sections.

However, given the potential impact on the upside of taking a more quantified approach, isn't it at least worth trying?

As ever, I would love to bounce ideas around with anyone else thinking about this area; feel free to drop me a line at mark@dsv.io.

