Systematised ‘quant’ venture in the sciences.
Imagine an approach to venture in which models can predict how likely a given strategy, or venture, is to achieve the desired outcome. This isn’t magic: it’s how a lot of PE and hedge funds work. In venture it’s the holy grail, yet it sits in complete contrast to the real world of thousands of pitch decks, power law returns, outcomes varying from lower to upper quartile by vintage, persistent bias (female founders still make up only 2% of deals despite better deal performance) and the fact that success appears to be largely a function of initial luck and compounding returns.
A couple of weeks ago, Ryan Caldbeck published an excellent thread on this on Twitter, and Max Niederhofer followed up with insight on what has and hasn’t worked at Sunstone so far. This is something we’ve been thinking about for nearly 10 years. The reason is that, in science venture, it’s so incredibly evident that most people are neither working on the optimum approach to a given problem, nor on the most pressing problems in their applied fields, nor in an optimum team.
As such, I thought it might be interesting to kick off a multi-part series on the state of the art in quant-venture within tech and the journey we’re on at DSV towards systematising science venturing.
Early attempts at predictive models in VC
Early efforts in this area fall broadly into two categories.
- Firstly, what we’ll call “company metrics approaches”. Usually investing at seed or Series A, aggregating signals from publicly available metrics such as website popularity, social media engagement and funding trajectory.
- Secondly, what we’ll call “talent-focused metrics approaches”. Usually investing from pre-seed to growth stage, focusing primarily on track record and other predictive features of individual founders.
Company metrics approaches
- The old guard have all used some form of quantification for years: Greylock and Kleiner Perkins look for traction in App Store engagement and Twitter influencer mentions. Floodgate and DFJ look to identify patterns from their own past successes. SignalFire also claims to use similar metrics, although given the claimed £5m to build the platform one would imagine there’s a little more to it.
- EQT: Have built a PR-friendly algorithm called Motherbrain. It uses funding, web ranking, social media activity and the temporal pattern in a company’s key metrics, combined via both unsupervised and supervised machine learning. I wasn’t able to find any public information on the results of this approach.
- Google Ventures: Claims to have long used an algorithmic approach, incorporating academic literature, past experience and due diligence on companies and founders, but doesn’t make anything very insightful public other than to say that no single metric is significant, and that the secret sauce lies in both the data integration and sense-checking with human intuition and chemistry.
- Social Capital: Launched the Capital as a Service platform, which automatically analyses founder-completed surveys rather than pitches. It’s difficult to tease apart the outcomes from their other funds, but overall the results look impressive.
- CircleUp: Looks for indicators within consumer goods markets that quantify human heuristics, for example using computer vision to measure the shift towards clear packaging. Ryan gave a fantastic talk on Invest Like the Best breaking down why a quantified approach works for this vertical.
- InReach Ventures: Looks at hiring trajectory, website traffic and the problem being solved. They claim some early successes in terms of uncovering hidden gems and exits.
- Ironstone: Takes a completely opposing view to most in that it prioritises the predictive power of the market over that of the founders. No public data on results. Personally, I think this is a really interesting and underexplored area and will be covering it in a later post.
- NfX: Don’t bang the quant drum but do seem to be taking a very quantified approach, a good example being their recent form for ranking the potential of marketplace companies.
Talent-focused metrics approaches
- Correlation Ventures: Primarily look at the track record of entrepreneurs, investors and advisors under the hypothesis that reputations aren’t random. This is showing some early signs of success with some significant exits. Not too surprising given that it’s essentially scaling the principle of ‘team is the most important thing’ and riding a wave of other smart investors.
- Bloomberg Beta: Did some interesting work on predicting founders before they’ve started anything, looking at the career trajectories of founders pre-venture formation and then contacting 350 people on similar trajectories. The results aren’t that clear, but it looks like most of those people continued to be impressive but didn’t necessarily found anything, let alone have success with it.
- Sunstone: With an investment in Mattermark, Sunstone was primed to look at the data question in VC. Max has written a good summary of their approach here, which focused on CV data. Whilst career data appears somewhat predictive, it sounds like the approach was hindered by data availability, in that most founders don’t put ‘founder’ on their CV and most companies aren’t raising at any given time.
What’s next in quant VC?
Can we look to hedge funds for inspiration?
Compared to the hedge fund world, all of these approaches seem somewhat tame. Hedge funds look for ‘alpha’: proprietary data that technically isn’t proprietary (as regulations state that everyone must have access to the same information). This so-called ‘alternative data’ ranges from satellite imagery of cars in a shop’s car park to geolocation data (yep, that’s your data from those free products), sentiment and voice patterns in earnings calls, credit card transactions, product price fluctuations, political sentiment, company insider interviews and hundreds of other factors. Numerai, in particular, is doing some really interesting work crowdsourcing these insights and algorithms.
The data scarcity challenge
However, there’s one big gotcha. To identify any sort of relationship you need inputs (features of the company) and outputs (financial results). The inputs are relatively hidden in public markets but the outputs (stock price) are visible and occur thousands of times a day.
In private markets, both are obscured. On the input side, similarly to public markets, the question becomes one of either identifying secondary signals such as packaging design, employee retention or social engagement, or providing sufficient value that this data comes to you. On the output side, realisation events occur so infrequently that it becomes essential to correlate against meaningful intermediary events such as deals, growth and potentially funding rounds (although I’m sceptical of the auto-correlation there).
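To make that concrete, here is a minimal sketch of correlating observable company features against an intermediary milestone rather than a realisation event. The feature names, labels and model choice are all hypothetical placeholders, not a real dataset or an actual pipeline:

```python
# Hypothetical sketch: predict an intermediary outcome (e.g. "signed a first
# commercial deal within 24 months") from secondary signals, since true
# realisation events (exits) are far too rare to learn from directly.
# All feature names and data below are illustrative placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row is a company; columns are secondary signals, e.g.
# [employee_retention, hiring_growth, web_traffic_growth, founder_track_record]
X = rng.random((200, 4))
# 1 = reached the intermediary milestone, 0 = did not (synthetic labels).
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 200) > 1.0).astype(int)

model = GradientBoostingClassifier()
# With so few outcome events, cross-validated performance is the honest
# number to look at rather than a single train/test split.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC AUC across folds: {scores.mean():.2f}")
```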
Increasingly sector-focused approaches
There’s little question that there are still ample ways to discover great companies ahead of the pack and to predict and improve performance. The challenge is that, to work, this has to come from deep understanding within a given vertical, in order to quantify both the heuristics that matter and the interim (pre-exit) end-points to correlate against.
Ryan at CircleUp makes the point that this works well within consumer food products because the business model is largely the same, the data is abundant (literally on every corner) and the outcomes are fairly regular (PE buys early, shop purchasing decisions are visible). However, what if you want to focus on enterprise SaaS more broadly? Those buyers vary considerably, and ‘applied AI’ even more so.
Other nascent markets, e.g. crypto or computational biology, don’t really have any benchmarks yet, although they definitely have correlates. One approach could be to look at markets that are very much quantifiable at the public level and extend those features back to earlier, private stages. Examples with well-understood dynamics include energy, fashion, music, real estate, commodities, financial services and travel. I’m not saying that any of those are great for venture, however.
Science venture just might be one of those niches
Science venture, whether life science, computational hardware, materials or other areas, exhibits many of the features that could support a quantified approach.
Quantifiable market risk
Business models are largely the same across a buyer group and limited in number (e.g. pharma, chemical companies, electronic goods companies). They are also unusually stable for extended periods of time, and it is possible to unpick repeatable drivers from first-hand conversations with buyers and from historic outcomes.
More importantly, there are very clear, almost binary, boundaries: if certain performance criteria are met, most customers will adopt according to well-characterised early-to-late adopter profiles. This is vastly different from the competitive and dynamic markets in consumer or enterprise. For example, the power density required for an automotive manufacturer to adopt a new battery chemistry, the price point for a chemical company to switch processes, or the in vivo data that would convince a pharma company of a drug’s efficacy.
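As a toy illustration of that gating logic (the criteria and numbers below are invented placeholders, not real buyer thresholds):

```python
# Toy sketch of "almost binary" market risk: a buyer group adopts only once
# every performance criterion crosses its threshold, rather than trading off
# a weighted average. All figures are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str
    threshold: float         # the level the buyer group requires
    measured: float          # what the venture has demonstrated so far
    higher_is_better: bool = True

    def met(self) -> bool:
        if self.higher_is_better:
            return self.measured >= self.threshold
        return self.measured <= self.threshold


def meets_adoption_criteria(criteria: list[Criterion]) -> bool:
    """Adoption is gated on every criterion being satisfied."""
    return all(c.met() for c in criteria)


# e.g. a hypothetical new battery chemistry pitched at automotive buyers
battery = [
    Criterion("power density (W/kg)", threshold=300, measured=280),
    Criterion("cycle life (cycles)", threshold=1000, measured=1200),
    Criterion("cost ($/kWh)", threshold=100, measured=120, higher_is_better=False),
]
print(meets_adoption_criteria(battery))  # False: two criteria are still unmet
```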
Technical risk can also be quantified
Whilst technology risk in ‘tech’ is close to zero, in science venture it is a principal cause of failure. Firstly, we must separate fundamental discovery from engineering science (in any domain, not just engineering). The majority of fundamental discoveries arise serendipitously, not from programmes that set out to find them. If the science, or any part of it, hasn’t been achieved before, it is almost impossible to quantify or predict, and it often takes more than 30 years to reach market readiness (scalability, reliability, safety).
Unfortunately, many science ventures are still in the discovery phase post-funding because research with a commercial direction is so poorly catered for within the research funding ecosystem. However, science venture done right is closer to reconstituting existing, proven science into a usable product.
Technical engineering risk is largely quantifiable right through from the first principles of a system (physics, chemistry) to high-level experiments. Note quantifiable, not non-existent. Take, for example, the probability of being able to modify a cell in a certain way: one can consider whether it has been achieved in a similar cell, how frequently it has been demonstrated, how trusted or respected the labs which developed the precedents are, which factors made a difference to repeatability in those experiments, and whether it has been demonstrated within a realistic model or, better, in humans.
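As a hedged illustration of how those precedent questions might be rolled up into a single confidence score (the factors, scores and weights below are hypothetical and purely illustrative, not an actual scoring methodology):

```python
# Hypothetical sketch: combining precedent-based evidence about a proposed
# cell modification into one technical-confidence score. Every number here
# is an illustrative placeholder.

# Each factor is scored 0-1 from literature review and lab conversations.
precedent_factors = {
    "achieved_in_similar_cell_type": 0.8,     # done before in a related cell?
    "frequency_of_demonstration": 0.6,        # how often has it been replicated?
    "credibility_of_source_labs": 0.9,        # track record of the labs involved
    "repeatability_drivers_understood": 0.5,  # do we know what made it work?
    "demonstrated_in_realistic_model": 0.3,   # animal model or, better, humans
}

# Weights would ideally reflect how strongly each factor has historically
# predicted success; here they are simply made up for illustration.
weights = {
    "achieved_in_similar_cell_type": 0.30,
    "frequency_of_demonstration": 0.20,
    "credibility_of_source_labs": 0.15,
    "repeatability_drivers_understood": 0.15,
    "demonstrated_in_realistic_model": 0.20,
}

confidence = sum(precedent_factors[k] * weights[k] for k in precedent_factors)
print(f"Technical confidence score: {confidence:.2f}")  # 0.63 on these numbers
```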
Similarly, for a materials scale-up play leveraging some form of automation, we can assess the similarities to other materials that have and haven’t scaled, the fundamental chemical and physical principles that affect the system, and whether introducing automation fundamentally changes these or simply increases speed. The confidence horizon is quite clearly delineated by the presence of existing information.
The really exciting thing about a more quantified approach is that it points to the possibility of a definable and addressable ‘Holy Grail’ in a given area: a particular market and technical approach which, if achieved, should dominate a sector for a long time. This potential to find the optimum team, product and approach is incredibly exciting to us, as we believe that a key hindrance to progress is that far too many brilliant people are working incredibly hard in local optima rather than taking the most effective route to impact.
If science venture is so quantifiable, why isn’t this the standard approach to building and investing?
All of the factors described above should inform both the formation of the venture in the first place and a solid DD process for investors. In reality, however, that’s rarely the case: both processes are typically a journey of seeking confirmation of an idea or technology rather than identifying the global optimum amongst the space of potential market and technical approaches.
My guess would be that this is because:
- Science venture has evolved from a tech-push approach, i.e. not a focus on the opportunity but the commercialisation of a given, very specific, research-derived technology. This is changing, but it’s still more about the ideas of venture partners than a systematic discovery process.
- The key market criteria are surprisingly hard to identify before you already carry the bias of a given approach, due to the difficulty of accessing R&D-heavy industries.
- Technical risks are also tough to identify, given the low reproducibility of papers and deliberately obscure methods sections.
However, given the potential impact on the upside of taking a more quantified approach, isn’t it at least worth trying?
As ever, I would love to bounce ideas around with anyone else thinking about this area. Feel free to drop me a line at mark@dsv.io