3 trends to watch in Computer Vision
For quite some time now, we’ve been tracking applied AI companies (I wrote this post about our investment thesis some time ago), especially in the Computer Vision space. This has led us to invest in a few companies such as SuperAnnotate (why we invested here) and Intenseye, which we announced 3 weeks ago (Ricardo shares more details there).
SuperAnnotate is building a core piece of infrastructure for any company developing computer vision software, which puts them at the centre of the broader CV ecosystem. They educate new joiners (see Vahan’s class on OpenCV here), but they are also equipping more advanced CV companies like Intellinair.
Last week, I sat down with Vahan to discuss trends they’re currently observing in the ecosystem. We thought we’d share 3 key ones we, at both Point Nine and SuperAnnotate, are very interested in. If you’d like to discuss them further or are building in these spaces, please feel free to reach out to me or to Vahan, we’d love to chat!
1. Building reliable earth observation using Synthetic Aperture Radar (SAR) Imagery
It’s now easier and cheaper than ever to access satellite images and to build specialized software on top of them for many industries. If you’re curious, you should check out this (awesome) website (keeptrack.space) that tracks satellites, their ownership and their position. Established players such as Airbus, Thales, NASA and the ESA all sell data, and a few companies have emerged to analyse it for industry-specific use cases, such as Orbital Insights in the US or Preligens in France.
That being said, so far most of them rely on satellite imagery from passive sensors such as electro-optical sensors or cameras, which have one major pitfall: they’re weather-dependent (night-time or cloud cover can make monitoring unreliable).
A new generation of companies now leverages SAR (Synthetic Aperture Radar) imagery. I won’t go into too many details here, but in short, the idea is to use moving, active sensors, i.e. radar, which emit waves with a longer wavelength than visible light, in order to build reliable images at night and in any weather conditions. This allows for the creation of images like this:
If you want to learn more about how it works, Capella Space, one of the pioneers in this space, explains more in this post called SAR 101. There are now 3 significant companies operating satellites to create use cases and resell SAR data: Capella Space in the US (raised $82M since 2017), Synspective in Japan (raised ca. $10M since 2018) and Iceye (raised $150M since 2015).
The fascinating thing is that now that these SAR images are available, and soon abundant, at an increasingly fine resolution (soon able to resolve objects the size of a human face), value can be created on top of them, opening up a myriad of new industry-specific use cases. If you’re building in this space, please reach out, we’d both love to learn more!
2. Building CV models and overfitting them to a specific camera/scenery
One of the main challenges for autonomous driving companies is to build very large generalized models that will perform well across different locations, countries, weather conditions, etc. While building such models requires millions of high-quality annotated images, the reality is that in some cases there is no need for such generic models.
There is an increasing number of CV problems where one needs to gather intelligence from a fixed, or only slightly changing, vantage point. Traffic management analysis is a perfect example of such an application, where each camera is fixed to a certain crossroad or intersection. Another use case would be the drone delivery space, where each drone is limited to delivering within a certain small neighbourhood.
Deep learning-based CV models tend to learn and quickly overfit to a particular scene. If your camera is fixed, you would rather build a slightly different model for each camera and overfit it to that camera’s view. Although overfitting has negative connotations in machine learning, in such cases you would end up with far more accurate models per fixed camera than with one generic model for all cameras.
The challenge with creating a CV model for each camera is building a scalable pipeline where a new model can be fine-tuned within days rather than months. If that happens, you’ll most likely want to automate your CV pipeline from initial data labelling to pushing your model to production — Vahan can help you do that, and I am happy to discuss an investment, because this is probably the right way to solve your challenge ;).
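To make the intuition behind per-camera overfitting concrete, here is a minimal, hypothetical sketch (not anyone's production pipeline) using NumPy and a toy logistic-regression "model". Two simulated cameras see the same features but with camera-specific label behaviour; a single generic model trained on pooled data struggles, while small models fine-tuned per camera do far better:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_camera_data(n, flip):
    """Simulate one camera: same features, camera-specific labelling."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    if flip:  # the second camera's scene inverts the decision rule
        y = 1 - y
    return X, y

def train_logreg(X, y, steps=500, lr=0.5):
    """Tiny logistic regression trained by gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean(((Xb @ w) > 0) == y)

X1, y1 = make_camera_data(500, flip=False)  # camera 1
X2, y2 = make_camera_data(500, flip=True)   # camera 2

# One generic model trained on pooled data from both cameras
w_generic = train_logreg(np.vstack([X1, X2]), np.concatenate([y1, y2]))

# One small model "overfit" per camera
w_cam1 = train_logreg(X1, y1)
w_cam2 = train_logreg(X2, y2)

generic_acc = (accuracy(w_generic, X1, y1) + accuracy(w_generic, X2, y2)) / 2
per_camera_acc = (accuracy(w_cam1, X1, y1) + accuracy(w_cam2, X2, y2)) / 2
print(f"generic: {generic_acc:.2f}, per-camera: {per_camera_acc:.2f}")
```

The toy setup exaggerates the effect (the two cameras disagree completely on the label rule), but the pipeline shape is the point: a shared training routine plus a cheap per-camera fine-tuning loop is what makes this approach scale.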
3. Using Computer Vision to go after old industries
In 2017, I looked at some of the most well-funded Applied AI companies and found that 65% of them (and 89% of the total amount of funds raised) were in Finance, Sales & Marketing, Healthcare, Transportation or Cybersecurity.
What we’re currently observing is that applied AI companies are now entering rather old industries like Agriculture or Health and Safety Compliance. Our recently announced investment in Intenseye is a great example. I have also always been fascinated by what Yasir and Saad are building at Connecterra with their next-gen, AI-powered farm management system for dairy farms. Intellinair is another very good example in the agriculture space: they’ve built dozens of models to help farmers monitor their crops.
As with B2B marketplaces, we tend to think that a key success factor in disrupting old industries is the combination of a tech background and a great understanding of the dynamics at play within these industries; they are often tough to disrupt as a complete outsider. Such founders can be really hard to find, but we’re convinced they have a high potential to build successful companies. If you’re one of them, please reach out, we’d love to learn more about your industry!