Meet SLAMcore’s CEO, Owen Nicholson

SLAMcore co-founder and CEO, Owen Nicholson

Any platform that needs to operate autonomously in an environment, whether it's a drone, a car, or a virtual reality (VR) or augmented reality (AR) headset, faces a universal challenge: how to accurately map an unknown location and orient itself within it. Enter our portfolio company, SLAMcore.

The SLAM in their name stands for “Simultaneous Localization and Mapping,” and they are developing a robust solution that enables inanimate objects to have spatial awareness. SLAMcore’s spatial artificial intelligence (AI) technology uses algorithms that fuse information from multiple sensors into a single version of the truth, while reducing power consumption, to allow autonomous systems to move through the world efficiently.

Based in the United Kingdom, SLAMcore got its start at Imperial College London, and spun out of the university in early 2016. In this interview, the company’s co-founder and CEO Owen Nicholson talks about their technology, what it’s like to move from academia to the startup world, and the radical possibilities of their spatial AI technology.

Tell us more about the problem that SLAMcore is addressing, and how you are going about solving it.

SLAMcore designs and optimises algorithms that allow VR/AR headsets, drones, robots or autonomous cars to accurately calculate their position in 3D space, and make sense of the world as they move through it. We call it “spatial AI.”

SLAMcore’s specialty is our ability to build algorithms that run on low-power, low-cost hardware, whilst still delivering the robust performance required. Current solutions are very temperamental and can easily result in the system getting “lost” if it moves too fast, or the environmental conditions are not quite right. This can result in a drone crashing, a car not seeing a cyclist with the sun behind them, or a VR/AR experience being shattered as the virtual objects stop responding to your movements.

SLAM is the "chicken or egg" problem faced by any platform with onboard sensors that is placed in a space it has never seen before and asked to perform some sort of task. The first thing the platform needs to do is figure out where it is within the space, and how far away the surrounding objects are. Ideally, it would have an accurate map, and the system could compare what it currently sees to this map in order to estimate its position. The problem is that it has never been here before, so no such map exists; the only option is to build one on the fly.

A SLAM system takes the information from the sensors onboard the platform and attempts to build a map whilst, at the same time, estimating its position within it. It does this by essentially making guesses and keeping a close eye on the level of confidence it has in each measurement. As the platform moves around the environment, it gathers more and more data about the positions of things as they shift relative to each other. This new information is added to the map and, over time, the uncertainty in the locations of the map's elements decreases.
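To make that "guess, then refine" idea concrete, here is a toy Python sketch (illustrative only, not SLAMcore's code): a single landmark position is repeatedly fused with noisy measurements, and its variance, the system's uncertainty, shrinks with every observation.

```python
# Toy illustration (not SLAMcore's code): fusing repeated noisy
# measurements of one landmark's position. Each measurement shrinks
# the variance, mirroring how a SLAM map becomes more certain as the
# platform gathers more data.

def fuse(est, var_est, meas, var_meas):
    """Combine the current estimate with a new measurement,
    weighting each by its confidence (inverse variance)."""
    k = var_est / (var_est + var_meas)   # Kalman-style gain
    new_est = est + k * (meas - est)
    new_var = (1.0 - k) * var_est
    return new_est, new_var

# Start highly uncertain, then observe the landmark a few times.
est, var = 0.0, 100.0                    # wide prior: we know nothing
for meas in [2.1, 1.9, 2.05, 2.0]:       # noisy observations
    est, var = fuse(est, var, meas, var_meas=0.5)
    print(f"estimate={est:.3f}, variance={var:.4f}")  # variance drops
```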

Let’s dive into the technology. Can you explain visual SLAM, and how SLAMcore’s approach is unique?

Visual SLAM is the process of solving the SLAM problem by using a camera or camera-like sensor. You can build a 3D map of the world with just a single camera. It may sound odd, but if you think about it, we are able to do a pretty good job of estimating the position of things with one eye closed. This is because we make use of the fundamental mathematical principle of parallax.
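As a back-of-the-envelope illustration of parallax (with made-up numbers): if a camera translates sideways by a known baseline, a feature's apparent shift in the image, its disparity, reveals depth through the standard relation Z = f · b / d.

```python
# Hypothetical numbers, purely for illustration of the parallax relation.
focal_px = 500.0     # focal length in pixels (assumed)
baseline_m = 0.10    # how far the camera moved sideways (assumed)
disparity_px = 5.0   # how far the feature shifted in the image

depth_m = focal_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.1f} m")  # -> 10.0 m
```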

A visual SLAM system starts with the first frame from a camera. It analyses this 2D image to identify small areas of high contrast, or corners of objects, known as "features." The camera then moves and captures another frame. Features are identified again and compared to the previous frame to see which features have moved. Using those principles of parallax and triangulation, we can start to estimate where these features actually are with respect to the camera, and how much the camera has moved. The more the camera moves, the more measurements it has to work with, which increases the confidence that its estimates are correct. This approach was first demonstrated by Professor Davison back in 2003, when he released MonoSLAM, the first real-time, single-camera SLAM system.
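For readers who want to see the shape of that pipeline, here is a minimal sketch using OpenCV. It is a generic front-end, not SLAMcore's system, and it assumes two 8-bit grayscale frames plus known 3×4 projection matrices P0 and P1; a real SLAM system must estimate those camera poses as well.

```python
import cv2

def track_and_triangulate(frame0, frame1, P0, P1):
    # 1. Find small, high-contrast corner "features" in the first frame.
    pts0 = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                                   qualityLevel=0.01, minDistance=8)
    # 2. Track where those features moved in the second frame
    #    (pyramidal Lucas-Kanade optical flow).
    pts1, status, _err = cv2.calcOpticalFlowPyrLK(frame0, frame1, pts0, None)
    ok = status.ravel() == 1             # keep only successfully tracked points
    good0 = pts0[ok].reshape(-1, 2)
    good1 = pts1[ok].reshape(-1, 2)
    # 3. Triangulate: the parallax between the two views gives each
    #    feature's 3D position relative to the cameras.
    pts4d = cv2.triangulatePoints(P0, P1, good0.T, good1.T)
    return (pts4d[:3] / pts4d[3]).T      # homogeneous -> N x 3 points
```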

One of the most interesting advances in visual SLAM came last year, when the SLAMcore team built the first real-time 3D SLAM system to use a novel sensor called the event camera. This sensor works more like a biological eye, and is often referred to as a "silicon retina." Rather than creating a sequence of video frames, each pixel is independent and only transmits information when it detects a change in light intensity. This new sensor continues to work where standard cameras fail: in scenes with extreme lighting conditions and during fast, aggressive motion.
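A toy model of that behaviour (illustrative, not tied to any specific device): a pixel emits an event only when its log-intensity changes by more than a contrast threshold, rather than reporting full frames.

```python
import numpy as np

def events_between(frame_prev, frame_next, threshold=0.2):
    """Return (row, col, polarity) tuples for pixels whose log-intensity
    changed by more than `threshold` between two float image arrays."""
    eps = 1e-6  # avoid log(0) on dark pixels
    delta = np.log(frame_next + eps) - np.log(frame_prev + eps)
    rows, cols = np.nonzero(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)  # +1 brighter, -1 darker
    return list(zip(rows, cols, polarity))
```

Pixels that see no change stay silent, which is why such a sensor copes with scenes that saturate or blur a conventional frame-based camera.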

How can SLAMcore’s technology help in everyday life?

SLAM algorithms exist in pretty much all autonomous drones and cars, robots, and VR/AR systems. They are responsible for telling these platforms where they are as they move through space; without them, the systems would be incapable of performing any task that requires spatial perception. For example, drones with poor SLAM algorithms will be unable to fly aggressively, or when lighting conditions are not perfect. VR or AR headsets with poor SLAM algorithms will only work if you move your head slowly, or in very controlled lighting conditions. Autonomous cars with poor SLAM systems will eventually fail, leaving the car unsure of where it is relative to the world around it.

Our focus is on delivering solutions that have been optimised to be as robust as possible. By empowering these inanimate objects with spatial awareness, we can make sure that these platforms will be less prone to getting lost — and we can greatly improve the user experience.

Your co-founder, Professor Andrew Davison, wrote a great blog post where he talked about three levels of SLAM. Will you explain the significance of these different levels, and what it means for the work you’re doing at SLAMcore?

The ultimate SLAM solution knows exactly where the platform is, where the objects around it are and, importantly, what those objects actually are. This is the highest level of spatial intelligence and the key to unlocking some truly world-changing products. But you cannot jump straight to that point. At SLAMcore, we have thought long and hard about the steps needed to reach this advanced level. We have identified three key capabilities of the ultimate SLAM system, each of which is required to unlock the next (a rough sketch of each level's output follows the list):

  • A Level 1 SLAM system is one that is simply able to calculate its position relative to a generic coordinate system. The output of a Level 1 system is six numbers (the x, y and z coordinates, plus the rotation angle around each of these axes). This is the fundamental foundation of all SLAM systems.
  • A Level 2 SLAM system outputs the same six-degrees-of-freedom pose as Level 1, but this time it also outputs a dense surface map of the world. This map is a geometrically accurate representation of the space around the platform and can enable a much richer product experience.
  • A Level 3 SLAM system is where things get really interesting. Up to this point, we know where the platform is and what the surface geometry of the world looks like, but the system has no real understanding of what is actually present in the environment. Level 3 SLAM systems introduce the concept of semantic understanding, where the map created by the Level 2 system actually has meaning. Instead of just a surface mesh, the individual objects are segmented from each other, and the system now knows which is the floor, which is the wall, which is a person and which is the road. This enables the richest product experiences.
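One way to picture the three levels is as nested data, where each level's output contains everything from the level below. This is a hypothetical sketch, not SLAMcore's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class Level1Output:
    # Six degrees of freedom: position plus rotation about each axis.
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

@dataclass
class Level2Output(Level1Output):
    # Adds a dense, geometrically accurate surface map.
    mesh_vertices: list = field(default_factory=list)  # [(x, y, z), ...]
    mesh_faces: list = field(default_factory=list)     # vertex-index triples

@dataclass
class Level3Output(Level2Output):
    # Adds semantic meaning: a label ("floor", "wall", ...) per face.
    face_labels: list = field(default_factory=list)
```

The nesting mirrors the point above: you cannot attach meaning to geometry you have not mapped, and you cannot map geometry without a reliable pose.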

What is your dream application for SLAM technology? What are some of the possible uses?

This is a really hard question. It's like asking what the dream application of an artist's paintbrush or a surgeon's scalpel is. A SLAM solution is a tool that provides a general capability: it allows physical objects to position themselves, navigate, and interact with the real world. This has so many potential applications that it would be impossible to guess, at this stage, which ones will have the biggest impact.

Drones for delivery, autonomous cars for taxis, VR for work meetings, AR glasses instead of mobile phones or laptops — these are all incredibly exciting examples of SLAM-enabled products, but they are only really replacing existing services. I firmly believe in Amara’s law which states, “we tend to overestimate the effect of a technology in the short run, and underestimate the effect in the long run.” The real SLAM-enabled applications that will change the world are ones that I don’t think we have even thought of yet. How exciting is that?

SLAMcore is a spin-out from Imperial College London. Do you continue to partner with Imperial College in any way?

Yes, it was at Imperial College that I met the SLAMcore founding team. We were all either working alongside or supervised by Professor Davison, who leads Imperial's Robot Vision Research Group. We are still very close to Imperial College and have a great relationship: the university has equity in the company and a seat on the board. Two members of the founding team continue their roles as lecturers at Imperial, so it was really important from the start that we took the university along with us in this venture.

What advice would you give to others who are working on technology within a university setting and want to launch a startup to commercialize their research?

If I were to try and succinctly summarise what I have learned from working and negotiating with many universities from around the world, I would highlight the following three points:

  1. Make sure your technology is actually solving a problem. And, if it is, does it solve the problem better or cheaper (or both) than the alternatives on the market now, or those that may appear in the future? This is so often overlooked by university startups.
  2. The amount of work required to take an academic concept through to commercial reality is huge, and quite often vastly underestimated. The upshot is that the university should be realistic about how valuable the intellectual property (IP) actually is, if there is any, and the company should be realistic about how long it will take, and how much it will cost, to hit its milestones.
  3. Finally, I would really encourage any potential university startup not to ignore the IP situation. Seek advice as early as you can. You can get a lot of free advice from lawyers…but the best advice will come from people who have done this before and have the scars to prove it. IP can get very messy, very quickly, so it should not be ignored. Quite often a concept core to the business is technically the university's IP, and if you do not address access rights early on, you will at best scare off future investors or, at worst, discover you are unable to go to market without breaking the law. Remember, when it comes to agreeing on numbers, it is a negotiation, and there are good resources out there showing what the market norms are. Don't be a pushover, but bear in mind you are going to have to work with the university after the negotiations are complete.

What’s your next move? What are some “big picture” problems that SLAMcore is tackling?

Many people in the tech world believe that it's just a matter of waiting for a few more clicks of Moore's Law, and AR/VR, robotics and autonomous vehicles will be ready for the mass market. Unfortunately, that's just not the case. To truly deliver the most robust experience on hardware that is both low-cost and low-power, we need to look at how the hardware and algorithms work together, and design all the elements as one.

Future SLAM systems will bring the sensor, processor and algorithms ever closer together. Today, we are building systems on hardware that was designed for completely different jobs. The camera in your phone has not been optimised for SLAM, but to give you the best possible selfie. The CPU is designed for general-purpose processing, and the GPU has been heavily optimised for graphics. We are essentially using the wrong tools for the job.

At SLAMcore, we are actively pursuing this vision and are looking to work closely with both sensor and processor companies to build the next-generation SLAM systems for spatial AI. That’s the key to delivering 24-hour, lightweight AR glasses that never get lost. It’s the key to drones that navigate at incredible speeds, or robots that can operate with 100% reliability in all environments.

There’s a huge amount of work to do, and we believe strongly that it is not a challenge that a single company can solve, no matter how deep their pockets. The best solutions will come from a number of different companies working closely together. Our investors, Toyota AI Ventures, have already been a real asset in helping to identify new partners and navigate a complex supply chain. We are building a company focused on delivering the best possible SLAM algorithm. By working with others who are experts in their respective fields, together, we can start to enable products that will truly change the way in which we live our lives.