Vision for the Future

Announcing Scape’s ‘Vision Engine’, a Visual Positioning Service in London, and $8m in seed funding

Scape’s Vision Engine: a large-scale, image-based mapping and localization pipeline

Over the last year, the Scape Technologies team have been working to build a fundamentally new type of infrastructure that will allow computers to understand their environment, using a camera.

Today, we are coming out of stealth and are happy to announce that the company has raised $8m in seed funding, in a round closed last year with backing from LocalGlobe, Mosaic Ventures, Fly Ventures and Entrepreneur First.

In this post, we will dive into what makes this infrastructure unique and how it will enable previously impossible applications.

Scape Technologies’ Vision Engine

As explained in our first post, over the next few years we will all be entering a new era of ‘spatial computing’. Devices such as augmented reality headsets, autonomous vehicles and drones will need to operate safely alongside people, and consequently they will have to understand the physical world in more detail than ever before.

While GPS and the electronic compass are incredible technologies, they are not accurate enough to enable the next wave of computing.

That’s why for the last couple of years, we’ve been building a new type of infrastructure — a new category of map designed from the ground up for machines, allowing devices to precisely determine where they are and what’s around them, using only a camera.


To cope with the technical demands of this infrastructure, today we are announcing our ‘Vision Engine’: the technology that makes image-based location recognition possible.

Images go in, maps come out

Fundamentally, the Vision Engine is a world-scale mapping pipeline that processes images and videos, resulting in a machine-readable HD map of the environment, aligned to the physical world. Basically, ‘images go in, maps come out’.

Having built these HD maps, any camera device can then query our Vision Engine, using our ‘Visual Positioning Service’ API, to determine the device’s location with centimetre precision without relying on expensive sensors or components.
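To make the query flow concrete, here is a minimal sketch in Python of what handling a visual-positioning reply might look like. Everything below — the response fields, their units, and the `GeoPose` type — is an illustrative assumption for this post, not Scape’s actual API:

```python
import json
from dataclasses import dataclass

# Hypothetical response shape: a device uploads a camera frame and receives
# a geographic pose back. Field names and units are assumptions, chosen to
# mirror GPS output (WGS84 degrees) as described above.

@dataclass
class GeoPose:
    latitude: float    # degrees, WGS84 -- the same frame as GPS
    longitude: float   # degrees, WGS84
    heading: float     # degrees clockwise from true north

def parse_vps_response(body: str) -> GeoPose:
    """Parse a (hypothetical) Visual Positioning Service JSON reply."""
    data = json.loads(body)
    return GeoPose(
        latitude=data["latitude"],
        longitude=data["longitude"],
        heading=data["heading"],
    )

# Example reply for a device in central London (illustrative values):
reply = '{"latitude": 51.5074, "longitude": -0.1278, "heading": 92.5}'
pose = parse_vps_response(reply)
print(pose.latitude, pose.longitude)  # 51.5074 -0.1278
```

The key point is the output format: because the service answers in latitude and longitude rather than an arbitrary map frame, a response like this can drop straight into any pipeline that already consumes GPS fixes.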

Unlike other approaches, Scape’s Vision Engine builds, maintains and references our machine-readable maps in the cloud, allowing us to operate at an unprecedented scale.

Making our Visual Positioning Service available in London today

Scape’s Visual Positioning Service API

Today, we are enabling our Visual Positioning Service in London for AR applications, via our mobile SDK, ‘ScapeKit’. The SDK is currently available in alpha for the iOS, Android and Unity development platforms.

The service has been designed to be:

  • Accurate, determining a device’s location with centimetre-level accuracy
  • Highly scalable, so our service can be made available across entire cities
  • Fast, returning a device’s location in ~3 seconds on a 3G network
  • Robust to changes in the environment, significantly out-performing previous approaches
  • Geographically-aligned, providing a device’s location in longitude and latitude, just like GPS today
  • Private & secure, ensuring our system is GDPR compliant

We see the service initially being used in three distinctive areas:

Large-Scale Augmented Reality

Scape’s Visual Positioning Service allows AR content to be anchored to the world in precise locations, meaning that one day, games such as Fortnite or Minecraft could be played in the real world, using the environment to influence how the game is played around you.
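As a rough illustration of how geographic alignment makes world-anchored content possible, the sketch below converts the offset between a device’s reported position and a content anchor’s latitude/longitude into local east/north metres, using an equirectangular approximation. The function and coordinates are hypothetical, not part of ScapeKit:

```python
import math

# Illustrative sketch (not Scape's SDK): once a device knows its precise
# latitude/longitude, an AR engine can place content stored as geographic
# coordinates by converting the offset into local metres.

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def local_offset_m(device_lat, device_lon, anchor_lat, anchor_lon):
    """East/north offset in metres from device to anchor.

    Equirectangular approximation -- accurate enough over the
    city-block distances relevant to AR placement.
    """
    lat0 = math.radians(device_lat)
    d_lat = math.radians(anchor_lat - device_lat)
    d_lon = math.radians(anchor_lon - device_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(lat0)
    north = EARTH_RADIUS_M * d_lat
    return east, north

# An anchor roughly 100 m due north of a device in central London:
east, north = local_offset_m(51.5074, -0.1278, 51.5083, -0.1278)
print(round(east, 1), round(north, 1))  # 0.0 100.1
```

Centimetre-level positioning matters here because placement error in metres translates directly into content drifting visibly off its real-world target.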

e-Scooters and Dockless Bikes

Dockless scooters and bikes are becoming increasingly common, leaving cities littered with vehicles abandoned in inappropriate locations. Using Scape’s Visual Positioning Service, operators can ensure riders stick to designated riding and parking areas.
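One way an operator might use precise positions for this is a simple geofence check: test whether a vehicle’s reported coordinates fall inside a designated zone. The sketch below uses a standard ray-casting point-in-polygon test; the function name and zone coordinates are hypothetical:

```python
# Illustrative sketch: with centimetre-level positions, a geofence check
# becomes reliable even for zones only a few metres across.

def in_zone(lat, lon, polygon):
    """Ray-casting point-in-polygon test; polygon is a list of (lat, lon)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does the horizontal ray from the point cross this edge?
        crosses = (lon1 > lon) != (lon2 > lon)
        if crosses and lat < (lat2 - lat1) * (lon - lon1) / (lon2 - lon1) + lat1:
            inside = not inside
    return inside

# A small rectangular parking zone (hypothetical coordinates):
zone = [(51.5070, -0.1280), (51.5070, -0.1270),
        (51.5080, -0.1270), (51.5080, -0.1280)]
print(in_zone(51.5075, -0.1275, zone))  # True: inside the zone
print(in_zone(51.5090, -0.1275, zone))  # False: outside
```

With GPS alone, urban multipath error can easily exceed the size of a parking bay, which is why positioning accuracy is the bottleneck for enforcement like this.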

Autonomous Vehicles

Today, self-driving cars use LiDAR (Light Detection and Ranging) to help determine their position in the world. However, LiDAR devices can be large, expensive, heavy and power-hungry.

Scape’s Visual Positioning Service eliminates the need for LiDAR, giving the ability to accurately pinpoint a vehicle’s location using only a single camera. This will be transformative, as it means self-driving capabilities could soon be available on a significantly larger scale.


Announcing our Seed Funding & Next Steps

With these announcements, we are also excited to reveal that last year, we raised a total of $8m in seed funding, with participation from LocalGlobe, Mosaic Ventures, Fly Ventures and Entrepreneur First.

This funding has allowed us to grow to a team of 30 people from our offices in London and focus on solving the fundamental infrastructure needed to enable the next era of spatial computing.

As we work to test our visual positioning service with AR and mobility companies within London, we’re excited to expand the range of applications in more cities later this year.

We will also be releasing a new blog series that goes into the significant technical challenges we’ve overcome while building the Vision Engine, SDK and API. Subscribe to updates by following the Scape Technologies Medium publication or by signing up to our newsletter here.

— Edward Miller
Co-founder & CEO, on behalf of the Scape Technologies team

If you are keen to learn more about the work we are doing at Scape, or what you can do to partner with us, please visit our website and get in touch.


Edward is co-founder & CEO of Scape Technologies, a computer vision startup in London, working to build a digital framework for the physical world.

Follow Edward and the company on Twitter here.


Interested to learn more about Scape Technologies?

We send a newsletter every couple of months, making sense of the AR industry and sharing our progress.

Sign Up to our newsletter.