Satellite image analytics for humans
4 min read · Apr 15, 2022


TL;DR: we are building a platform which radically simplifies AI on satellite and drone images. It lets you train highly accurate object detection models with no code and use them to geo-locate objects on full-size (100,000+ pixels) Cloud Optimized GeoTIFFs (COGs) in real time. You don't even need to upload files: images can be streamed directly from cloud storage or a URL.

Object detection on a 600-megapixel image

Satellite image analytics is broken

There are many domains which benefit from access to global-scale data: agriculture, national intelligence, environmental protection, disaster control, not to mention commercial applications and those yet to be discovered. There is one crucial point, though: nobody wants just images. They want relevant insights: affordable, accurate, and on time. And that's where it gets complicated.

Analyzing satellite images has never been an easy task: they are huge, typically many gigabytes in size and 100,000+ pixels along each dimension. Until recently it was challenging even to view them: you had to download images locally and open them in specialized software such as QGIS. Nowadays, thanks to the Cloud Optimized GeoTIFF (COG) format, you can stream images and browse them smoothly from the comfort of your web browser.
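A rough illustration of why COGs make this possible: the raster is stored as fixed-size internal tiles, so a request for any pixel window maps to a small set of tiles, and hence to a few HTTP range reads, no matter how large the file is. A toy sketch of that mapping (illustrative tile size, not our actual code):

```python
# Sketch of the idea behind COG range reads: given a requested pixel window,
# compute which fixed-size internal tiles must be fetched. A 512-pixel tile
# size is typical for COGs; the numbers here are purely illustrative.
def tiles_for_window(x0, y0, w, h, tile=512):
    """Return (row, col) indices of the internal tiles covering the window."""
    cols = range(x0 // tile, (x0 + w - 1) // tile + 1)
    rows = range(y0 // tile, (y0 + h - 1) // tile + 1)
    return [(r, c) for r in rows for c in cols]

# A 1024x1024 crop starting at pixel (1000, 2000) touches only 9 internal
# tiles, regardless of how big the full image is.
print(len(tiles_for_window(1000, 2000, 1024, 1024)))  # 9
```

A viewer (or a processing pipeline) only ever downloads the tiles it needs, which is what makes browsing a multi-gigabyte image from a browser feel instant.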

Inspecting images manually, however, doesn't scale to many real-world challenges which require analyzing areas of thousands of square kilometers (for example, monitoring crop health or detecting national security threats). It is simply impractical for humans to do so. So government bodies and enterprises pay billions of dollars¹ to third-party analytics providers to develop and deploy AI models that automate the process.

Wait, why outsource something that critical? Well, here's the thing:

AI on satellite images is very different

Modern AI can automate a lot of this tedious work. As demonstrated in the SpaceNet and xView challenges, computer vision algorithms can reach remarkably high accuracy in detecting objects and tracking changes over time.

Yet it's not easy to turn such R&D into software that solves real-world problems. There are challenges related to:

  • distortions induced by geographical projections;
  • sheer image size. It's not feasible to process multi-gigabyte images as a whole: in most cases they are simply too big for tools developed for conventional images, and processing each image requires significant computational resources;
  • MLOps is hard, and satellite images demand even more attention due to their intrinsic geospatial complexity. Managing data, preparing it for training, tracking experiments, and managing the lifecycle of AI models all require significant investment in infrastructure.
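The size problem above is usually tackled by sliding a window over the image and running the detector tile by tile, with enough overlap that objects cut by a tile border are still seen whole in a neighboring tile. A minimal sketch of such tiling (illustrative tile and overlap sizes, not the platform's actual parameters):

```python
# Illustrative sketch: sliding-window tiling with overlap, the standard way
# to run a detector on images too large to fit in GPU memory. Duplicate
# detections near tile borders are merged later (e.g. with non-max suppression).
def sliding_windows(width, height, tile=1024, overlap=128):
    """Yield (x, y, w, h) tiles covering the full image with overlap."""
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# A 30,000 x 20,000 pixel image breaks into a few hundred GPU-sized tiles.
windows = list(sliding_windows(30000, 20000))
print(len(windows))
```

Each tile is small enough for a conventional detector, and because the windows are independent they can be processed in parallel across GPUs.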

All of the above makes it challenging to build even a prototype that works on real-world data. That's why you can find thousands of articles and blog posts describing how to train AI to “detect {PUT_WHATEVER_HERE} with 99% accuracy”, but you won't find a solution that actually does it.

We work hard to close the gap between R&D and real-world applications. We want to enable companies to focus on solving actual problems rather than on engineering and fighting geospatial complexity. Satellite images are set to play a crucial role in tackling problems like climate change and food security, and we want to make leveraging them as trivial as possible.

Let's first take a look at an example. How long might it take to geo-locate vehicles on a 600-megapixel, 30 cm/pixel image covering ~83 km²?

About 10 seconds to detect ~5,700 cars and put them on the map.

Detecting cars on a 600-megapixel image

And that's the end-to-end time, from providing the URL to getting the final result. Normally, just uploading an image from a local machine to the cloud takes longer than that!
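Putting detections “on the map” comes down to converting pixel coordinates into map coordinates using the image's affine geotransform, the same six numbers a GeoTIFF carries in its metadata. A sketch with made-up coordinates for a 30 cm/pixel image:

```python
# How a detection's pixel position maps to map coordinates via the affine
# geotransform stored in a GeoTIFF. The geotransform values below are
# hypothetical, chosen only to illustrate the arithmetic.
def pixel_to_map(col, row, gt):
    """gt = (origin_x, pixel_w, rot_x, origin_y, rot_y, -pixel_h), GDAL order."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# A north-up 30 cm/pixel image whose top-left corner sits at
# (500000, 4649776) in a projected CRS (illustrative values).
gt = (500000.0, 0.3, 0.0, 4649776.0, 0.0, -0.3)
print(pixel_to_map(1000, 2000, gt))  # (500300.0, 4649176.0)
```

Each detected bounding box center goes through this transform (plus a re-projection to the target CRS if needed), which is how thousands of cars end up as points on a map.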

Too good to be true? We encourage you to try it for yourself. You can use one of our sample images if you don't have a COG file on hand.

There is no trick here: it's all about careful system design and the use of cloud-native formats such as COG. The image never left the cloud (AWS in our case) and was never copied to any intermediate storage before processing. The platform streamed the COG directly from the URL and processed it on the fly: in parallel, overlapping I/O and compute, and using several GPUs. Re-projection and re-sampling also happen on the fly behind the scenes!
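The overlap of I/O and compute mentioned above can be sketched as a simple producer/consumer pipeline. Here `fetch_tile` and `detect` are stubs standing in for HTTP range reads and GPU inference; this is a toy illustration, not the platform's actual implementation:

```python
# Toy sketch of overlapping I/O and compute: tiles are fetched in a thread
# pool while earlier tiles are being "processed", so the detector never
# sits idle waiting on the network.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_tile(i):
    """Stand-in for a COG range read (I/O-bound)."""
    time.sleep(0.01)
    return i

def detect(tile):
    """Stand-in for model inference on one tile."""
    return tile * 2

def run_pipeline(n_tiles, workers=8):
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map keeps several fetches in flight while the main
        # thread consumes and processes results in order
        for tile in pool.map(fetch_tile, range(n_tiles)):
            results.append(detect(tile))
    return results

print(run_pipeline(16))
```

With 8 workers, 16 tiles take roughly two rounds of fetch latency instead of 16, which is the same principle that lets a 600-megapixel image finish in seconds.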

We embrace similar standards to deliver the best user and developer experience across all other components of the platform, from data annotation to model training and deployment.

Stay tuned: in the coming posts we will share more details on the features we are working on.

Want early access to all features, or have comments or questions? Please drop us a message.

[1]: Earth Observation: Data & Services Market report by Euroconsult