Segment Anything with Mapflow
In April 2023 Meta AI introduced the Segment Anything Model (SAM), an open model claimed to be a foundation for identifying any object in any image. While large language models are considered the game changers of generative AI, SAM presents a base model for a wide range of visual object recognition applications with real-time prompting.
If you have already tried the demo, you might be impressed by how well it responds to visual prompts on a single image. However, it was unclear how effective it would be on geospatial imagery compared to object-wise models. The specifics of geospatial imagery are that objects are relatively small while image dimensions can be large, which can lead to irrelevant or missing objects in zero-shot prompting. Nevertheless, within days a number of tools appeared to ease the processing of spatial images with SAM, showing the great interest of the geospatial community in the foundational segmentation model (see the references below).
While studying and drawing inspiration from some of these open-source repos, we decided it was worth bringing SAM into our platform, adapting it to large-scale geospatial imagery processing workflows.
Today we introduce the public beta release of the “Segment Anything Model” featured in Mapflow.
“Segment Anything” is available as another model in the list of AI mapping models featured in the Mapflow dashboard. Launching it takes the same steps as any other model (for scripted runs, see the API sketch after this list):
- Select your data source
- Select your geographical area — either polygon, GeoJSON file, or your image extent
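If you prefer scripting to the dashboard, a processing can also be launched through the Mapflow API. The sketch below is a minimal illustration only: the base URL, endpoint path, payload fields, and workflow name are assumptions, so check the API documentation for the actual contract.

```python
import requests

# Hypothetical sketch of launching a SAM processing via the Mapflow API.
# The base URL, endpoint, payload fields, and model name are assumptions
# for illustration; consult the official API docs for the real contract.
MAPFLOW_API = "https://api.mapflow.ai/rest"  # assumed base URL
API_TOKEN = "<your-api-token>"               # issued in your Mapflow account

payload = {
    "name": "SAM beta test",
    "wdName": "Segment Anything",            # assumed workflow/model name
    # Area of interest as a GeoJSON polygon (lon/lat, WGS 84)
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [37.60, 55.74], [37.64, 55.74], [37.64, 55.76],
            [37.60, 55.76], [37.60, 55.74],
        ]],
    },
}

response = requests.post(
    f"{MAPFLOW_API}/processings",
    json=payload,
    headers={"Authorization": f"Basic {API_TOKEN}"},
    timeout=60,
)
response.raise_for_status()
print(response.json())  # processing id and status
```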
Note that the model workflow differs depending on the data source:
- If you run the model on a GeoTIFF file, the original resolution of the input image will be used
- If you run it via TMS (e.g. imagery providers like Mapbox Satellite), you need to select the zoom level (image resolution) in the model options; it determines the resolution of the input (see the sketch below for how zoom maps to ground resolution)
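To make the zoom choice concrete, the ground resolution of a given zoom level can be estimated with standard Web Mercator tile math; this is a generic sketch, not Mapflow-specific code:

```python
import math

def web_mercator_gsd(zoom: int, lat_deg: float = 0.0) -> float:
    """Approximate ground resolution (m/pixel) of 256px Web Mercator tiles."""
    # Earth's equatorial circumference (~40,075,017 m) divided by the pixels
    # around the globe at this zoom, scaled by cos(latitude).
    return 156543.03392 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

for zoom in (14, 16, 18):
    print(f"zoom {zoom}: ~{web_mercator_gsd(zoom):.2f} m/pixel at the equator")
# zoom 14: ~9.55 m/pixel; zoom 16: ~2.39 m/pixel; zoom 18: ~0.60 m/pixel
```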
Depending on the input resolution, SAM interprets the image differently and generates different kinds of objects. Empirically, the results can be grouped by zoom level as follows:
| Zoom level | Semantic objects |
| --- | --- |
| 14 | Land use, forests, parks, fields, bodies of water |
| 16 | Small fields, large buildings, lawns, plots |
| 18 | Farms, buildings, groups of trees, etc. |
| Aero | Houses, single trees, vehicles, roof structures, etc. |
❗️Note that processing time grows exponentially with zoom level, as each +1 zoom roughly quadruples the number of tiles to process (see the sketch below). While the model is in experimental mode we keep its cost in credits at the minimum, so it's available to everyone within the free limit.
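A quick back-of-the-envelope illustration of that growth, using standard tile math rather than Mapflow-specific code:

```python
# Covering a fixed area with 256px Web Mercator tiles takes roughly 4x more
# tiles with every +1 zoom, so the segmentation workload grows the same way.
base_zoom = 14
for zoom in (14, 16, 18):
    factor = 4 ** (zoom - base_zoom)
    print(f"zoom {zoom}: ~{factor}x the tiles (and work) of zoom {base_zoom}")
# zoom 14: ~1x, zoom 16: ~16x, zoom 18: ~256x
```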
❗️SAM is not included in the Mapflow <> QGIS plugin's list of default models, as zoom options are not supported in the current plugin design. If you work in QGIS and want to try SAM there, send us a request and we will connect the corresponding workflow scenarios with all zoom options specified.
Don’t hesitate to contact us if you need help, and of course, we greatly appreciate your feedback ⭐️⭐️⭐️:
- What use cases not covered by the Mapflow object-wise models are you looking for, and is the SAM model relevant enough to cover them?
- Some UX capabilities are missing in this beta release; we are exploring them and working on further implementation. Don't hesitate to share what kind of prompting tools you would imagine for interacting with SAM for targeted object-wise segmentation (clicking on the image, text input, visual samples, automatically selecting the zoom for the specified object type, something else?)
- Share your samples if you manage to obtain some nice (or not-so-nice) results with SAM by rating your results. We review all ratings and get back with promo codes and support to thank you for it
Some SAM references:
- Segment Anything Demo by Meta AI
- The most popular open repo with examples and use cases of applying SAM to geospatial imagery (sam-geospatial)
- Mapflow AI — try for free