Computer Vision for Busy Developers

Scale Space

Vinny DaSilva
9 min read · Aug 13, 2019

This article is part of a series introducing developers to Computer Vision. Check out other articles in this series.

Photo by Yin Yin Low on Unsplash

Understanding scale space is another one of those major building blocks in Computer Vision. Scale space enables Scale Invariant feature detectors, which allow us to find the same feature across images of the same objects at different scales.

Scale-dependent feature detectors (such as the Harris Corner Detector) produce a list of features composed of x and y values corresponding to their locations in the image. If we want to work with scale invariant features, we need feature points which are composed of not only x and y values, but also a third sigma value (often represented by the Greek letter σ). While the x and y values give us a clue as to where the feature is within the coordinate space of the image, the sigma value gives us a clue as to where the feature is within the scale space of the image. In other words, features defined by x, y and σ allow us to properly find the same feature in images taken at different scales.
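To make this concrete, here is a minimal sketch using OpenCV's SIFT, one well-known scale invariant detector. This is an illustration rather than the method developed in this series: the image path is a placeholder, and the sketch assumes the opencv-python package is installed. Each detected keypoint carries its x and y position via kp.pt, plus scale information via kp.size, which is derived from the sigma at which the feature was found.

```python
import cv2

# Load an image in grayscale ("example.jpg" is a placeholder path)
image = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT keypoints carry scale in addition to position:
# kp.pt holds (x, y), and kp.size is the diameter of the
# meaningful neighborhood, derived from the detection sigma
sift = cv2.SIFT_create()
keypoints = sift.detect(image, None)

for kp in keypoints[:5]:
    x, y = kp.pt
    print(f"x={x:.1f}, y={y:.1f}, size={kp.size:.2f}, octave={kp.octave}")
```

Notice that each keypoint prints three pieces of information rather than two: position within the image's coordinate space and a value tied to its position within scale space, exactly the (x, y, σ) triplet described above.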

Computer Vision algorithms must consider scale in order to match the same features across different images where objects appear at different sizes. (Image Source: Pixabay)

Subsampling and Image Pyramids

