Computer Vision for Busy Developers

Describing Features

Vinny DaSilva
9 min read · Sep 24, 2019

This article is part of a series introducing developers to Computer Vision. Check out other articles in this series.

What’s a Descriptor, Anyway?

The reality is that extracting edges, corners, or blobs from an image isn't, by itself, as helpful as we would like. We just have points within an image. How can we start to understand the image more deeply? How do we go from having a bunch of points in an image to understanding that there's an airplane or a person in it? Feature Descriptors are algorithms that look at additional information around a Feature Point to better understand what it represents. This additional information is a calculated summary of the pixels immediately surrounding each feature point. With it, we can more reliably identify that specific feature point across multiple frames of video, in stereo image pairs, in panoramic image sequences, or in any of the many other applications of Computer Vision.
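To make this concrete, here is a minimal sketch of the detect-then-describe workflow. The article doesn't prescribe a specific algorithm, so this example happens to use OpenCV's ORB detector and descriptor, and the image paths are placeholders: detect feature points, compute a descriptor from the patch around each one, then match descriptors across two frames.

```python
import cv2

# Load two frames in grayscale (placeholder file names).
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)

# Step 1: find feature points (keypoints) in each image.
kp1 = orb.detect(img1, None)
kp2 = orb.detect(img2, None)

# Step 2: compute a descriptor for each keypoint by summarizing
# the pixels in the patch surrounding it.
kp1, des1 = orb.compute(img1, kp1)
kp2, des2 = orb.compute(img2, kp2)

# Step 3: match descriptors between the two images; similar
# descriptors suggest the same physical point seen in both frames.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance: {matches[0].distance}")
```

The key idea is the separation of concerns: the detector only tells us *where* interesting points are, while the descriptor summarizes *what* the neighborhood around each point looks like so it can be recognized again in another image.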

Since this is an area I personally found very confusing when I started learning about CV, let's recap the difference between a Feature Point and a Feature Descriptor. A Feature Point is a small area in an image (sometimes as small as a single pixel) that has some measurable property of the image or object. A Feature Descriptor…
