What are some Computer Vision Applications?

Abhijeet Pokhriyal
Published in Analytics Vidhya
7 min read · Oct 26, 2020


Focus on Remote Sensing and Satellite images.

Computer vision is an interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to understand and automate tasks that the human visual system can do.

So much for the jargon; what really is computer vision?

Well, you might have come across advertisements like this one:

Ad Source

It claims a 24% increase in skin moisture, a 13% reduction in microwrinkles (whatever those might be), and a 15% increase in skin elasticity.

Ever wondered how they come up with these numbers? You would not be surprised now if I said computer vision. Well, not quite: it's machine vision. Machine vision comprises systems built for very specific tasks. Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for applications such as automatic inspection, process control, and robot guidance, usually in industry.

Suppose you wanted to compare how your hair dye performs against your competition's: put hair dyed with each of them under a microscope.

Figure 9 depicts two hairs at 40X magnification. The lighter hair is blond, while the darker hair is brown. The brown strand is slightly thicker than the blond hair: the blond hair is about 54 microns across, and the brown hair is about 62 microns across. SOURCE

A common output from automatic inspection systems is a pass/fail decision. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm.

Machine vision has been in use for a while now, and it leads quite nicely to computer vision applications like this one: deep learning-based live video hair tracking and hair color simulation.

This article's main focus, though, is remote sensing and satellite images.

Application of Computer Vision in Remote Sensing

Consider an unlikely problem: finding the poor. Even in a world riddled with poverty, nearly every government, nonprofit and aid agency struggles with this issue.

A proxy for finding the poor can be basic economic variables like gross domestic product.

But the problem is that these numbers can be unreliable in countries where the statistical infrastructure is weak, informal businesses do not want to be tracked, and the figures may be manipulated. [DEATON & HESTON]

It is very difficult to randomly sample people in the rural areas of Bihar in India or in a slum like Kibera in Nairobi, Kenya, where even just mapping the streets is its own project.[NYT]

In most countries GDP numbers are not available on any consistent basis at the subnational level. Much of the interesting variation in economic growth takes place within, rather than between, countries and that is what can help us with our original problem of poverty.[HENDERSON]

Computer vision can help by using satellite images of nighttime luminosity. Luminosity tells us not just about electrification but about economic activity more broadly, and statistical work shows it reliably correlates with economic performance. [HENDERSON]

The value of using satellite data instead of extensive surveying is that we can use publicly available imagery to infer both spatial and temporal differences in local-level economic well-being, especially for countries where reliable survey data do not yet exist and where survey-based interpolation methods might struggle to generate accurate estimates.

Pain points that this approach can address are:

  1. Expense: land surveys are sample surveys and involve extensive investments of time and money. Satellites, on the other hand, can provide more up-to-date information at a much faster pace.

  2. Limited repeated observation: these surveys revisit individual locations rarely, making it difficult to measure local changes in well-being over time, and public release of any disaggregated consumption data is very rare. [Yeh, C., Perez]

Satellite image datasets are made available by governmental organizations, and one such dataset is the nighttime lights dataset.

DMSP provides cloud-free composites made using all the available archived DMSP-OLS smooth resolution data for calendar years 1994–2013.

To do an exploratory analysis we pull two files, one for 1994 and one for 2013. This will allow for comparative analysis.

  • We have a downloader script available that can download and untar the files so that we can then proceed with reading in the images.
  • The files are large, with an average size of around 300 MB.
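A minimal downloader/untar sketch along these lines, using only the standard library. The `BASE_URL` and the `F{year}.tar` naming are placeholders, not the real NOAA archive paths; substitute the actual URLs of the composites you want.

```python
import tarfile
import urllib.request
from pathlib import Path

# Hypothetical base URL -- replace with the real archive location.
BASE_URL = "https://example.com/dmsp"

def download(year: int, dest: Path) -> Path:
    """Fetch one year's tarball unless it is already on disk (they are ~300 MB)."""
    dest.mkdir(parents=True, exist_ok=True)
    tar_path = dest / f"F{year}.tar"
    if not tar_path.exists():
        urllib.request.urlretrieve(f"{BASE_URL}/F{year}.tar", tar_path)
    return tar_path

def untar(tar_path: Path, dest: Path) -> list:
    """Extract the archive and return the member names for a quick sanity check."""
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest)
        return tar.getnames()
```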

Let's explore the dataset using Python. You can follow the code here.

After loading the image for the year 2013, we use matplotlib to display it.
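The loading and display step can be sketched like this. I am assuming the composites are read as GeoTIFFs via Pillow; the exact filename depends on what the archive extracts, so treat the path in the usage comment as illustrative.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend; drop this line in a notebook
import matplotlib.pyplot as plt
from PIL import Image

# The composites exceed Pillow's default decompression-bomb pixel cap.
Image.MAX_IMAGE_PIXELS = None

def load_composite(path: str) -> np.ndarray:
    """Read a nighttime-lights GeoTIFF into a 2-D numpy array of intensities."""
    return np.array(Image.open(path))

def show(img: np.ndarray, title: str) -> None:
    """Render the raw composite so we can eyeball the haze and bright spots."""
    plt.figure(figsize=(12, 6))
    plt.imshow(img, cmap="viridis")
    plt.title(title)
    plt.show()
```

Usage would be something like `show(load_composite("F2013_stable_lights.tif"), "DMSP-OLS 2013")`, with the filename taken from whatever the tarball actually contains.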

We see a lot of ‘haziness’ in the image. We can get a sense of the spread of the image intensity values using the quantile method.

We see that most of the values in the image are less than 5. Since the image is mostly black/dark, this is reasonable, and we can therefore do a quick thresholding to reveal the night light intensities we are interested in. We pick a number larger than 5: 10 in the case below.
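The quantile check and the thresholding step together might look like this sketch (the quantile levels are just a reasonable choice, not from the original):

```python
import numpy as np

def intensity_spread(img: np.ndarray, qs=(0.25, 0.5, 0.75, 0.9)) -> dict:
    """Quantiles of the raw pixel values -- most of a night image sits near zero."""
    return {q: float(np.quantile(img, q)) for q in qs}

def threshold(img: np.ndarray, cutoff: int = 10) -> np.ndarray:
    """Zero out the dim haze, keeping only pixels brighter than the cutoff."""
    return np.where(img > cutoff, img, 0)
```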

Now the night light intensities are better visible and the haziness goes away.


Before we begin aggregating the nightlights, we should take a look at the image size to get an estimate on the performance of any aggregation.

We see that the image is very large, so we should rely on numpy's vectorized array operations (broadcasting, reshaping) when doing any aggregation, since they run in optimized native code instead of slow Python loops.

To perform an aggregation, we divide the image into 100x100 blocks, and for each patch we first threshold (> 10) and then calculate the sum of the pixel intensities.
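One way to do the block aggregation without any Python loops is the reshape-and-sum trick below; this is my reconstruction of the step, not the article's original code:

```python
import numpy as np

def block_aggregate(img: np.ndarray, block: int = 100, cutoff: int = 10) -> np.ndarray:
    """Sum thresholded intensities over non-overlapping block x block patches."""
    h, w = img.shape
    # Crop to a multiple of the block size so the reshape below is exact.
    img = img[: h - h % block, : w - w % block]
    bright = np.where(img > cutoff, img, 0)
    # Fold each axis into (n_blocks, block) and sum over the two block axes --
    # pure vectorized numpy, no per-patch loop.
    return (bright
            .reshape(img.shape[0] // block, block, img.shape[1] // block, block)
            .sum(axis=(1, 3)))
```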

In the image above we can see that brightly lit areas turn yellow and darker regions stay black, while green/blue represents intermediate intensities.

One thing to notice for our analysis is that certain land regions, like the Sahara, the Amazon rainforest, and the Australian outback, are almost as dark as the oceans, indicating little to no human activity.

India from Space at night time!

Let's go a step further and extract a particular country: India. Since this is only an exploratory analysis, we can use crude numpy indexing to zoom in. Looking at the image axes above, the region translates roughly to 4000 to 9000 in the vertical direction and 28000 to 35000 in the horizontal direction.

Let's extract the Indian subcontinent in both the 2013 and the 1994 images.
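The crop is plain numpy slicing. The row/column bounds below are the rough values read off the plot axes; the `img_2013`/`img_1994` names in the usage comment assume the arrays loaded earlier.

```python
import numpy as np

# Approximate bounds for the Indian subcontinent, read off the plot axes above.
INDIA_ROWS = slice(4000, 9000)
INDIA_COLS = slice(28000, 35000)

def extract_region(img: np.ndarray, rows: slice, cols: slice) -> np.ndarray:
    """Crop a rectangular region; numpy slicing returns a view, so this is cheap."""
    return img[rows, cols]

# india_2013 = extract_region(img_2013, INDIA_ROWS, INDIA_COLS)
# india_1994 = extract_region(img_1994, INDIA_ROWS, INDIA_COLS)
```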

The right image above is from 1994 and the left one from 2013. We can clearly see that in 20 years the region has gotten brighter, especially in areas that were further away from the bright spots of 1994.

To get a better sense of the change let’s look at the spread of the intensities.

We see that for the year 2013 compared to 1994 there are more pixels with higher intensity values.

To further understand the change, let's zoom in a bit further to the Mumbai metropolitan area. Using the same logic as above, we can index into the numpy array and extract the 2013 and 1994 images.

Mumbai from Space at night time!

Visually we can see the difference, but to quantify it, let's look at the quantiles of the pixel intensities.

We can see that the 75th-percentile value has almost doubled in 20 years! And this is without any sort of thresholding. We can try different threshold values to extract clusters from the region. But before we do that, we can also apply an image filter to sharpen the image a bit, which should help magnify the intensity values.
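The article does not say which sharpening filter it used; a common choice is a 3x3 sharpening kernel, sketched here with pure numpy (shifted copies instead of a convolution library):

```python
import numpy as np

def sharpen(img: np.ndarray) -> np.ndarray:
    """3x3 sharpening (centre x5 minus the four adjacent neighbours).

    The kernel sums to 1, so flat regions are unchanged while local
    intensity contrast -- exactly what we want to magnify -- is exaggerated.
    """
    f = img.astype(float)
    p = np.pad(f, 1, mode="edge")   # replicate edges so the output shape is kept
    return 5 * f - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]
```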

As you can see, satellite images have huge potential for measuring changes in the economic activity of a nation and for tracking the progress it has made over the years, and not just of nations but also of growing metropolitan areas.

I will probably follow up this article with more geo-processing in Python and ultimately build prediction models using these images.



  1. [Yeh, C., Perez] Yeh, C., Perez, A., Driscoll, A. et al. Using publicly available satellite imagery and deep learning to understand economic well-being in Africa. Nat Commun 11, 2583 (2020). https://doi.org/10.1038/s41467-020-16185-w
  2. [NYT](https://www.nytimes.com/2016/04/03/upshot/satellite-images-can-pinpoint-poverty-where-surveys-cant.html)
  3. [HENDERSON](https://www.aeaweb.org/articles?id=10.1257/aer.102.2.994)
  4. [DEATON & HESTON](https://pubs-aeaweb-org.librarylink.uncc.edu/doi/pdfplus/10.1257%2Fmac.2.4.1)
  5. [WORLD_BANK](https://blogs.worldbank.org/sustainablecities/tracking-light-space-innovative-ways-measure-economic-development)
  6. [NASA](https://earthdata.nasa.gov/learn/sensing-our-planet/prosperity-shining)
  7. [IMF](https://www.imf.org/external/pubs/ft/fandd/2019/09/satellite-images-at-night-and-economic-growth-yao.htm)



Abhijeet Pokhriyal

School of Data Science @ University of North Carolina — Charlotte