My Name is Red: Understanding what colours tell us

Absolute fundamentals of remote sensing and understanding satellite imagery

Zeroing In
8 min read · Sep 5, 2023


‘No, no, it’s not ISRO but a sister organization under the Department of Space,’ I keep having to correct every person who asks me about my workplace. The next question is inevitable. ‘But you are a rocket scientist?’ Disappointing them yet again, I explain how my work deals with the data from sensors hosted on satellites launched by rockets. By this time, most folks are convinced that what I do is boring and discontinue the conversation (phew?).

But it’s natural, isn’t it? When you think of space technology, the first thing that comes to mind is rockets. The second would be viewing the stars and galaxies and understanding the cosmos. It’s understandable for a layperson to dismiss anything that isn’t this as technical jargon. But there’s so much more to it than is discussed. Through this series, I’d like to introduce the readers to one such concept that may sound technical to most — Remote Sensing. This is an attempt to demystify the jargon around it and bring up some remarkable things we can do with it — which might not be rocket science but are pretty cool.

It all begins with light. All light, we know, can be represented in the form of an electromagnetic wave. Light of different wavelengths (or frequencies/energies) occupies different positions on the electromagnetic spectrum. For example, we know that red light has a wavelength of around 700 nm and blue light around 400 nm and all colours humans perceive lie in this range. Beyond 700 nm is the infrared, followed by microwaves and radio waves. Below 400 nm is the ultraviolet domain, followed by X-rays and gamma rays (Yes, the stuff that made the Hulk).

Fig 1: Representation of the electromagnetic spectrum. Image source: Ade Stewart at https://wiki.travellerrpg.com/Electromagnetic_Spectrum
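If you’d like to play with these numbers yourself, here’s a tiny Python sketch (my own illustration, not part of any remote sensing toolkit) that converts a wavelength into its frequency and photon energy using E = hc/λ:

```python
# Back-of-the-envelope sketch relating wavelength, frequency and photon
# energy (E = h * c / wavelength) for red and blue light.
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

for name, wavelength_nm in [("red", 700), ("blue", 400)]:
    wavelength_m = wavelength_nm * 1e-9
    frequency = c / wavelength_m          # Hz
    energy_j = h * frequency              # joules per photon
    energy_ev = energy_j / 1.602e-19      # convert to electron-volts
    print(f"{name}: {wavelength_nm} nm -> {frequency:.2e} Hz, {energy_ev:.2f} eV")
```

Shorter wavelengths carry more energy per photon, which is why the spectrum runs from gentle radio waves at one end to gamma rays at the other.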

First, consider the visible part of the spectrum. We see a tree’s leaves as green because light at the wavelengths we perceive as green is reflected, while other wavelengths are absorbed. This is considering visible light alone; if you had the superpower of observing trees in other portions of the electromagnetic spectrum, you’d find that the tree is ‘near-infrared’ in colour.

Keeping this in mind, what do you think will happen if we place a plant in a dark room with a single source of (a) red light and (b) infrared light? (Take some time to think about it; a clue is given in the previous paragraph.) The plant will not survive in the room with infrared light because photosynthesis requires energy, and that energy is derived predominantly from the red and blue portions of the spectrum. As mentioned above, the tree appears ‘near-infrared’ in colour, so we can conclude that it does not absorb those wavelengths.

Similarly, every material absorbs energy at specific wavelengths and reflects the remaining portions of the spectrum. We use this principle in our day-to-day lives: we wear white and other light shades of clothing on hot sunny days because these shades reflect most of the incident energy and are thus more comfortable.

Moving back to the example of vegetation, we can graphically represent how its reflected energy varies with wavelength. For parts of the electromagnetic spectrum where energy is absorbed, we will observe dips in the curve, and where most of the energy is reflected, there will be peaks. Take a look at the reflectance curve for vegetation below:

Fig 2: Graph depicting how vegetation reflects light of different wavelengths. Data for this plot is derived from the USGS spectral library (vegetation species: pine).

The peak at ~560 nm is the green reflectance peak of the visible range. You can also see the increase in reflectance in the near-infrared (NIR) domain (>800 nm). Thus, if our eyes were capable of detecting NIR, trees would be NIR in colour.
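If you’d like to recreate a plot like the one in Fig 2, here’s a rough Python sketch. The file name pine_reflectance.csv is just a placeholder for a two-column export (wavelength in nm, reflectance) from a spectral library such as the USGS one:

```python
# Minimal sketch: plotting a reflectance spectrum like the pine curve in Fig 2.
# 'pine_reflectance.csv' is a hypothetical two-column file: wavelength (nm), reflectance (0-1).
import numpy as np
import matplotlib.pyplot as plt

wavelength, reflectance = np.loadtxt("pine_reflectance.csv",
                                     delimiter=",", unpack=True)

plt.plot(wavelength, reflectance)
plt.axvline(560, linestyle="--", label="green peak (~560 nm)")
plt.axvline(800, linestyle=":", label="NIR rise (>800 nm)")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Reflectance")
plt.legend()
plt.show()
```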

Now that we see why plants appear green, let’s try to understand how human eyes see colour. Our eyes have two types of light-sensitive cells: rods and cones. Cones are responsible for colour vision, whereas rods are useful in low-light conditions and essentially provide grayscale vision. Cones come in three types: those sensitive to blue wavelengths, those sensitive to green, and those sensitive to red. The response of the cones to different wavelengths is shown in Figure 3. All colours in the visible part of the electromagnetic spectrum can be regarded as combinations of red, green and blue.

In addition to these spectral colours, there are colours humans can see that arise from combinations of red, green and blue not produced by any single wavelength. For example, magenta is obtained by combining blue and red: the responses of the red and blue cones barely overlap, yet a simultaneous high reading in both is perceived as magenta. Similarly, white requires a simultaneous high reading in the red, green and blue cones. There are also colours such as pink, where spectral colours are mixed with shades of grey, giving yet another set of non-spectral colours.

To recapitulate, objects absorb light at specific wavelengths and reflect light at others. Our eyes detect the reflected light. Based on the wavelengths of the reflected light and which types of cones it triggers, we perceive colour.

Fig 3: Response of the red, green and blue cones of the human eye as a function of the wavelength of light. Image source: Coliban et al 2020, Remote Sensing.
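A toy way to think about this is to treat the three cone responses as an (R, G, B) triplet. The sketch below (my own illustration, with made-up values on a 0 to 255 scale) shows how the combinations described above map to familiar colours:

```python
# Illustrative sketch: cone responses as (R, G, B) triplets on a 0-255 scale.
colours = {
    "pure red": (255, 0, 0),
    "pure green": (0, 255, 0),
    "magenta (red + blue, no green)": (255, 0, 255),
    "white (all three high)": (255, 255, 255),
    "pink (magenta mixed with grey)": (255, 192, 203),
}

for name, (r, g, b) in colours.items():
    print(f"{name}: R={r}, G={g}, B={b}")
```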

Aside/Fun Fact: Colour blindness occurs when there is a functional deficiency in certain types of cones. This can be difficult to imagine accurately. Take a look at this simulator to visualise the world differently: https://www.color-blindness.com/coblis-color-blindness-simulator/

We saw that the cones are the colour detectors within our eyes. Digital cameras work on the same principle. Light reflected by the target enters the aperture, is focused by the lens and is detected by the detector cells. Like the cones in our eyes, the detector cells are sensitive to some portion of the electromagnetic spectrum. The detected light (energy) from the target point is converted to electrical impulses, which are then stored as numbers. Three detectors view a single point: one sensitive to blue wavelengths, one to green and one to red. Thus the target point is recorded as three values: blue, green and red. Remote sensing cameras aren’t too different. They may have three detector channels like our eyes (blue, green and red), or they may have a single channel or many more. Most importantly, the detectors need not be restricted to visible light and can detect light that our eyes cannot (for example, infrared or X-rays).
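You can see those three numbers for yourself with a few lines of Python. This is just an illustrative sketch; photo.jpg is a placeholder for any ordinary photograph:

```python
# Sketch of the "three values per point" idea: reading one pixel of an
# ordinary photo and printing its red, green and blue detector values.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")   # placeholder filename
r, g, b = img.getpixel((100, 200))             # pixel at column 100, row 200
print(f"Red: {r}, Green: {g}, Blue: {b}")
```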

With me so far? Now that we have discussed the basic concept that drives light detection, let’s see how imaging systems put it to use. Look at Figure 4. We start with the source of light, which is the Sun. The light from the Sun is incident upon the target, the Earth’s surface. The surface absorbs some energy and reflects the rest. The reflected light passes through the atmosphere and reaches the satellite sensor. Within the sensor, the light encounters a diffraction grating, which functions like a prism and splits the light into its constituent wavelengths. Each detector cell receives one wavelength of light, so if a sensor has 12 bands, there are 12 detector cells for a single point target: one line of detectors, one per wavelength. To image a line on the ground, we need an array of detectors, with one line of detectors for each wavelength. To image an area, we can either use a two-dimensional array of detectors for each wavelength (frame-based imaging) or rely on the motion of the satellite to sweep out the subsequent lines (push-broom, line-based imaging). Thus, you see how one image is generated for each wavelength band of the sensor.

Fig 4: A visualisation of the functioning of any imaging system. The source of light is the Sun. The target surface absorbs some light and reflects the rest. The sensor receives the reflected light and splits it into its constituent wavelengths. These are then detected by the detector cells.
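To make the ‘one image per band’ idea concrete, here’s a conceptual sketch with made-up numbers, simulating how a push-broom sensor stacks successive lines into a 12-band image cube:

```python
# Conceptual push-broom sketch: each sweep delivers one line of pixels in
# every band; stacking successive lines builds one image per band.
import numpy as np

n_bands, n_pixels_per_line, n_lines = 12, 500, 400

lines = []
for _ in range(n_lines):
    # One line of detector readings: shape (bands, pixels across the swath).
    # Random integers stand in for real detector counts.
    lines.append(np.random.randint(0, 1024, size=(n_bands, n_pixels_per_line)))

cube = np.stack(lines, axis=1)   # shape: (bands, lines, pixels)
print(cube.shape)                # (12, 400, 500) -> 12 single-band images
nir_band = cube[7]               # e.g. pick one of the 12 bands as an image
```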

The detector may be capable of capturing images of the surface at various wavelengths, but how do we make sense of this? The images are a collection of pixel values, with one value per pixel for each wavelength band. We can visualise this information band by band or via band combinations. Consider first the band-by-band visualisation. Figure 5 shows how ranges of values are assigned unique colours. This form of representation is called pseudocolour representation. If the colours are all shades of grey, from black for the lowest values to white for the highest, it is called greyscale representation.

Fig 5: Pixel values are represented visually by assigning colours to a range of values. A is a pseudo-colour representation, and B is a special case of A where the colours used for representation are shades of grey.
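Here’s a small sketch of what such a band-by-band visualisation might look like in code, using a random array as a stand-in for a single band:

```python
# Band-by-band visualisation: the same single-band array shown with a colour
# map (pseudocolour) and with shades of grey (greyscale).
import numpy as np
import matplotlib.pyplot as plt

band = np.random.randint(0, 1024, size=(100, 100))   # stand-in for one band

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(band, cmap="viridis")   # pseudocolour: each value range gets a colour
ax1.set_title("Pseudocolour")
ax2.imshow(band, cmap="gray")      # greyscale: black (low) to white (high)
ax2.set_title("Greyscale")
plt.show()
```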

Band combinations provide a means to visualise multiple bands at once by mimicking the behaviour of the cones in our eyes. The red, green and blue visualisation channels are combined to recreate colours we can see, so in a sense this becomes the task of making the invisible visible. Each satellite wavelength band is assigned to a visualisation channel. For example, assigning the satellite band at 460 nm to the blue channel, the band at 550 nm to the green channel and the band at 670 nm to the red channel produces an image close to what our eyes see. This is called a true colour composite. Any other combination is called a false colour composite. Bands are assigned to channels based on the spectral response of the target. For example, we saw that vegetation shows a sharp rise in reflectance in the NIR. Assigning the band at 900 nm to the red visualisation channel, 670 nm to the green channel and 550 nm to the blue channel therefore gives a large value in the red visualisation channel and small values in the green and blue channels, so vegetation appears deep red in this colour composite. Here’s an image of the Brahmaputra and a nearby reserved forest in true and false colours.

Fig 6: A visualisation of Brahmaputra river and surrounding areas in true colour and false colour composites.
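And here’s a rough sketch of how such composites are assembled in code. The band_* arrays are stand-ins for real single-band images, already scaled to the 0-1 range:

```python
# Building true and false colour composites from single-band images.
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data; real bands would come from a satellite product.
band_460 = np.random.rand(100, 100)
band_550 = np.random.rand(100, 100)
band_670 = np.random.rand(100, 100)
band_900 = np.random.rand(100, 100)

true_colour = np.dstack([band_670, band_550, band_460])    # R, G, B channels
false_colour = np.dstack([band_900, band_670, band_550])   # NIR shown as red

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(true_colour)
ax1.set_title("True colour composite")
ax2.imshow(false_colour)
ax2.set_title("False colour (NIR, R, G)")
plt.show()
```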

We have come to the end of the first part of the series. To summarise, we understood how humans see objects and colours, how reflectance drives most imaging, whether through the human eye or the detectors onboard satellites, and how invisible light can be represented visually. In the next post, we’ll talk about other forms of geospatial data, how they can be combined to generate socially relevant products, and why that is important to you as a reader. So, stay tuned as we add more colours to your spectrum!

TLDR: We can see what we cannot see.

Stay tuned for the future parts in the series — And God said, ‘Let there be light’ by Ritu Anilkumar.

This article was written by Ritu Anilkumar and edited by Atotmyr and Vagisha Bhatia.
