Color Reproductions of Hyperspectral Images

Chandler Abraham · Published in Color and Imaging · Nov 28, 2016

This blog post is about making accurate color renderings of hyperspectral images, an exercise that serves as a relatively simple and practical introduction to accurate color reproduction.

Sections:

1. Color Reproduction
2. Hyperspectral Images
3. The BearFruitGray dataset
4. Colorimetric Equations
5. Determining the Illuminant
6. sRGB
7. Verifying the Results
8. Resources and Acknowledgements

Color Reproduction

The goal of color reproduction is to create a medium (like a print or digital display) that, when viewed, gives rise to a color sensation similar to what a person would have experienced if they had viewed the scene with their own eyes. This is sometimes referred to as “faithful reproduction”.

A core challenge of color reproduction is that the environment you view color images in is almost never the same as the environment the image was captured in. Human color vision is highly impacted by viewing conditions, so without compensating for the difference in viewing conditions between capture and reproduction you have no hope of accurate colors. For example, a printed photo of a sunny day viewed indoors is orders of magnitude dimmer than the sunny day itself, its maximum dynamic range is much lower, and the indoor light is likely a different color than the daylight was.

These physical differences between real world scenes and color media make determining the criteria for the most faithful reproduction difficult. The most faithful reproduction in the eye of the user may even differ from their objective color sensations, due to differences in how colors are recalled in memories [1].

Hunt and Fairchild have proposed several different types of color reproduction goals:

colorimetric color reproduction: A reproduction that matches the original scene only when it is viewed in identical lighting conditions to the original scene. This is achieved through CIE Colorimetry, a topic I have blogged about in detail. Colorimetry is the math of color matching in terms of the human cone cells, but it doesn’t model other higher level aspects of color perception.

color appearance reproduction: This type of reproduction builds on colorimetric reproduction to include compensation for viewing conditions.

To put it very briefly, the human brain is always adapting to the color of light so that it can make white objects appear white no matter if the scene light is yellow or blue; this is called chromatic adaptation. If you know the color of light your reproduction will be viewed in, you can predict the viewer’s chromatic adaptation state and adjust your colors so that they appear correctly to a viewer in that state.

In this diagram, the colors have been chromatically adapted for the lighting environment they are displayed in. The color sensation in the eye should appear the same in both cases. If these photos were viewed under identical conditions the colors in the photos would no longer match.

This makes it clear that color appearance matching necessarily requires some idea of what conditions the reproduction will be viewed in. State of the art appearance matching takes into account many more parameters that affect color vision, but chromatic adaptation is the most important.

color preference reproduction: Makers of consumer imaging devices have discovered that the colors people find most pleasing are not necessarily the most accurate reproduction of the appearance of the scene. Once you have implemented appearance matching, you can build on that to implement a color preference scheme. Your smartphone camera almost certainly targets color preference.

Hyperspectral Images

A hyperspectral image is an image that contains much more spectral information about a scene than is captured by standard digital cameras.

Generally, a color image is represented as a two dimensional array of pixels, where each pixel has three values. The height and width are the spatial dimensions and the three pixel values are the color dimensions. Generally these represent primaries in an RGB colorspace.

A color image in this form contains enough information to reproduce the colors of the original scene but not enough to know the detailed spectral power distribution of light at each pixel in the scene. Standard color cameras, like the human eye, compress the light spectrum entering them into a three value encoding that represents the color of the light; information about the spectrum itself is lost.

This compression is lossy and cannot be reversed. A variety of distinct spectra can compress to the same color encoding.

Instead of three color values, the pixels of hyperspectral images contain measurements of power across a spectrum at some increment. If the spectrum measured overlaps with the spectrum of human vision, then a spectral image can be rendered into an accurate color image.

A seven band multispectral image and a three band color image:

It’s also important to be able to think about this imagery at a hyperpixel level (I’m making that a word) because our color rendering algorithms operate on one hyperpixel at a time.

A hyperspectral pixel is usually an array of radiance measurements for a specific spatial location in the image. Here’s a single 31 band hyperspectral pixel plotted, revealing an approximate spectral power distribution.

Radiance is an absolute measurement of the power in light, in watts per square meter per steradian (W·m⁻²·sr⁻¹).

The BearFruitGray dataset

The hyperspectral dataset I’ve chosen to work with was created in 1997 by a team of color scientists at the University of Pennsylvania. It contains four hyperspectral images of the same scene, each one captured with a different light source. Crucially, some of the subjects in the images are color calibration objects, which will be very useful for verifying our color image.

The image notes on the BearFruitGray setup are worth reading if you’re interested in this kind of stuff.

The hyperspectral images have a measure of power at 10nm increments across the visual spectrum, 31 measurements in all. They can be thought of as 2000 pixel × 2000 pixel × 31 band cubes. Two dimensions are spatial and one is spectral.

Like in the first graphic, we can visualize one of the BearFruitGray images by rendering each band as its own grayscale image. This highlights how objects in the scene have different reflectance properties: the fruits are dark at the short end of the spectrum (400nm) and very bright at the long end (700nm).

Colorimetric Equations

The first question we need to ask ourselves is how the cones of the human eye would respond to the spectrum of each hyperpixel. Luckily there’s a well established mathematical model for this: the XYZ color matching functions. These are not the same as the sensitivity functions of the human LMS cones, but they are a linear transformation of them. We could use the LMS cone sensitivities directly, but most colorimetric equations are stated in terms of XYZ.

The important takeaway of this section is that for each hyperpixel we need to calculate an X, Y, and Z value that together represent not a color, but how the human cones would respond to the light, which is the largest factor in determining color.

The spectrum to XYZ equation that we must apply to each of our hyperpixels is:
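Following the standard CIE formulation (with K computed against the y(λ) matching function, as described below):

X = \frac{100}{K} \sum_{\lambda} R(\lambda)\, E(\lambda)\, x(\lambda)

Y = \frac{100}{K} \sum_{\lambda} R(\lambda)\, E(\lambda)\, y(\lambda)

Z = \frac{100}{K} \sum_{\lambda} R(\lambda)\, E(\lambda)\, z(\lambda)

K = \sum_{\lambda} E(\lambda)\, y(\lambda)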

λ is wavelength

Σ is a summation from λ=380nm to λ=760nm at 5nm increments.

R(λ) is the reflectance of an object at λ, as a percent 0–1.

E(λ) is the energy of the illuminant at λ, as radiance (W·m⁻²·sr⁻¹).

x(λ), y(λ), z(λ) are the XYZ color matching functions.

K is the radiance of a hypothetical perfectly reflecting object in the scene, calculated by assuming a reflectance of 100%. K is critical: the X, Y, and Z values are divided by K to normalize them from an absolute unit like radiance down to a ratio of 0–1.

Multiplying XYZ by 100 to bring it into the 0–100 range is just a convention.

The entire spectral curve present in each hyperpixel is integrated against each x(λ), y(λ) and z(λ) color matching function to produce three 0–100 values, X,Y,Z.

You might be wondering how the E(λ) and R(λ) in our equation relate to our hyperspectral image pixels. The answer is that when you measure radiance with a camera you are measuring light (E) reflecting off objects (R). They are already combined: each pixel(λ) == E(λ)*R(λ).

If you’re like me, this might be easier to read in python.
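A minimal sketch of that calculation, assuming the color matching functions have been resampled to the hyperpixels’ 31 wavelengths and loaded into NumPy arrays (the names below are illustrative, not from the original code):

```python
import numpy as np

def pixel_to_XYZ(pixel, illuminant_E, cmf_x, cmf_y, cmf_z):
    """Convert one 31-band hyperpixel (radiance = E(lambda) * R(lambda)
    per band) to an XYZ triple in the 0-100 range.

    illuminant_E is E(lambda) measured off the white card; cmf_x, cmf_y,
    cmf_z are the XYZ color matching functions sampled at the same
    wavelengths as the hyperpixel.
    """
    # K: response of a hypothetical 100% reflective object, used to
    # normalize the absolute radiance values down to the 0-100 XYZ range.
    K = np.sum(illuminant_E * cmf_y)

    X = 100.0 / K * np.sum(pixel * cmf_x)
    Y = 100.0 / K * np.sum(pixel * cmf_y)
    Z = 100.0 / K * np.sum(pixel * cmf_z)
    return np.array([X, Y, Z])
```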

Here’s what one hyperspectral pixel looks like one more time for good measure.

Determining the Illuminant

One complication is that the K equation calls for the illuminant power, E(λ), on its own. We just stated that our pixels already contain the product E(λ)*R(λ), so we need a way to recover E(λ) by itself. In a non-calibrated setting this would be hard to determine, but luckily the BearFruitGray image contains a white diffuser of known reflectance.

The white card marked by R is Munsell N 9.5/ paper. The paper is not perfectly reflective but if you multiply one of the hyperpixels located on this card by the constant 1.12 it will produce the illuminant spectrum as if you had measured the light source directly. This is your E(λ). You may want to average many pixels from the R card to reduce noise.
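As a sketch, assuming `card_pixels` is an (n, 31) NumPy array of hyperpixels sampled from the R card:

```python
# Average many hyperpixels from the white card to reduce noise, then
# scale by 1.12 to compensate for the card not being a perfect reflector.
illuminant_E = card_pixels.mean(axis=0) * 1.12
```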

There are four versions of the image, each taken under a different illuminant. The different illuminant spectra can be seen here as measured off the R card of each image.

At this point there’s enough info to write a program to loop over every hyperpixel in one of the images and apply the XYZ equations, resulting in a 2000 x 2000 array of XYZ values. As of right now this is a non-trivial exercise for the reader, but I do plan on open sourcing some of my code eventually.
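A brute-force version of that loop, reusing the hypothetical pixel_to_XYZ, illuminant_E, and CMF arrays from the sketches above (cube being the 2000 × 2000 × 31 hyperspectral array), might look like this:

```python
# Produce a 2000 x 2000 x 3 array of XYZ values from the hyperspectral cube.
xyz_image = np.zeros(cube.shape[:2] + (3,))
for row in range(cube.shape[0]):
    for col in range(cube.shape[1]):
        xyz_image[row, col] = pixel_to_XYZ(cube[row, col], illuminant_E,
                                           cmf_x, cmf_y, cmf_z)
```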

sRGB

A curious thing about the documentation for BearFruitGray is that they mention needing to know the RGB phosphors of your CRT monitor in order to render it in color. It took me a second to realize that when this dataset was created 20 years ago the sRGB color space had just been proposed. Each rendered image had to be targeted to a specific display for accurate color reproduction.

How do we render our image if we have no way of knowing the lighting conditions, and therefore the chromatic adaptation state, of the people who will view our image? Luckily you don’t have to call up everyone you want to show this image to and ask them about their display primaries.

The answer is that their color display devices or color printers are going to apply their own chromatic adaptation models (or potentially more advanced models) to make the colors appear correctly for the conditions they are configured for. Our job is to give those devices information about our source illuminant so that they can properly transform our colors.

We can’t just hand someone our XYZs, we have to tell them what illuminant they are relative to.

For color images on the web this is most commonly accomplished with the sRGB color space. Something I didn’t know about sRGB for the longest time is that it is a hard requirement that sRGB colors be relative to illuminant D65, a CIE standard illuminant that approximates daylight.

This requirement means that even though an sRGB image file doesn’t have any illuminant information, you always know it’s D65. A device hoping to display an sRGB color doesn’t need to know about our BearFruitGray illuminant because we already did the work to normalize our colors to D65, now that’s the only source illuminant they care about.

The pink steps are what we need to do; the black steps are what a viewer’s display device needs to do. sRGB, with its fixed illuminant, is the common language between those steps.

In ICC color management terms, sRGB is filling the role of a “Profile Connection Space”.

The specific math behind the different chromatic adaptation models is something I haven’t looked into enough to write about and is probably deserving of its own post. So I’ll just say that the most popular model seems to be the Von Kries Transform. An implementation of Von Kries is available in the colour-science python library.

After all this talk, here are the technical steps to convert our pile of XYZs to sRGB:

  • Scaling of our Y = 100 XYZs down to Y = 1, as called for by the sRGB spec.
  • Chromatic adaptation of the XYZ values from the BearFruitGray illuminant to D65 using the Von Kries Transform.
  • Linear transformation of the adapted XYZ to sRGB.
  • The sRGB “gamma” function applied to make luminosity non-linear (gamma deserves a detailed explanation of its own).
  • Upscaling of our 0–1 sRGB values to 0–255 for 8 bit representation.

Here’s a pseudo implementation in Python:
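A minimal sketch of those steps, reusing the hypothetical pixel_to_XYZ and illuminant_E from earlier and assuming the colour-science library for the Von Kries step; the matrix and white point values are the published sRGB and D65 constants:

```python
import numpy as np
import colour  # the colour-science package

# White point of the scene: XYZ of a perfect reflector under the scene
# illuminant, scaled so Y = 1. D65 is the white point required by sRGB.
XYZ_w_scene = pixel_to_XYZ(illuminant_E, illuminant_E,
                           cmf_x, cmf_y, cmf_z) / 100.0
XYZ_w_d65 = np.array([0.95047, 1.0, 1.08883])

# Linear transformation from D65-relative XYZ (Y = 1) to linear sRGB.
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def srgb_gamma(c):
    """The sRGB 'gamma' (encoding) function, applied per channel."""
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1 / 2.4) - 0.055)

def XYZ_to_sRGB8(XYZ):
    """One XYZ triple (0-100, relative to the scene illuminant) -> 8 bit sRGB."""
    xyz = np.asarray(XYZ) / 100.0                  # scale Y = 100 down to Y = 1
    xyz_d65 = colour.adaptation.chromatic_adaptation_VonKries(
        xyz, XYZ_w_scene, XYZ_w_d65, transform='Von Kries')
    rgb_linear = np.clip(XYZ_TO_SRGB @ xyz_d65, 0.0, 1.0)
    return np.round(srgb_gamma(rgb_linear) * 255).astype(np.uint8)
```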

Results

Using the steps outlined in the Colorimetric Equations and sRGB sections, I rendered three of the test images and was pretty excited to find that the results look plausibly accurate.

Since the images contain calibration targets, we can compare the patches in our renderings against the known values of a Macbeth Color Checker (MCC), the 24 patch board in the scene. The MCC patch XYZ values are only known under the D50 illuminant, so for this comparison I’ve taken a rendered sRGB image and converted it into XYZs relative to D50 using the Von Kries model.
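A sketch of that comparison step, building on the constants from the previous block (the D50 white point is the standard published value):

```python
def srgb8_to_XYZ_d50(rgb8):
    """Decode an 8 bit sRGB patch back to XYZ relative to D50 for
    comparison against the published Macbeth chart values."""
    rgb = np.asarray(rgb8) / 255.0
    # Invert the sRGB gamma function to get back to linear light.
    rgb_linear = np.where(rgb <= 0.04045,
                          rgb / 12.92,
                          np.power((rgb + 0.055) / 1.055, 2.4))
    xyz_d65 = np.linalg.inv(XYZ_TO_SRGB) @ rgb_linear
    # Re-adapt from sRGB's D65 white point to D50.
    XYZ_w_d50 = np.array([0.96422, 1.0, 0.82521])
    return colour.adaptation.chromatic_adaptation_VonKries(
        xyz_d65, XYZ_w_d65, XYZ_w_d50, transform='Von Kries')
```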

Seen here are the chromaticity coordinates of one of our rendered images’ color checkers (white circles) and the known color checker values (gray triangles), plotted on the 1976 Uniform Color Space chromaticity diagram.

This diagram is useful because it is somewhat perceptually uniform, so the distance between the markers is a consistent measure of perceptual color difference across the entire chart.

I was curious how impactful the chromatic adaptation step was, so I rendered three of the test images with and without it.

Top row: three different hyperspectral images of the same scene, with different illuminants (named blue, yellow, red), accurately rendered into sRGB with chromatic adaptation.

Bottom row: the same 3 images incorrectly converted to sRGB without the required chromatic adaptation to D65.

Of the three illuminants rendered, the blue light seems to have been fairly close to D65, so the chromatic adaptation step did not drastically improve the color reproduction. With yellow and red, though, it’s clear that without the chromatic adaptation the colors are completely wrong.

When plotted in the UCS diagram, the color difference between the adapted and unadapted red images is equally dramatic.

Left: adapted red image, Right: unadapted red image

Using calibrated input for development is important because real world hyperspectral images lack targets of known reflectance, making it very difficult to assess the quality of your rendering.

The close alignment of my rendered color checker patches with the known color checker values seems to generally validate the approach I’ve taken. This is good enough for right now. I now have a framework to experiment with different color appearance models to see if I can reduce the rendered vs known color checker differences.

Resources

Shout out to David Brainard and his students for the hyperspectral dataset and associated research.

If you want to play with hyperspectral imagery more, a treasure trove of test image datasets to play with, both color and hyperspectral, can be found at: http://visionscience.com/vsImages.html

