Color Theory and Image Manipulation

Nikhil Singh
The Good Food Economy
8 min read · Jan 13, 2023


Introduction

With the advancement of cameras and the internet, the number of photos taken and shared has increased significantly. Mobile cameras have also come a long way in low-light photography despite their fixed-aperture lenses. But certain questions come to mind: Why do images taken from a camera look the way they do? Why are there different colors in an image? Let’s find out.

Color Theory

Before getting into “How are images made to look the way they look?”, let’s first look into the basics of “Why do the colors in an image look the way they do?”

Color theory was originally formulated in terms of three primary colors — red, yellow, and blue (RYB) — because these colors were believed capable of mixing all other colors.

In the late 19th century, German and English scientists established that color perception is best described in terms of a different set of primary colors: red, green, and blue-violet (RGB), modeled through the additive mixture of three monochromatic lights. Subsequent research traced these primary colors to the differing responses to light of the three types of color receptors, or cones, in the retina.

Color theory is the body of practical guidance on color mixing and the visual effects of specific color combinations. Color terminology based on the color wheel and its geometry separates colors into primary, secondary, and tertiary colors.

  1. Primary colors — Red, yellow and blue
    In traditional color theory (used in paint and pigments), primary colors are the 3 pigment colors that cannot be mixed or formed by any combination of other colors. All other colors are derived from these 3 hues.
  2. Secondary colors — Green, orange, and purple
    These are the colors formed by mixing the primary colors.
  3. Tertiary colors — Yellow-orange, red-orange, red-purple, blue-purple, blue-green & yellow-green
    These are the colors formed by mixing a primary and a secondary color. That’s why the hue is a two-word name, such as blue-green, red-violet, and yellow-orange.
Fig 1: Color Wheel source

How are colors perceived by our eyes?

Fig 2: Human eye source

Humans have three kinds of color receptor cells — or “cones” — in their eyes. Each type of cone contains a different visual pigment. These three cone types are called red, green, and blue.

All hues can be produced by mixing red, green, and blue light. This is how a color television set works; a mixture of these three wavelengths of color produces several million visible colors.
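This additive mixing can be sketched in a few lines of NumPy. The snippet below is a toy illustration (the arrays simply stand in for light intensities; none of the names come from a real graphics API):

```python
import numpy as np

# Each light source contributes its (R, G, B) intensities; overlapping
# lights simply add, clipped to the displayable range [0, 255].
red = np.array([255, 0, 0])
green = np.array([0, 255, 0])
blue = np.array([0, 0, 255])

yellow = np.clip(red + green, 0, 255)        # red + green light -> yellow
white = np.clip(red + green + blue, 0, 255)  # all three together -> white
```

Mixing just two primaries gives the secondary colors of light (yellow, cyan, magenta), and all three at full intensity give white, exactly as on a television screen.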

Color Classification

  1. Warm and cool colors — Warm colors are associated with daylight or sunset, while cool colors are associated with a gray or overcast day. Warm colors are often said to be the hues from red through yellow, including browns; cool colors are the hues from blue-green through blue-violet, including most grays.
  2. Subtractive and additive colors — The additive color model describes how light produces color. The additive primaries are red, green, and blue (RGB). Additive color starts with black and adds red, green, and blue light to produce the visible spectrum of colors; as more light is added, the result gets lighter, and when all three colors are combined equally, the result is white light. In the subtractive color model, pigment produces color from reflected light. Subtractive color begins with white (paper) and ends with black; as more pigment is added, the result gets darker. Printers use cyan, magenta, and yellow inks in various percentages to control the amount of red, green, and blue light reflected from white paper.
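As a rough numeric sketch (ignoring the black key ink and real-world ink behavior), each subtractive primary is the complement of one additive primary, so an idealized RGB-to-CMY conversion is just a subtraction from 255. The function below is illustrative, not a production color conversion:

```python
import numpy as np

def rgb_to_cmy(rgb):
    # Idealized model: C = 255 - R, M = 255 - G, Y = 255 - B,
    # i.e. each ink absorbs exactly one additive primary.
    return 255 - np.asarray(rgb)

cmy_red = rgb_to_cmy([255, 0, 0])  # pure red -> no cyan, full magenta and yellow
```

This is why printed red is made with magenta and yellow inks: together they absorb green and blue light, leaving only red to reflect.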

Color spaces

A color space is a specific organization of colors; in combination with color profiling, it allows reproducible color representations across physical devices. There are many different color spaces used in computers, printers, televisions, etc. Some of them are mentioned below:

  1. RGB (Red, Green, Blue) — It uses additive color mixing because it describes what kind of light needs to be emitted to produce a given color. RGB stores individual values for red, green, and blue. Common color spaces based on the RGB model include sRGB, Adobe RGB, etc.
  2. CMYK (Cyan, Magenta, Yellow, Key) — It uses the subtractive color mixing of the printing process because it describes what kinds of inks need to be applied so that the light reflected from the substrate and through the inks produces a given color. There are many CMYK color spaces for different sets of inks, substrates, and press characteristics, which change the dot gain or transfer function for each ink and thus change the appearance.
  3. YPbPr (luma, blue-difference chroma, red-difference chroma) — It is a scaled version of YUV. It is most commonly seen in its digital form, YCbCr, which is used widely in video and image compression schemes such as MPEG and JPEG.
  4. HSV (Hue, Saturation, Value) — It is also known as HSB (hue, saturation, brightness) and is often used by artists because it is more natural to think about a color in terms of hue and saturation than in terms of additive or subtractive color components.
  5. HSL (Hue, Saturation, Lightness/Luminance) — It is also known as HLS or HSI (hue, saturation, intensity) and is quite similar to HSV, with “lightness” replacing “brightness”.

Now that we know why a color looks the way it does, let’s use this information to do some image manipulation. But before that, let’s look into some of the terms related to images and manipulate them along the way.

Let’s rock now!

Brightness and Contrast adjustment

Two commonly used point processes are multiplication and addition with a constant:

g(x) = a*f(x)+b

Here a is the gain, which controls contrast, and b is the bias, which controls brightness.

import cv2

def brightness_contrast(img, alpha, beta):
    # Applies g(x) = alpha * f(x) + beta and clips the result to [0, 255]
    new_image = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    return new_image
Original Image
a = 1.5 , b = 50
a = 1.7 , b = 20

Here we have performed a linear transformation to adjust the brightness and contrast. Now let’s try a non-linear transform on the image.

Gamma transform

Gamma transform or gamma correction can be used to correct the brightness of an image by using a non-linear transformation between the input values and the mapped output values:

O = (I/255)^γ × 255

Fig 3: Different values of gamma source

When γ<1, the original dark regions will be brighter and the histogram will be shifted to the right whereas it will be the opposite with γ>1.

import numpy as np

def adjust_gamma(image, gamma=1.0):
    # Build a lookup table mapping each input I to O = (I / 255)^gamma * 255
    table = np.array([((i / 255.0) ** gamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    return cv2.LUT(image, table)
Original Image
γ = 0.2
γ = 0.5
γ = 2.0

Cool and Warm filters

To obtain warm images, we increase the values of the red channel and decrease the values of the blue channel for all pixels in the image. To obtain cold images, we do the opposite: increase the values of the blue channel and decrease the values of the red channel. The green channel remains untouched in both cases.

import cv2
import numpy as np
from scipy.interpolate import UnivariateSpline

def spreadLookupTable(x, y):
    # Fit a smooth curve through the anchor points and sample it at 0..255
    spline = UnivariateSpline(x, y)
    return np.clip(spline(range(256)), 0, 255)

def warmimage(image):
    # Note: assumes RGB channel order; cv2.imread returns BGR, so convert first
    increaseLookupTable = spreadLookupTable([0, 64, 128, 256], [0, 80, 160, 256])
    decreaseLookupTable = spreadLookupTable([0, 64, 128, 256], [0, 50, 100, 256])
    red_channel, green_channel, blue_channel = cv2.split(image)
    red_channel = cv2.LUT(red_channel, increaseLookupTable).astype(np.uint8)
    blue_channel = cv2.LUT(blue_channel, decreaseLookupTable).astype(np.uint8)
    return cv2.merge((red_channel, green_channel, blue_channel))

def coldimage(image):
    # Same lookup tables as warmimage, applied to the opposite channels
    increaseLookupTable = spreadLookupTable([0, 64, 128, 256], [0, 80, 160, 256])
    decreaseLookupTable = spreadLookupTable([0, 64, 128, 256], [0, 50, 100, 256])
    red_channel, green_channel, blue_channel = cv2.split(image)
    red_channel = cv2.LUT(red_channel, decreaseLookupTable).astype(np.uint8)
    blue_channel = cv2.LUT(blue_channel, increaseLookupTable).astype(np.uint8)
    return cv2.merge((red_channel, green_channel, blue_channel))
Original Image
Cool Image
Warm Image

Emboss Effect

The emboss effect is similar to edge extraction but with a 3D look. Embossing is traditionally used to make something look more three-dimensional by adding highlights and shadows to different parts of the image.

def emboss(image):
    # A directional kernel: positive weights on one side of the diagonal,
    # negative on the other, which produces highlights and shadows
    kernel = np.array([[0, -1, -1],
                       [1,  0, -1],
                       [1,  1,  0]])
    return cv2.filter2D(image, -1, kernel)
Original Image
Image with Emboss Effect

Negative Effect

To implement a negative effect, all we have to do is invert the pixel values. This can be done by subtracting each pixel value from 255. In Python, we can use the cv2.bitwise_not() function for this purpose.

def invert(img):
    inv = cv2.bitwise_not(img)
    return inv
Original Image
Negative Effect

Pencil Sketch Effect

Now, this is the effect in which we want the edges to be detected properly and, at the same time, we want the shading to look like it was done with a pencil. This requires the following steps:

  1. Convert the image to grayscale
  2. Invert(Negative) the image
  3. Blur the inverted image
  4. Invert the blurred image
  5. Perform bit-wise division between the grayscale image and the inverted-blurred image.
def sketch(photo, k_size):
    grey_img = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    invert_img = cv2.bitwise_not(grey_img)
    blur_img = cv2.GaussianBlur(invert_img, (k_size, k_size), 0)
    invblur_img = cv2.bitwise_not(blur_img)
    # Dividing by the inverted blur pushes flat regions toward white
    # while keeping strong edges dark, giving the pencil-stroke look
    sketch_img = cv2.divide(grey_img, invblur_img, scale=256.0)
    return sketch_img
Original Image
Pencil Sketch Effect

We can create many more effects using these basic image operations. Here I was able to create a few with just some simple tricks on pixel values. It’s fun to experiment with images; to explore more, check the references.

References

  1. https://www.colormatters.com/color-and-design/basic-color-theory
  2. https://en.wikipedia.org/wiki/Color_theory
  3. https://towardsdatascience.com/python-opencv-building-instagram-like-image-filters-5c482c1c5079
  4. https://opencv.org
