Image Manipulation Techniques (C++)

Brightness factor of 0
Brightness factor of 0.5
Brightness factor of 1.0 (Original Picture)
Brightness factor of 1.5
Brightness factor of 2.0

Changing brightness is achieved fairly simply by multiplying each pixel’s RGB channels by the brightness factor.
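A minimal per-channel sketch of this, assuming 8-bit channels and clamping back into range (the function name here is my own, not from the base code):

```cpp
#include <algorithm>

// Scale one 8-bit intensity by the brightness factor and clamp to [0, 255].
unsigned char adjustBrightness(unsigned char channel, double factor) {
    double v = channel * factor;
    return static_cast<unsigned char>(std::clamp(v, 0.0, 255.0));
}
```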

Contrast factor of -0.5
Contrast factor of 0
Contrast factor of 0.5
Contrast factor of 1.0 (Original)
Contrast factor of 1.5

Changing contrast is done with the linear interpolation formula, interpolating/extrapolating each pixel against the picture’s average luminance.
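A sketch of that interpolation per channel, assuming the whole-image average luminance has already been computed elsewhere. A factor of 0 collapses every pixel to the average gray; 1 reproduces the original; values above 1 extrapolate.

```cpp
#include <algorithm>

// lerp(avg, channel, factor): move each channel away from the image's
// average luminance by the contrast factor, then clamp to [0, 255].
unsigned char adjustContrast(unsigned char channel, double avgLum, double factor) {
    double v = avgLum + factor * (channel - avgLum);
    return static_cast<unsigned char>(std::clamp(v, 0.0, 255.0));
}
```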

Saturation of -1.0
Saturation of 0.0
Saturation of 0.5
Saturation of 1.0
Saturation of 2.0

Calculating saturation changes is very similar to changing contrast, except here we’re not interpolating with the average-luminance gray; instead, each pixel’s own luminance is used to compute that pixel’s new color (again using the interpolation formula).
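A sketch of the per-pixel version. The luminance coefficients below are the common Rec. 601 weights; the base code may use different ones, and the `Pixel` struct is just a stand-in for its image class.

```cpp
#include <algorithm>

struct Pixel { double r, g, b; };   // channels in [0, 255]

// Interpolate each channel against this pixel's own luminance:
// factor 0 gives grayscale, 1 the original, >1 oversaturates.
Pixel adjustSaturation(Pixel p, double factor) {
    double lum = 0.299 * p.r + 0.587 * p.g + 0.114 * p.b;
    auto lerp = [&](double c) {
        return std::clamp(lum + factor * (c - lum), 0.0, 255.0);
    };
    return { lerp(p.r), lerp(p.g), lerp(p.b) };
}
```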

Gamma factor of 0.1
Gamma factor of 0.5
Gamma factor of 2.0

Gamma is applied by raising each pixel’s color to the power of 1/gamma. This formula requires the RGB intensities to be scaled to values between 0 and 1, so I do this and then scale back, since the base code’s set-pixel-color functions use RGB values from 0 to 255.
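The normalize–power–rescale step per channel might look like this (function name is illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Normalize the 8-bit channel to [0, 1], apply the 1/gamma power curve,
// then scale back to [0, 255].
unsigned char adjustGamma(unsigned char channel, double gamma) {
    double v = std::pow(channel / 255.0, 1.0 / gamma);
    return static_cast<unsigned char>(std::clamp(v * 255.0, 0.0, 255.0));
}
```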

Base picture
Cropped picture of size 50x50 starting at pixel (70,40)

Cropping is done by transferring the selected region of pixels to a new image object. This is done by looping through the pixels of the blank result image and adjusting indices to match the correct location of the original.
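A sketch of that loop, using a flat vector with one grayscale channel as a stand-in for the actual image class:

```cpp
#include <vector>

// Minimal stand-in for the base code's image object.
struct Image {
    int w, h;
    std::vector<unsigned char> px;
    unsigned char& at(int x, int y) { return px[y * w + x]; }
};

// Loop over the blank result image, offsetting indices into the source
// so each destination pixel pulls from the selected region.
Image crop(Image& src, int x0, int y0, int w, int h) {
    Image out{ w, h, std::vector<unsigned char>(w * h) };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out.at(x, y) = src.at(x + x0, y + y0);
    return out;
}
```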

Quantization with 1 bit
Quantization with 2 bits
Quantization with 3 bits
Quantization with 4 bits
Random Dithering (1 bit)
Random Dithering (2 bits)
Random Dithering (3 bits)
Random Dithering (4 bits)
Floyd-Steinberg Dither (1 bit)
Floyd-Steinberg Dither (2 bits)
Floyd-Steinberg Dither (3 bits)
Floyd-Steinberg Dither (4 bits)

All three quantization methods were implemented based on the writeup. Standard quantization uses a step function to split the RGB values (0–255) into 2^nbits levels. This is clearly seen with nbits = 1, where each pixel is either very bright or very dark, with no variation in between.
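One common way to write that step function (a sketch, not necessarily the writeup’s exact formula): bucket the channel into a level index, then stretch the index back across the full 0–255 range.

```cpp
#include <algorithm>

// Uniform quantization of one 8-bit channel into 2^nbits levels.
unsigned char quantize(unsigned char channel, int nbits) {
    int levels = 1 << nbits;
    int idx = std::min(channel * levels / 256, levels - 1);   // step function
    return static_cast<unsigned char>(idx * 255 / (levels - 1));
}
```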

Random dithering tweaks this process by adding noise to the picture. This is done by introducing a random value to each RGB channel before it’s quantized. In this case, I used a pseudo-random integer between 0 and 100 that is then scaled to a value between -0.5 and 0.5.
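A sketch of that, scaling the noise by one quantization step before the step function runs; the `rand() % 101 → [-0.5, 0.5]` mapping mirrors the description above, though the original scaling may differ slightly.

```cpp
#include <algorithm>
#include <cstdlib>

// Jitter the channel by up to half a quantization step, then quantize.
unsigned char randomDither(unsigned char channel, int nbits) {
    int levels = 1 << nbits;
    double step = 255.0 / (levels - 1);
    double noise = (std::rand() % 101) / 100.0 - 0.5;   // in [-0.5, 0.5]
    double v = std::clamp(channel + noise * step, 0.0, 255.0);
    int idx = std::min(static_cast<int>(v * levels / 256.0), levels - 1);
    return static_cast<unsigned char>(idx * 255 / (levels - 1));
}
```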

Floyd-Steinberg takes the difference, or error, between the quantized pixel and the original pixel and distributes it to neighboring pixels using the defined filter. I treated the image as a torus (wrapping top to bottom and left to right), so the edges of the images above, especially in the 1-bit case, are a bit off.
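A sketch of the error-diffusion pass on one grayscale channel, with the torus wrapping done via modular indices. Note how the wrap pushes error back onto the already-quantized first row and column, which is the edge artifact mentioned above.

```cpp
#include <algorithm>
#include <vector>

// In-place Floyd-Steinberg dithering with the standard 7/3/5/1 weights.
void floydSteinberg(std::vector<double>& img, int w, int h, int nbits) {
    int levels = 1 << nbits;
    auto quant = [&](double v) {
        int idx = std::clamp(static_cast<int>(v * levels / 256.0), 0, levels - 1);
        return idx * 255.0 / (levels - 1);
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double old = img[y * w + x];
            double q = quant(old);
            img[y * w + x] = q;
            double err = old - q;
            int xm = (x - 1 + w) % w, xp = (x + 1) % w, yp = (y + 1) % h;
            img[y  * w + xp] += err * 7 / 16;   // right
            img[yp * w + xm] += err * 3 / 16;   // below-left
            img[yp * w + x ] += err * 5 / 16;   // below
            img[yp * w + xp] += err * 1 / 16;   // below-right
        }
}
```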

Blur factor of 3
Blur factor of 7
Blur factor of 13

Blur was implemented using the default, non-optimized method. I did not convert my filter to integer arithmetic here (though I did for later methods) because it seemed simpler to just sum the new color intensity in the same loop as the weight calculations.

For the edge cases here, it seemed like unnecessary work to consider the image as a torus, so I used min and max to bound the index within the image. To compensate for normalization, the weights of those out-of-bound pixels are essentially added to the weight of the center pixel.
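A sketch of that blur on one grayscale channel, with a Gaussian weight (the sigma formula below is my illustrative choice, not necessarily the assignment’s) and min/max index clamping at the edges; the clamped samples re-use the nearest in-bounds pixel, which is what folds the out-of-bounds weight back into the normalization.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Non-optimized n x n Gaussian blur: weights and weighted sum are
// accumulated in the same loop, then normalized at the end.
std::vector<double> blur(const std::vector<double>& img, int w, int h, int n) {
    int r = n / 2;
    double sigma = n / 6.0 + 0.5;   // hypothetical sigma choice
    std::vector<double> out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sum = 0, wsum = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    double wt = std::exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
                    int sx = std::clamp(x + dx, 0, w - 1);   // min/max bound
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += wt * img[sy * w + sx];
                    wsum += wt;
                }
            out[y * w + x] = sum / wsum;
        }
    return out;
}
```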


Sharpen was nearly identical to blur, except n is fixed at 3 and the weights are hard-set to the predefined filter.
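A sketch with a standard 3x3 sharpen kernel standing in for the writeup’s predefined filter (the actual weights may differ); edges clamp the index the same way blur does.

```cpp
#include <algorithm>
#include <vector>

// 3x3 sharpen: the kernel sums to 1, so flat regions are unchanged
// while local contrast around edges is boosted.
std::vector<double> sharpen(const std::vector<double>& img, int w, int h) {
    const int k[3][3] = { {  0, -1,  0 },
                          { -1,  5, -1 },
                          {  0, -1,  0 } };
    std::vector<double> out(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sum = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += k[dy + 1][dx + 1] * img[sy * w + sx];
                }
            out[y * w + x] = std::clamp(sum, 0.0, 255.0);
        }
    return out;
}
```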

Edge detection with a threshold of 150

Likewise, edge detection was done using the same base algorithm, with n also set to 3. Here though, I used a separate Image object to store the gradient, which is summed using each pixel’s luminance and the given equation and weights.
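A sketch assuming Sobel kernels as the “given weights” (the writeup’s actual weights may differ), run on a luminance image: pixels whose gradient magnitude exceeds the threshold become white, everything else black.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Threshold the Sobel gradient magnitude of a luminance image.
std::vector<unsigned char> detectEdges(const std::vector<double>& lum,
                                       int w, int h, double threshold) {
    const int gx[3][3] = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    const int gy[3][3] = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
    std::vector<unsigned char> out(w * h, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sx = 0, sy = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int ix = std::clamp(x + dx, 0, w - 1);
                    int iy = std::clamp(y + dy, 0, h - 1);
                    sx += gx[dy + 1][dx + 1] * lum[iy * w + ix];
                    sy += gy[dy + 1][dx + 1] * lum[iy * w + ix];
                }
            if (std::sqrt(sx * sx + sy * sy) > threshold)
                out[y * w + x] = 255;
        }
    return out;
}
```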

Scale to size of 800x800 (Point filter)

Scale was implemented by sending the calculated u and v to the sample function. For the point filter above, this was done by simply converting the given u and v doubles to int and returning that pixel.
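A sketch of the point-filter path, with u and v computed from the destination pixel and the scale ratio, then truncated to int as described:

```cpp
#include <vector>

// Nearest-pixel (point filter) scaling of a single-channel image.
std::vector<unsigned char> scalePoint(const std::vector<unsigned char>& src,
                                      int sw, int sh, int dw, int dh) {
    std::vector<unsigned char> out(dw * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            int u = static_cast<int>(x * static_cast<double>(sw) / dw);
            int v = static_cast<int>(y * static_cast<double>(sh) / dh);
            out[y * dw + x] = src[v * sw + u];
        }
    return out;
}
```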

Hat function 800x800

The hat filter takes the floor and ceiling of the given u and v and computes a weight for each from the hat function. The u and v weights are then multiplied together at each pixel in the box.
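A sketch of one hat-filtered sample on a single-channel image: each of the four floor/ceiling neighbors gets weight 1 − |distance| per axis, and the u and v weights multiply (which reduces to bilinear interpolation).

```cpp
#include <cmath>
#include <vector>

// Tent/hat sample at continuous source coordinates (u, v).
double sampleHat(const std::vector<double>& src, int w, int h, double u, double v) {
    int x0 = static_cast<int>(std::floor(u)), x1 = std::min(x0 + 1, w - 1);
    int y0 = static_cast<int>(std::floor(v)), y1 = std::min(y0 + 1, h - 1);
    double fu = u - x0, fv = v - y0;   // distance from the floor neighbor
    return src[y0 * w + x0] * (1 - fu) * (1 - fv)
         + src[y0 * w + x1] * fu       * (1 - fv)
         + src[y1 * w + x0] * (1 - fu) * fv
         + src[y1 * w + x1] * fu       * fv;
}
```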

Mitchell 800x800
Mitchell 300x300

This is similar to hat, except separate weights are given to the 3x3 box around the center pixel and to the 5x5 ring outside the box.
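For reference, the usual Mitchell-Netravali weight function with B = C = 1/3 looks like this; it has a piecewise definition for the near and far neighbors, which corresponds to the inner-box/outer-ring split described above (the writeup’s exact weights may be tabulated differently).

```cpp
#include <cmath>

// Mitchell-Netravali cubic weight as a function of distance from the
// sample point; nonzero only within two pixels of the center.
double mitchellWeight(double x) {
    const double B = 1.0 / 3.0, C = 1.0 / 3.0;
    x = std::fabs(x);
    if (x < 1.0)
        return ((12 - 9 * B - 6 * C) * x * x * x
                + (-18 + 12 * B + 6 * C) * x * x
                + (6 - 2 * B)) / 6.0;
    if (x < 2.0)
        return ((-B - 6 * C) * x * x * x
                + (6 * B + 30 * C) * x * x
                + (-12 * B - 48 * C) * x
                + (8 * B + 24 * C)) / 6.0;
    return 0.0;
}
```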

The Mitchell and hat implementations are slightly off; the weights are all calculated, but I didn’t have time to add the farthest-off pixels in the box to the summation.