Image Manipulation Techniques (C++)
Changing brightness is achieved fairly simply by multiplying each pixel's RGB channels by the brightness factor.
Changing contrast is done by using the linear interpolation formula to interpolate/extrapolate each pixel relative to a constant gray image at the picture's average luminance.
Calculating saturation changes is very similar to changing contrast, except here we're not interpolating with a grayscale image; instead, each pixel is interpolated with its own luminance to calculate that pixel's new color (again using the interpolation formula).
Gamma is calculated by raising each pixel's color channels to the power of 1/gamma. This formula requires the RGB intensities to be scaled to values between 0 and 1, so I normalize, apply the power, and then scale back, since the base code's set-pixel-color functions use RGB values from 0 to 255.
Cropping is done by transferring the selected region of pixels to a new image object. This is done by looping through the pixels of the blank result image and adjusting indices to match the correct location of the original.
All three quantization methods were implemented based on the writeup. Standard quantization basically uses a step function to split the RGB values (0–255) into 2^nbits steps. This is clearly seen with nbits = 1, where each pixel is either very bright or very dark, with no variation in between.
Random dithering tweaks this process by adding noise to the picture. This is done by introducing a random value to each RGB channel before it’s quantized. In this case, I used a pseudo-random integer between 0 and 100 that is then scaled to a value between -0.5 and 0.5.
Floyd-Steinberg takes the difference, or error, between the quantized pixel and the base pixel and distributes it to neighboring pixels using the defined filter. For this case, I treated the image as a torus, wrapping top to bottom, left to right, and consequently the edges of the images above, especially in the 1-bit case, are a bit off.
Blur was implemented using the default, non-optimized method. I did not convert my filter to integer arithmetic here (though I did for later methods) because it seemed simpler to just sum the new color intensity in the same loop as the weight calculations.
For the edge cases here, it seemed like unnecessary work to consider the image as a torus, so I used min and max to bound the index within the image. To compensate for normalization, the weights of those out-of-bound pixels are essentially added to the weight of the center pixel.
Sharpen was nearly identical to blur, except n is fixed at 3 and the weights are hard-set to the predefined filter.
Likewise, edge detection was done using the same base algorithm, with n also set to 3. Here though, I used a separate Image object to store the gradient, which is summed using each pixel’s luminance and the given equation and weights.
Scale was implemented by sending the calculated u and v to the sample function. For the point filter above, this was done by simply converting the given u and v doubles to int and returning that pixel.
The hat filter was done by acquiring the floor and ceiling of the given u and v, and then calculating the weights of each based on the hat function. The weights for u and v are then multiplied at each pixel in the box.
The Mitchell filter is similar to hat, except here separate weights are given to the 3x3 box around the center pixel and the 5x5 ring outside of that box.
The Mitchell and hat implementations are slightly off: the weights are all calculated, but I didn't have time to add the farthest-off pixels in the box to the summation.