CSE 190: Image Manipulation


0.0 / 0.5 / 1.0
1.5 / 2.0

Brightness was implemented in the standard manner: linear interpolation between solid black (rgb(0, 0, 0)) and the image, controlled by a provided factor.
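
The operation above can be sketched per channel as a lerp toward black. This is a minimal sketch under my own naming and clamping assumptions, not the assignment's actual API:

```cpp
#include <algorithm>
#include <cstdint>

// Brightness as lerp(black, c, factor) = 0 * (1 - factor) + c * factor.
// Clamping to [0, 255] is an assumption for out-of-range factors.
uint8_t brighten_channel(uint8_t c, double factor) {
    double v = c * factor;
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}
```

A factor of 1.0 leaves the image unchanged, 0.0 yields solid black, and values above 1.0 extrapolate toward white (clamped).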


-1.0 / 0.0 / 0.5
1.0 / 2.0

Contrast was similarly carried out in a standard fashion, with linear interpolation between middle gray (defined in the sRGB color space as rgb(128, 128, 128)) and the image. Note that a negative factor leads to a photo negative.
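
The same lerp structure works here, just anchored at middle gray instead of black. A sketch under the same naming assumptions:

```cpp
#include <algorithm>
#include <cstdint>

// Contrast as lerp(gray, c, factor) = 128 * (1 - factor) + c * factor.
// A negative factor reflects each channel around 128, producing the
// photo-negative effect noted above.
uint8_t contrast_channel(uint8_t c, double factor) {
    double v = 128.0 * (1.0 - factor) + c * factor;
    return static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
}
```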


-1.0 / 0.0 / 0.5
1.0 / 2.0

Saturation is again standard, with linear interpolation between a grayscale version of the image (computed with a relative-luminance approximation) and the image itself. Notice how a negative factor leads to a color negative (overall luminance stays the same, as opposed to the photo negative produced by negative contrast).
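
A sketch of the idea, assuming Rec. 709 luminance coefficients (the writeup does not state which approximation it uses):

```cpp
struct RGB { double r, g, b; };

// Saturation as lerp(gray, pixel, factor), where gray is the pixel's
// relative luminance replicated across all three channels.
RGB saturate(RGB p, double factor) {
    // Rec. 709 relative-luminance approximation (assumed coefficients).
    double lum = 0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b;
    return { lum + (p.r - lum) * factor,
             lum + (p.g - lum) * factor,
             lum + (p.b - lum) * factor };
}
```

At factor −1.0 each channel reflects around the luminance, so the result's luminance is 2L − L = L: unchanged, matching the color-negative observation above.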


2.0 / 1.0 / 0.66
0.33 / 0.00

Gamma was carried out with the standard power-law mapping. Notice how the sample image contains pixels saturated enough to survive under zero gamma.
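
A per-channel sketch; whether the parameter is applied as γ or 1/γ is an assumption here, as the writeup doesn't specify:

```cpp
#include <cmath>
#include <cstdint>

// Gamma: normalize to [0, 1], apply the power curve, scale back.
uint8_t gamma_channel(uint8_t c, double g) {
    double v = std::pow(c / 255.0, g);
    return static_cast<uint8_t>(std::round(v * 255.0));
}
```

Note that 0 and 255 are fixed points for any positive gamma, which is why fully saturated pixels survive as gamma approaches zero.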


Crop of the eyes

Cropping works as you’d expect. One note on how I chose to handle crop zones that extend beyond the image: I clip the zone so it always fits within the image, which means resultant crops can be smaller than expected.
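
The clipping step can be sketched as a rectangle intersection (types and names assumed):

```cpp
#include <algorithm>

struct Rect { int x, y, w, h; };

// Clip a requested crop zone so it always lies inside a img_w × img_h
// image; the result may be smaller than requested, or empty.
Rect clip_to_image(Rect r, int img_w, int img_h) {
    int x0 = std::max(r.x, 0);
    int y0 = std::max(r.y, 0);
    int x1 = std::min(r.x + r.w, img_w);
    int y1 = std::min(r.y + r.h, img_h);
    return { x0, y0, std::max(x1 - x0, 0), std::max(y1 - y0, 0) };
}
```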


FS / Quantize / Random, 1 bin
FS / Quantize / Random, 2 bins
FS / Quantize / Random, 3 bins
FS / Quantize / Random, 4 bins
FS / Quantize / Random, 5 bins

Above is a progression of dithering across a single image. One important thing to note up front is that, due to image resizing in your browser, the artifacts of Floyd-Steinberg dithering may not be immediately apparent. Click on any image to view a larger version.

Quantize was done as expected, nothing special in its implementation. Random dithering used C++’s std::uniform_real_distribution<> to generate the noise.
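
A sketch of both steps. The noise amplitude of half a quantization step is my assumption; the writeup only names the distribution. This sketch also assumes at least two output levels:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <random>

// Plain quantization: snap to one of `levels` evenly spaced values.
uint8_t quantize(uint8_t c, int levels) {
    double step = 255.0 / (levels - 1);
    int idx = static_cast<int>(std::round(c / step));
    return static_cast<uint8_t>(std::round(idx * step));
}

// Random dithering: jitter by uniform noise before quantizing.
uint8_t random_dither(uint8_t c, int levels, std::mt19937& rng) {
    std::uniform_real_distribution<> noise(-0.5, 0.5);
    double step = 255.0 / (levels - 1);
    double v = std::clamp(c + noise(rng) * step, 0.0, 255.0);
    int idx = static_cast<int>(std::round(v / step));
    return static_cast<uint8_t>(std::round(idx * step));
}
```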

Floyd-Steinberg had one special consideration: how to handle kernel taps that extend past the edges of the image. Whenever outside access was necessary, I extended the edges of the image, similar to how GL_CLAMP_TO_EDGE behaves when sampling textures in OpenGL. Besides that, it was done in place with the standard scanning order.
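
A minimal single-channel sketch of the in-place scan. For simplicity this version drops error that would land outside the image, which is a common alternative to the edge-extension approach the writeup actually uses:

```cpp
#include <vector>

// In-place Floyd-Steinberg dithering on a single grayscale channel,
// quantizing to 1 bit and diffusing error with the standard 7/3/5/1
// weights. Off-image error is simply dropped in this sketch.
void fs_dither(std::vector<double>& img, int w, int h) {
    auto deposit = [&](int x, int y, double e) {
        if (x >= 0 && x < w && y >= 0 && y < h) img[y * w + x] += e;
    };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double old = img[y * w + x];
            double neu = old < 128.0 ? 0.0 : 255.0;  // 1-bit quantize
            img[y * w + x] = neu;
            double err = old - neu;
            deposit(x + 1, y,     err * 7.0 / 16.0);
            deposit(x - 1, y + 1, err * 3.0 / 16.0);
            deposit(x,     y + 1, err * 5.0 / 16.0);
            deposit(x + 1, y + 1, err * 1.0 / 16.0);
        }
}
```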


3 / 5 / 7
11 / 15 / 21

Above is the steady progression of a Gaussian blur across the image. For this, as well as all of the discrete convolution operations, I wrote a general-purpose Convolute routine that operates on a parameterized filter.

Again, to handle the issue of off-edge kernels (which was even more important here), I extended the edges of the image. To limit the loss of energy due to both approximation and integer rounding, I made sure to use destructive operations (such as normalization) as little as possible.
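
The general-purpose routine might look like the following sketch, with clamp-to-edge reads standing in for the edge extension (the name Convolute and all signatures here are assumptions):

```cpp
#include <algorithm>
#include <vector>

// General 2D convolution on a single channel. Out-of-bounds reads are
// clamped to the nearest edge pixel, i.e. the image's edges are extended.
std::vector<double> convolve(const std::vector<double>& img, int w, int h,
                             const std::vector<double>& k, int kw, int kh) {
    std::vector<double> out(img.size());
    int ox = kw / 2, oy = kh / 2;  // kernel center offset
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double acc = 0.0;
            for (int ky = 0; ky < kh; ++ky)
                for (int kx = 0; kx < kw; ++kx) {
                    int sx = std::clamp(x + kx - ox, 0, w - 1);
                    int sy = std::clamp(y + ky - oy, 0, h - 1);
                    acc += img[sy * w + sx] * k[ky * kw + kx];
                }
            out[y * w + x] = acc;
        }
    return out;
}
```

One nice property of edge extension: a normalized kernel applied to a constant image returns the same constant even at the borders, so no energy is lost at the edges.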

Note that due to the nature of this blogging platform, the above images may be upsampled to fit the available space, leading to some extra blurring not present in the original image. You should be able to right-click -> go to image to see the original.


A sharpened mandrill

Sharpening was carried out with the provided filter. No special considerations were made.

Edge Detection

Edge detection with a “threshold” of 128

Sobel edge detection was carried out on the RGB channels individually, not consolidating to a single luminance-based channel until the end. You may also notice that there isn’t a pure threshold filter applied to the result: it clips anything darker than the parameter to black, but anything above it is allowed to keep its luminance. This is purely for visual aesthetic.
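
The final two steps can be sketched as follows (function names assumed):

```cpp
#include <cmath>

// Combine horizontal and vertical Sobel responses into a gradient
// magnitude for one pixel.
double sobel_magnitude(double gx, double gy) {
    return std::sqrt(gx * gx + gy * gy);
}

// The "soft" threshold described above: values below the parameter go
// to black, values above it keep their luminance (not forced to white).
double threshold_clip(double lum, double threshold) {
    return lum < threshold ? 0.0 : lum;
}
```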


Hat / Mitchell / Nearest Neighbor
Hat / Mitchell / Nearest Neighbor
Hat / Mitchell / Nearest Neighbor
Hat / Mitchell / Nearest Neighbor

Above are examples of my scaling algorithm, with three different sampling filters. Again, make sure to view the original images for best results.

All scaling code was written to operate on the axes independently, so technically one could even apply a different sampling filter per axis (this was not done).


Hat / Mitchell / Nearest Neighbor non-integer shifts (40.75, -30.25)
Integer shift (no sampling necessary)

Above are the results of shifting with similar sampling filters. As the caption notes, no sampling is necessary for integer shifts, so special care is taken not to resample in that case.

Any pixels not present in the original image are forced to black (explaining the black borders above).

Again, shifting treats the axes independently, so there is allowance for per-axis sampling filters.


For my fun filter, I decided to try to emulate the appearance of dirty, demagnetized VHS tapes, which often results in visual warping as well as separation of the color channels.

To accomplish this I repurposed the shift filter from above and had it operate on a per-row basis. To decide the shift amount, I performed a random walk while descending the image, so that the warping direction would be relatively consistent from row to row (as it normally is). I also added a slight random separation of the RGB channels to strengthen the effect.
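
The random-walk offsets can be sketched like this; the step range is an assumed parameter, as the writeup doesn't state one:

```cpp
#include <random>
#include <vector>

// One shift offset per row, generated as a random walk so that
// consecutive rows warp in a consistent direction.
std::vector<int> row_offsets(int h, std::mt19937& rng) {
    std::uniform_int_distribution<int> step(-2, 2);  // assumed step range
    std::vector<int> off(h);
    int cur = 0;
    for (int y = 0; y < h; ++y) {
        cur += step(rng);  // drift slightly from the previous row
        off[y] = cur;
    }
    return off;
}
```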

Finally, unlike every other example, I handled sampling of outside pixels by treating the image as toroidal: this wrapping effect appears in actual VHS playback (as well as emulations of the effect), so it made sense to do it here.
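
Toroidal addressing amounts to a small index helper (name assumed) that wraps out-of-range coordinates to the opposite side, handling negative values correctly:

```cpp
// Wrap an x coordinate into [0, w), so pixels shifted off one side of a
// row reappear on the other. C++'s % can return negative values for
// negative operands, hence the correction.
int wrap(int x, int w) {
    int m = x % w;
    return m < 0 ? m + w : m;
}
```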

Strangely enough, applying the filter to the large checkerboard pattern on the right produced the common visual illusion of slanted rows, due to the skewing of the checkerboard. Again, only horizontal shifts are present (no pixels were moved along the Y axis).