Core Image: using filters

Ilana Concilio
Academy@EldoradoCPS
5 min read · Feb 4, 2020

In image processing, techniques are applied to modify or interpret still or video images; both the input and the output are images. Digitally, an image is nothing more than a “height x width” matrix formed by a set of pixels, the basic unit of an image. As stated in the previous post, Core Image is the framework that allows this kind of processing in near real time in iOS applications.

So, shall we learn a little more about Core Image?

First of all, don't forget to import Core Image into your project :)
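In Swift, that's a single line at the top of the file:

import CoreImage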

It is important to note that Core Image and UIKit use different coordinate systems. The origin of a UIView is in the upper-left corner, and its coordinates are expressed in its superview's space. In Core Image, each image has its origin in the lower-left corner.
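If you ever need to map between the two systems, a vertical flip is enough. Here's a minimal sketch (the function name and the imageHeight parameter are illustrative, not part of any API):

// Convert a rect from Core Image coordinates (origin at the bottom-left)
// to UIKit coordinates (origin at the top-left). imageHeight is the height
// of the image the rect was measured in.
func convertToUIKit(_ rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var converted = rect
    converted.origin.y = imageHeight - rect.origin.y - rect.height
    return converted
}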

All Core Image processing is done in a CIContext, an object that allocates the necessary memory, compiles and executes filters, renders the results, and performs image analysis. Creating your own CIContext gives you more precise control over the rendering process and the resources involved.

It is not always necessary to create your own CIContext object. The UIImage(ciImage:) initializer does all the work for you: it creates a CIContext and uses it to perform the image filtering. But this means a new CIContext is created every time it is used, and contexts are heavy objects that are expensive to create. It is therefore recommended to create a context as early as possible in your code and reuse it whenever necessary. Since CIContext and CIImage objects are immutable, several threads can safely use the same CIContext object to render CIImage objects (Apple, 2019).

let context = CIContext()
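One simple way to follow that recommendation is to hold the context in a long-lived object. A minimal sketch, with an illustrative FilterService name that is not part of any framework:

final class FilterService {
    static let shared = FilterService()
    // Created once, reused for every render; safe to share across threads.
    let context = CIContext()
}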

In image processing, we can apply algorithms, known as filters, to an input image to obtain a new image as output.

Core Image filters require input images of type CIImage. A CIImage instance is an immutable object that represents an image but does not directly contain its bitmap data. To apply a filter, create one or more CIImage objects representing the images to be processed and assign them to the filter's input parameters.
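A common way to obtain such an input is from an existing UIImage (here uiImage is an assumed variable):

// CIImage(image:) returns an optional, so unwrap it before filtering.
guard let inputImage = CIImage(image: uiImage) else { return }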

A CIFilter object represents the filter to be applied. You need to instantiate a filter object using the name of a filter known to the system.

let filterName = CIFilter(name: "FilterName")

After creating an instance of the filter, you need to provide values for its parameters. Most filters have one or more input parameters that allow you to control how processing is performed.

The filter parameters are defined as key-value pairs. The key is a constant that identifies the attribute and the value is the configuration associated with the key. For example:

filterName?.setValue(inputImage, forKey: kCIInputImageKey)
filterName?.setValue(0.9, forKey: kCIInputIntensityKey)

It’s also possible to read a parameter back using the value(forKey:) method.
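For example, to read the intensity back (the NSNumber cast is an assumption about the parameter's underlying type):

// value(forKey:) returns Any?, so cast to the type you expect.
let intensity = filterName?.value(forKey: kCIInputIntensityKey) as? NSNumber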

It is now necessary to create a CIImage object representing the filter’s output. It is important to note that the filter has not actually been executed at this point: the image object is a “recipe” that specifies how to create an image with the specified filter, parameters and input. Since a CIImage object describes how to produce an image (instead of containing image data), it can also represent a filter’s output. Filters have a property called outputImage of type CIImage, and when you access that property, Core Image only identifies and stores the steps necessary to run the filter. Those steps are performed only when you request that the image be rendered for display or output (Apple, 2016).

let finalImage = filterName?.outputImage
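Because the output is just a recipe, filters can also be chained cheaply with the applyingFilter(_:parameters:) convenience method; nothing is rendered until a context draws the final image. A sketch, assuming inputImage is a non-optional CIImage:

// Each call returns a new lazy CIImage; no pixels are processed yet.
let sepia = inputImage.applyingFilter("CISepiaTone",
                                      parameters: [kCIInputIntensityKey: 0.9])
let bloomed = sepia.applyingFilter("CIBloom",
                                   parameters: [kCIInputRadiusKey: 10.0])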

You can request rendering explicitly, using one of the CIContext render or draw methods, or implicitly, by displaying the image with one of the many system frameworks that work with Core Image, for example:

let cgOutputImage = context.createCGImage(finalImage!, from: inputImage!.extent)

Here we ask the CIContext to create a CGImage using the extent (the dimensions) of the input image. The reason we use the input image's extent is that the output image's extent often differs from the input's (for some filters it is even infinite), so rendering the input's extent guarantees an output with predictable dimensions.
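To actually display the result, you can wrap the CGImage in a UIImage (the imageView outlet here is an assumption):

if let cgOutputImage = cgOutputImage {
    imageView.image = UIImage(cgImage: cgOutputImage)
}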

Using the image below as input, see code examples and results of some filters:

Input Image

Sepia Filter (CISepiaTone)

let sepiaFilter = CIFilter(name: "CISepiaTone")
sepiaFilter?.setValue(inputImage, forKey: kCIInputImageKey)
sepiaFilter?.setValue(0.9, forKey: kCIInputIntensityKey)
let sepiaCIImage = sepiaFilter?.outputImage
let cgOutputImage = context.createCGImage(sepiaCIImage!, from: inputImage!.extent)
Output Image — Sepia Filter

Bloom Filter (CIBloom)

let bloomFilter = CIFilter(name: "CIBloom")
bloomFilter?.setValue(inputImage, forKey: kCIInputImageKey)
bloomFilter?.setValue(0.9, forKey: kCIInputIntensityKey)
bloomFilter?.setValue(10, forKey: kCIInputRadiusKey)
let bloomCIImage = bloomFilter?.outputImage
let cgOutputImage = context.createCGImage(bloomCIImage!, from: inputImage!.extent)
Output Image — Bloom Filter

Median Filter (CIMedianFilter)

let medianFilter = CIFilter(name: "CIMedianFilter")
medianFilter?.setValue(inputImage, forKey: kCIInputImageKey)
let medianCIImage = medianFilter?.outputImage
let cgOutputImage = context.createCGImage(medianCIImage!, from: inputImage!.extent)
Output Image — Median Filter

Color Invert Filter (CIColorInvert)

let invertFilter = CIFilter(name: "CIColorInvert")
invertFilter?.setValue(inputImage, forKey: kCIInputImageKey)
let invertCIImage = invertFilter?.outputImage
let cgOutputImage = context.createCGImage(invertCIImage!, from: inputImage!.extent)
Output Image — Color Invert Filter

Edges Filter (CIEdges)

let edgesFilter = CIFilter(name: "CIEdges")
edgesFilter?.setValue(inputImage, forKey: kCIInputImageKey)
edgesFilter?.setValue(0.9, forKey: kCIInputIntensityKey)
let edgesCIImage = edgesFilter?.outputImage
let cgOutputImage = context.createCGImage(edgesCIImage!, from: inputImage!.extent)

Grayscale Color Filter

There is no dedicated filter for converting an image to grayscale. However, it's possible to play with the color saturation value using CIColorControls: if you set this parameter to zero, your image is converted to grayscale.

let filter = CIFilter(name: "CIColorControls")
filter?.setValue(inputImage, forKey: kCIInputImageKey)
filter?.setValue(0.0, forKey: kCIInputSaturationKey)
let grayscaleCIImage = filter?.outputImage
let cgOutputImage = context.createCGImage(grayscaleCIImage!, from: inputImage!.extent)
Output Image — Grayscale Color Conversion

In addition to setting the saturation value to zero, it's possible to obtain a black-and-white image if you also increase the contrast value:

filter?.setValue(5.0, forKey: kCIInputContrastKey)

See other filters in the Apple Documentation.
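If you prefer to explore them at runtime, Core Image can also list every built-in filter it knows about:

// Prints the names of all built-in filters available on the system.
let allFilters = CIFilter.filterNames(inCategory: kCICategoryBuiltIn)
print(allFilters)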

References

Apple Inc. (2016). Core Image Programming Guide. Available at https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_intro/ci_intro.html#//apple_ref/doc/uid/TP30001185

Barelli, Felipe. (2018). Introdução à Visão Computacional. Casa do Código.

Webb, Christopher. (2017). Diving Into Core Image — Part One. Available at https://medium.com/journey-of-one-thousand-apps/diving-into-core-image-part-one-39f83f0ceb2f
