iOS: A haiku on image processing using Swift

vishnu J
GreedyGame Engineering
7 min read · Feb 20, 2019

At GreedyGame, we try to help Game Developers monetize without hampering the gameplay experience of the Gamer. We work towards creating a better ad ecosystem by bringing native ads to mobile games and creating Ads people ❤️.

In this blog, we would like to take you on a short journey through various fancy operations on images in iOS using the Core Graphics and Core Image frameworks.

What is Image Processing? Why is it important?

Images are everywhere. From a fancy Insta filter to a snappy Snapchat, or a wayward Google image search, image processing is everywhere.

It is a method of performing operations on an image in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image and the output may be another image or characteristics/features associated with that image.

Before we begin

iOS developers use a number of different programming interfaces to render graphics on the screen. UIKit and AppKit have various image, color and path classes. Core Animation lets you move layers of stuff around. SpriteKit lets you animate. AVFoundation lets you play video. And working with graphics at a low level can get a bit tricky.

But luckily for developers, Apple provides abstractions/interfaces for interacting with lower-level graphics processes. The Core Image and Core Graphics frameworks are two of these abstractions.

Core Image

Core Image is one of these abstractions, and it is a library you will run across frequently when coding in Apple’s ecosystem. From editing static images to using filters on live video content, Core Image is versatile enough to handle a variety of use cases.

Core Image is an image processing and analysis technology designed to provide near real-time processing for still and video images. It operates on image data types from the Core Graphics, Core Video, and Image I/O frameworks, using either a GPU or CPU rendering path.

Core Graphics

Core Graphics is a fairly big API, covering the gamut from basic geometric data structures (such as points, sizes, vectors, and rectangles) and the calls to manipulate them, through the machinery that renders pixels into images or onto the screen, all the way to event handling. You can use Core Graphics to create "event taps" that let you listen in on and manipulate the stream of events (mouse clicks, screen taps, random keyboard mashing) coming into the application.

Different Image types in iOS

Before getting into the workings of the above framework, here is a look at the different types of images in iOS:

  • UIImage — An object that manages image data in your app. It is the high-level way to display an image, for example in a UIImageView.
  • CIImage — A representation of an image to be processed or produced by Core Image filters. It holds the image data along with all the information needed to produce a new image.
  • CGImage — A bitmap representation of an image. Operations like cropping, masking, and applying filters with CIFilter return a CGImage/bitmap.
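To make the relationship between these types concrete, here is a minimal sketch of converting between them. The "scenery" asset name is purely an assumption for illustration:

import UIKit
import CoreImage

// Hypothetical asset name, used only for illustration.
let uiImage = UIImage(named: "scenery")!

// UIImage -> CIImage (the form Core Image filters work with)
let ciImage = CIImage(image: uiImage)

// CIImage -> CGImage (rendered through a CIContext)
let context = CIContext()
let cgImage = ciImage.flatMap { context.createCGImage($0, from: $0.extent) }

// CGImage -> UIImage (ready to display in a UIImageView)
let displayImage = cgImage.map { UIImage(cgImage: $0) }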

Now let's put these frameworks into action.

List of Operations:

I am going to demonstrate the following simple operations on an image.

  • Blur an image
  • Applying opacity on an image
  • Text composition (Rendering text on top of an image)
  • Image composition (Rendering image on top of an image)
  • Watermarking

i) Create Canvas:

Before doing any operation, we need to define the bounds of the output image by creating a canvas.

Before rendering any image, we have to define an area and render the image within it. For example, an artist uses a drawing sheet of a particular size to paint or draw a picture. In our case, that sheet is the canvas. A canvas can be created using the ‘UIGraphicsBeginImageContextWithOptions’ function.

UIGraphicsBeginImageContextWithOptions(size, opaque, scaleFactor)

This method creates an area with the given size, enables or disables the opacity according to the opaque boolean value, and applies the given scale factor to the image.

size — A CGSize value specifying the size of the canvas.
opaque — A Boolean value specifying whether the canvas is opaque (true) or keeps an alpha channel for transparency (false).
scaleFactor — The scale factor applied to the bitmap; it enlarges or reduces the physical size of the image by changing the number of pixels it contains.
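As a minimal sketch (the size value here is just a placeholder), creating and ending a canvas looks like this:

import UIKit

// Illustrative canvas size; pass 0 as the scale to use the device's screen scale.
let canvasSize = CGSize(width: 300, height: 300)
UIGraphicsBeginImageContextWithOptions(canvasSize, false, 0)

// ... draw into the canvas here ...

// Always balance the Begin call so the context is popped off the graphics stack.
UIGraphicsEndImageContext()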

ii) Render Background Image:

In the last step, we created the canvas. Now we are going to see how to render a background image inside it. UIImage has a draw(in:) method that renders the image into the given CGRect. This draw method finds the currently available context on the graphics stack and draws into it.

UIGraphicsGetImageFromCurrentImageContext generates and returns a new image from the current context's content. Then you must call UIGraphicsEndImageContext() to clean up the current context in the rendering environment; this pops the context off the graphics stack.

Now we have rendered the scenery image onto the canvas using the draw method, as sketched below.
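A minimal sketch of this step, assuming a background UIImage is already loaded (the function name and parameters are illustrative):

import UIKit

// Draw `background` to fill a canvas of the given size and return the result.
func renderBackground(_ background: UIImage, size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)

    // draw(in:) renders the image into the current graphics context.
    background.draw(in: CGRect(origin: .zero, size: size))

    // Capture the canvas contents, then pop the context off the stack.
    let output = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return output
}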

iii) Apply Blur:

Now we are going to blur the output image from the last step using Core Image filters. Core Image has a number of built-in filters; from these, we use "CIGaussianBlur" to blur the image. The filter changes the pixel data according to a radius value, using the Gaussian distribution function. CIImage is the type that carries the information about the image, so we have to convert the input image to a CIImage and pass it to the filter with the kCIInputImageKey key, like below:

// Create the Gaussian blur filter and hand it the source image as a CIImage
let blurFilter = CIFilter(name: "CIGaussianBlur")
let ciImage = CIImage(image: image)
blurFilter?.setValue(ciImage, forKey: kCIInputImageKey)

Before applying a filter, we should create a context using CIContext to process/analyze the input image and render it.

We also use a crop filter to crop the blurred image back to the original image's size (the blur expands the image's extent). Passing the crop filter's output image to the context's createCGImage(_:from:) method then generates and returns the CGImage.
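Putting the blur step together, here is a hedged sketch. It uses CIImage's cropped(to:) instead of a separate crop filter, which has the same effect of trimming the blurred output back to the original extent; the radius value and function name are assumptions for illustration:

import UIKit
import CoreImage

// Blur a UIImage with CIGaussianBlur and render the result via a CIContext.
func blurred(_ image: UIImage, radius: Double = 8) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let blurFilter = CIFilter(name: "CIGaussianBlur") else { return nil }

    blurFilter.setValue(ciImage, forKey: kCIInputImageKey)
    blurFilter.setValue(radius, forKey: kCIInputRadiusKey)

    guard let blurredOutput = blurFilter.outputImage else { return nil }

    // The blur expands the image's extent, so crop back to the original size.
    let cropped = blurredOutput.cropped(to: ciImage.extent)

    // The CIContext performs the actual rendering (GPU or CPU path).
    let context = CIContext()
    guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}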

iv) Render Image over the blurred image with transparency:

Now render the Luigi image over the blurred image, as in the second step, with the required size and position. But here we are rendering the image with transparency (partial opacity). The blurred background already fills the whole canvas, so we can mark the canvas as opaque in the "UIGraphicsBeginImageContextWithOptions" call like below:

UIGraphicsBeginImageContextWithOptions(size, true, scaleFactor)

Then use the draw(in:blendMode:alpha:) method, which takes a blend mode and an alpha value for transparency, instead of draw(in:).

image.draw(in: subImageSize, blendMode: .hardLight, alpha: 0.8)
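Combined into one step, a sketch might look like this (the blend mode and alpha are the example values above; the function name and overlay rect are assumptions):

import UIKit

// Draw `overlay` on top of `background` with a blend mode and partial opacity.
func compose(background: UIImage, overlay: UIImage, in overlayRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(background.size, true, 0)

    background.draw(in: CGRect(origin: .zero, size: background.size))

    // The alpha value controls how transparent the overlay appears.
    overlay.draw(in: overlayRect, blendMode: .hardLight, alpha: 0.8)

    let output = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return output
}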

v) Text Rendering:

Basically, iOS has plain text, defined as a String type, and attributed text, defined as an NSAttributedString type. A plain String carries only its characters, whereas NSAttributedString allows us to customize the text with attributes: font name, size, weight, text style, color, stroke color, etc. String attributes are just a dictionary in the form of "[NSAttributedString.Key: Any]".

Another interesting tool for customizing text and paragraphs is NSMutableParagraphStyle. It is used to define the text alignment, spacing, indent, line break mode, line height, etc., and it is also what you use to render multi-line text.

Also, we need to cast the text from String to NSString, because NSString has the ‘draw(in:withAttributes:)’ method, which renders the text into the current graphics context.
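A sketch of this step, with illustrative font, color, and rect values:

import UIKit

// Render `text` on top of `image` inside `textRect` using attributed-string attributes.
func renderText(_ text: String, on image: UIImage, in textRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
    image.draw(in: CGRect(origin: .zero, size: image.size))

    // Paragraph style controls alignment, line break mode, spacing, etc.
    let paragraphStyle = NSMutableParagraphStyle()
    paragraphStyle.alignment = .center
    paragraphStyle.lineBreakMode = .byWordWrapping

    let attributes: [NSAttributedString.Key: Any] = [
        .font: UIFont.boldSystemFont(ofSize: 24),
        .foregroundColor: UIColor.white,
        .paragraphStyle: paragraphStyle
    ]

    // NSString's draw(in:withAttributes:) renders into the current context.
    (text as NSString).draw(in: textRect, withAttributes: attributes)

    let output = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return output
}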

vi) Watermarking:

Finally, we render our logo over the image, as in the second step. You can replace it with anything else you'd like to use as the final signature on your newly created image!
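As a final sketch, assuming a logo UIImage and illustrative size, margin, and alpha values, the watermark can be stamped into a corner like this:

import UIKit

// Stamp `logo` into the bottom-right corner of `image`.
func watermark(_ image: UIImage, with logo: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
    image.draw(in: CGRect(origin: .zero, size: image.size))

    // Illustrative logo size and margin.
    let margin: CGFloat = 16
    let logoSize = CGSize(width: 80, height: 80)
    let logoRect = CGRect(x: image.size.width - logoSize.width - margin,
                          y: image.size.height - logoSize.height - margin,
                          width: logoSize.width, height: logoSize.height)

    // A lower alpha keeps the watermark subtle.
    logo.draw(in: logoRect, blendMode: .normal, alpha: 0.6)

    let output = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return output
}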

The image below shows the outputs of the above steps in sequence. From it, you can see how the output image changes at every step:

The topic of image processing in itself is very detailed and vast. I’ve covered a small part of it to help a noob get started with image processing on iOS.
