Custom Image Filters in iOS


I was recently working on applying some filters to a UIImage. While going through the Core Image documentation, I found that Apple makes 115 pre-defined filters available to us, and one of them did exactly what I wanted. But while reading further, I found that you can also create your own custom filters!

A little bit about Core Image

Core Image is Apple’s image processing framework. It uses the GPU and the CPU to perform its computations and hides the “behind the scenes” details from the developer. It is an amazing framework that makes image processing extremely easy: with 115 filters at your disposal and the ability to chain multiple filters together, it unfolds a gamut of options for the developer.
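As a quick sketch of chaining, the output of one filter can be fed straight in as the input of the next. Here two of the built-in filters, CISepiaTone and CIVignette, are chained; the image name "photo" and the parameter values are placeholders for illustration:

```objectivec
// "photo" is a placeholder asset name.
CIImage *input = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"photo"]];

// First filter: sepia tone.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:input forKey:kCIInputImageKey];
[sepia setValue:@0.8 forKey:kCIInputIntensityKey];

// Second filter: vignette, taking the sepia output as its input.
CIFilter *vignette = [CIFilter filterWithName:@"CIVignette"];
[vignette setValue:sepia.outputImage forKey:kCIInputImageKey];
[vignette setValue:@2.0 forKey:kCIInputRadiusKey];

// Nothing is actually computed until this CIImage is rendered.
CIImage *chained = vignette.outputImage;
```

Because Core Image is lazy, chaining like this does not render an intermediate image at each step; the whole recipe is concatenated and evaluated once when the final image is drawn.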

Let’s try to solve a problem!

For creating a simple filter, we will solve a simple problem. You may have seen this optical illusion before:

The idea is that if you look at this image from a moderate distance, you will see black dots at the intersections. Looking at the image hurts my brain. But do not worry: shortly we will uncover the truth behind this brain-hurting devil impostor!

What if we rip off its treacherous black dots by trapping them in a filter?

Recipe for the blackDotTrapFilter

  • An input image in the form of a CIImage
  • A kernel function

Steps for creating our blackDotTrapFilter

Create a CIFilter subclass

The interface declares, at minimum, the inputImage property. It is important that we name it “inputImage” because that is the name mapped to the input key, kCIInputImageKey.

#import <CoreImage/CoreImage.h>

@interface BlackDotTrapFilter : CIFilter

@property (nonatomic, retain) CIImage *inputImage;

@end


Implement the kernel and apply it on the inputImage

  • A kernel is the computation, expressed in code, that is performed per pixel. The Core Image framework uses a subset of GLSL to represent kernels.
  • outputImage is a property of the CIFilter class. We implement its getter in our subclass, and that is where we apply the kernel to the input image.

The implementation of the BlackDotTrapFilter is as follows:

#import "BlackDotTrapFilter.h"

@implementation BlackDotTrapFilter

// Compile the kernel lazily, and only once: kernel compilation is expensive.
+ (CIColorKernel *)kernel
{
    static CIColorKernel *kernel = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        kernel = [CIColorKernel kernelWithString:
                  @"kernel vec4 BlackDotTrapFilter ( __sample s ) \n"
                   "{ \n"
                   "    if ( s.r + s.g + s.b < 0.1 ) \n"
                   "        { return vec4(1.0, 1.0, 0.0, 1.0); } \n"
                   "    else \n"
                   "        { return s.rgba; } \n"
                   "}"];
    });
    return kernel;
}

- (CIImage *)outputImage
{
    return [[[self class] kernel] applyWithExtent:_inputImage.extent
                                        arguments:@[_inputImage]];
}

@end


What’s going on in the kernel?

The kernel function is essentially this:

kernel vec4 BlackDotTrapFilter ( __sample s )
{
    if ( s.r + s.g + s.b < 0.1 )
        { return vec4(1.0, 1.0, 0.0, 1.0); }
    else
        { return s.rgba; }
}

We pass in the value for the first and only parameter of the kernel (you can have more parameters) through the arguments array of the CIColorKernel method applyWithExtent:arguments:.
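The arguments array maps positionally onto the kernel’s parameter list. As a sketch of what an extra parameter looks like (the ThresholdTrap kernel name and the threshold value here are made up for illustration), a variant that takes the darkness threshold as a second parameter would be applied like this:

```objectivec
// Hypothetical variant: the 0.1 threshold becomes a kernel parameter.
CIColorKernel *thresholdKernel = [CIColorKernel kernelWithString:
    @"kernel vec4 ThresholdTrap ( __sample s, float threshold ) \n"
     "{ \n"
     "    if ( s.r + s.g + s.b < threshold ) \n"
     "        { return vec4(1.0, 1.0, 0.0, 1.0); } \n"
     "    else \n"
     "        { return s.rgba; } \n"
     "}"];

// Each entry in the arguments array maps, in order, to a kernel parameter.
CIImage *output = [thresholdKernel applyWithExtent:inputImage.extent
                                         arguments:@[inputImage, @0.1]];
```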

We can roughly say that if the sum of the red, green, and blue values of a pixel is less than 0.1, then that pixel must be black. This is how we determine whether there really are black dots in the image. If a pixel is computed to be black, we return a yellow pixel (vec4(1.0, 1.0, 0.0, 1.0)) instead; otherwise, we simply return the same pixel back.

Ok. Now, we have created our own trap filter. Let’s now subject the culprit image to this trap.

BlackDotTrapFilter *filter = [[BlackDotTrapFilter alloc] init];
CIImage *hocusPocus = [[CIImage alloc] initWithImage:
                       [UIImage imageNamed:@"spotTheBlackDot"]];
[filter setValue:hocusPocus forKey:kCIInputImageKey];

// Render the filter's output and display/save the result.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:filter.outputImage
                                   fromRect:filter.outputImage.extent];
_realImageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
UIImageWriteToSavedPhotosAlbum(_realImageView.image, nil, nil, nil);

And, here’s the image after applying our custom filter!

We see that there weren’t really any black dots, and we have proved it by using our custom filter!

Yay! We have solved the case!

You can find the example code here.