SwiftUI: BBMetal Filters (Part 1) — Support Functions

Menura Wijesekara
5 min read · Jul 9, 2024


Today let’s see how we can use the BBMetal filter in SwiftUI. BBMetalImage is a powerful, GPU-accelerated library for image and video processing on iOS and macOS, leveraging Metal for performance efficiency. It provides a wide range of filters and operations that can be applied to images and videos in real-time. BBMetalImage supports high-performance rendering and chaining of multiple filters, allowing developers to create complex visual effects. Its API is designed to be easy to use and integrates seamlessly with Swift and Objective-C. The library includes filters for basic adjustments (like brightness and contrast), colour adjustments, visual effects (like blur and sharpening), and even advanced image processing techniques such as edge detection and convolution operations.

You can add BBMetalImage to your project and use its filters in your code. But before we apply any BBMetal filters, we need to understand a few support functions that make them work in SwiftUI.
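If you integrate via CocoaPods, the setup is a one-line pod entry. The snippet below is a minimal sketch: `YourApp` is a placeholder for your own app target, and the pod name follows the library's public repository name.

```ruby
# Podfile — minimal sketch; 'YourApp' is a placeholder target name
target 'YourApp' do
  use_frameworks!
  pod 'BBMetalImage'
end
```

Swift Package Manager is an alternative if you prefer not to use CocoaPods.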

The first function converts an image to a Metal texture. Let’s see how we can do this.

import UIKit
import MetalKit
import Accelerate

func uiimageToTexture(imageTexture: UIImage, device: MTLDevice) -> MTLTexture {
    let image = imageTexture

    // Path 1 (always taken here): load the texture directly with MTKTextureLoader.
    if true {
        let options: [MTKTextureLoader.Option: Any] = [
            .SRGB: true,
            .generateMipmaps: true,
            .textureUsage: MTLTextureUsage.unknown.rawValue
        ]

        let loader = MTKTextureLoader(device: device)
        let texture = try! loader.newTexture(cgImage: image.cgImage!, options: options)
        return texture.makeTextureView(pixelFormat: .bgra8Unorm)!
    } else {
        // Path 2 (never reached): render the image into a CVPixelBuffer and
        // wrap it in a Metal texture via a CVMetalTextureCache.
        var pixelBuffer: CVPixelBuffer?

        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue]

        let width = Int(image.size.width)
        let height = Int(image.size.height)

        var status = CVPixelBufferCreate(nil, width, height,
                                         kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                         &pixelBuffer)
        assert(status == noErr)

        // Draw the UIImage into the pixel buffer with Core Image,
        // reusing the device that was passed in.
        let coreImage = CIImage(image: image)!
        let context = CIContext(mtlDevice: device)
        context.render(coreImage, to: pixelBuffer!)

        var textureWrapper: CVMetalTexture?
        var textureCache: CVMetalTextureCache?

        _ = CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)

        status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                           textureCache!, pixelBuffer!, nil, .bgra8Unorm,
                                                           CVPixelBufferGetWidth(pixelBuffer!),
                                                           CVPixelBufferGetHeight(pixelBuffer!),
                                                           0,
                                                           &textureWrapper)

        // The texture is map-bound to the CVPixelBuffer's underlying memory.
        let texture = CVMetalTextureGetTexture(textureWrapper!)!
        context.clearCaches()
        return texture
    }
}

The function uiimageToTexture is designed to convert a UIImage to a MTLTexture using Apple's Metal framework, enabling GPU-accelerated image processing. The function takes a UIImage and a MTLDevice as inputs. It primarily uses two methods to achieve this conversion, but only the first method is executed due to the if(true) condition. In the first method, it utilizes MTKTextureLoader to create a texture directly from the UIImage's CGImage. The options specified for the texture loader include enabling sRGB color space, generating mipmaps for better texture scaling, and setting an unknown texture usage flag. The texture loader then creates and returns a texture view of the original texture with the specified pixel format.

The second method, enclosed in the else block, provides an alternative approach that involves creating a CVPixelBuffer to render the UIImage into. This method starts by setting attributes for the pixel buffer to ensure compatibility with Core Graphics and Metal. It then creates the pixel buffer and renders the UIImage into it using a Core Image context. A CVMetalTextureCache is created to manage the texture cache, and a CVMetalTexture is generated from the pixel buffer. Finally, the texture is extracted from the CVMetalTexture and returned. While this method is more complex, involving more steps and different frameworks (Core Video, Core Image, and Metal), it offers an additional way to convert an image to a Metal texture; the direct MTKTextureLoader method is generally more straightforward and efficient for most use cases.

Now let’s see how to do the opposite, that is, convert a texture back to an image. The reason for this is that we cannot display Metal textures directly in SwiftUI.

import UIKit
import MetalKit
import Accelerate

func textureToImage(from texture: MTLTexture) -> UIImage? {
    let width = texture.width
    let height = texture.height
    let bytesPerRow = width * 4  // assumes a tightly packed 4-byte-per-pixel format such as .bgra8Unorm

    let data = UnsafeMutableRawPointer.allocate(byteCount: bytesPerRow * height, alignment: 4)
    defer {
        data.deallocate()
    }

    // Copy the raw pixel data out of the texture into the CPU-side buffer.
    let region = MTLRegionMake2D(0, 0, width, height)
    texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)

    var buffer = vImage_Buffer(data: data, height: UInt(height), width: UInt(width), rowBytes: bytesPerRow)

    // Reorder the channels in place: destination channel i takes source channel map[i].
    let map: [UInt8] = [2, 1, 0, 3]
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, map, 0)

    guard let colorSpace = CGColorSpace(name: CGColorSpace.dcip3) else { return nil }
    guard let context = CGContext(data: data, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
    guard let cgImage = context.makeImage() else { return nil }

    let canvasLayerImage = UIImage(cgImage: cgImage)
    context.flush()
    return canvasLayerImage
}

The textureToImage function converts a MTLTexture (a Metal texture) into a UIImage. The process begins by extracting the width, height, and bytes per row (which is width multiplied by 4, assuming a 4-byte per pixel format) from the texture. Memory is then allocated to store the pixel data using UnsafeMutableRawPointer. This data buffer is deallocated automatically when the function exits using the defer statement. The function defines a MTLRegion covering the entire texture and copies the texture data into the allocated buffer using texture.getBytes(data, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0). This step extracts the raw pixel data from the texture into a linear buffer.
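As a quick sanity check on that arithmetic, here is the same size math in plain Swift; the 1920×1080 dimensions are just an illustrative example, not anything the function requires:

```swift
// Allocation math for a tightly packed, 4-byte-per-pixel (e.g. BGRA8) texture.
let width = 1920
let height = 1080
let bytesPerPixel = 4

let bytesPerRow = width * bytesPerPixel   // bytes in one row of pixels
let totalBytes = bytesPerRow * height     // size of the whole CPU-side buffer

print(bytesPerRow)   // 7680
print(totalBytes)    // 8294400
```

Note that width * 4 assumes the rows are tightly packed; a texture backed by a buffer can have a larger row stride, in which case bytesPerRow should come from that stride instead.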

Next, a vImage_Buffer is created from the raw data, encapsulating the image dimensions and pixel data. The vImagePermuteChannels_ARGB8888 function reorders the channels in place: for each destination channel i, it copies source channel map[i]. With the permutation map [2, 1, 0, 3], this converts Metal’s default BGRA layout to RGBA, swapping the blue and red channels so the data matches the premultipliedLast (alpha last, i.e. RGBA) bitmap info used below. The function then creates a CGColorSpace object for the color space and a CGContext to draw the image. The CGContext is configured with the necessary parameters, including the allocated data, image dimensions, bytes per row, color space, and bitmap info (premultiplied alpha). If the context and the resulting CGImage are successfully created, they are used to construct a UIImage. Finally, the function flushes the context to ensure all drawing operations are completed and returns the UIImage. This process effectively translates the GPU-rendered texture into a standard image format suitable for display or further processing in a UIKit-based application.
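The effect of the permutation map is easy to verify without Accelerate. The pure-Swift sketch below is only an illustration of the rule the function applies, dest[i] = src[map[i]], for a single pixel:

```swift
// One BGRA pixel: B=0x10, G=0x20, R=0x30, A=0xFF.
let bgraPixel: [UInt8] = [0x10, 0x20, 0x30, 0xFF]

// The same permutation map used above: dest[i] = src[map[i]].
let map: [Int] = [2, 1, 0, 3]
let rgbaPixel = map.map { bgraPixel[$0] }

print(rgbaPixel)  // [48, 32, 16, 255], i.e. R, G, B, A
```

The blue and red channels trade places while green and alpha stay put, which is exactly the BGRA-to-RGBA swap described above.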

These are some of the support functions we need before using the BBMetal filters. In the next blog we will learn about the kernel function used to modify the filters. Until then, Happy Coding….
