Image Virtual Crop in SwiftUI

Djenifer R Pereira
Apple Developer Academy PUCPR
9 min read · Jul 5, 2024
Photo by Glen Carrie on Unsplash

Cropping an image is not a big deal, since we can find plenty of examples on GitHub, and we can use third-party libraries too. But what if I don’t want to actually crop the image? What if I just want to apply a mask over the original image? How can I do this? Well, this was a challenge I faced, and I am here to tell you how I dealt with it.

When I was searching for how to crop an image in Swift, all the results I found were literally editing the image: you change the position, scale, and/or rotation, and then you save the resulting image. That was not what I needed, so I had to write the virtual crop on my own. I tested some libraries to understand how a “hard” crop works; SwiftUI-Image-Cropper and PhotoSelectAndCrop helped me understand the logic.

Below, I describe what I did to build the Virtual Crop. You can follow the steps, or you can check the final code on GitHub. Each step here matches a commit in the project, marked with its SHA hash linked to the repository. So, let’s start!

Creating a simple editor

First, create your SwiftUI project in Xcode. Then, change ContentView to show an image with scale and offset modifiers. Also, add the Drag and Magnification gestures. (commit 1ef9225)

struct ContentView: View {
    @State private var uiimage = UIImage(named: "image-900-500")!
    @State private var scale: CGFloat = 1.0
    @State private var offset = CGSize.zero

    var dragGesture: some Gesture {
        DragGesture()
            .onChanged { value in
                self.offset = value.translation
            }
    }

    var scaleGesture: some Gesture {
        MagnificationGesture()
            .onChanged { value in
                self.scale = value.magnitude
            }
    }

    var body: some View {
        ZStack {
            Image(uiImage: uiimage)
                .scaleEffect(scale)
                .offset(offset)
        }
        .gesture(dragGesture)
        .simultaneousGesture(scaleGesture)
    }
}

This code is simple, but it already has some problems: the gestures “flick” the image when they are performed far from the center.

A gif showing the problems with the drag and magnification gestures on the image in the simulator. In the left simulator, the image is dragged from right to left; when the gesture starts closer to the border of the frame, the image jumps instead of sliding. In the right simulator, the image is scaled from the center outward and then from the border inward; the last gesture makes the image pop to the new scale.
Left: Drag Gesture (offset), Right: Magnification Gesture (scale)

This problem occurs because the original state variable changes from the very beginning of the gesture. If the new value is too far from the original one, this “glitch” appears. For instance, if the original value is 1 and the gesture gives a value of 50, the image “flicks”. To fix this, we have to save the progression of the gesture in another state variable. When the gesture finishes, we commit the progression to the original variable. Since we now use two variables, we must combine their values in the modifiers too. (commits 869886b and c12b52c)

//* ... *//
@State private var progressingScale: CGFloat = 1.0
@State private var progressingOffset = CGSize.zero

var dragGesture: some Gesture {
    DragGesture()
        .onChanged { value in
            progressingOffset = value.translation
        }
        .onEnded { value in
            offset = sum(value.translation, offset)
            progressingOffset = .zero
        }
}

var scaleGesture: some Gesture {
    MagnificationGesture()
        .onChanged { value in
            progressingScale = value.magnitude
        }
        .onEnded { value in
            scale *= value
            progressingScale = 1.0
        }
}

var body: some View {
    ZStack {
        Image(uiImage: uiimage)
            .scaleEffect(scale * progressingScale)
            .offset(sum(offset, progressingOffset))
    }
    .gesture(dragGesture)
    .simultaneousGesture(scaleGesture)
}

func sum(_ a: CGSize, _ b: CGSize) -> CGSize {
    return CGSize(width: a.width + b.width, height: a.height + b.height)
}
//* ... *//
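If you want to convince yourself that this commit-on-end bookkeeping behaves correctly, the same logic can be modeled as a small, UI-free value type. The sketch below is mine, not from the project (the `GestureModel` name and its methods are illustrative); it mirrors what the gesture callbacks above do to the four state variables:

```swift
import Foundation

// A UI-free sketch of the commit-on-end logic: `scale`/`offset` hold the
// committed values, `progressing*` hold the in-flight gesture values.
struct GestureModel {
    var scale: CGFloat = 1.0
    var offset: CGSize = .zero
    var progressingScale: CGFloat = 1.0
    var progressingOffset: CGSize = .zero

    // The values the view actually renders at any instant,
    // matching .scaleEffect(scale * progressingScale) and .offset(sum(...)).
    var effectiveScale: CGFloat { scale * progressingScale }
    var effectiveOffset: CGSize {
        CGSize(width: offset.width + progressingOffset.width,
               height: offset.height + progressingOffset.height)
    }

    mutating func dragChanged(_ translation: CGSize) {
        progressingOffset = translation
    }
    mutating func dragEnded(_ translation: CGSize) {
        // Commit the finished drag, then reset the in-flight value.
        offset = CGSize(width: offset.width + translation.width,
                        height: offset.height + translation.height)
        progressingOffset = .zero
    }
    mutating func magnifyChanged(_ magnitude: CGFloat) {
        progressingScale = magnitude
    }
    mutating func magnifyEnded(_ magnitude: CGFloat) {
        // Magnification composes multiplicatively, so commit with *=.
        scale *= magnitude
        progressingScale = 1.0
    }
}
```

Note that `effectiveScale` and `effectiveOffset` are continuous across the `onEnded` boundary, which is exactly why the “flick” disappears.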

That is the basic logic for our editor. So, extract all the code from ContentView into a new view called CropperEditorView. The uiimage, scale, and offset become Bindings, since they are parameters to our editor. (commit b79a671)

//* ... *//
@Binding var uiimage: UIImage
@Binding var scale: CGFloat
@Binding var offset: CGSize
//* ... *//

Also, create a new view called VisualizerAndCropperEditorView. This view shows the editor with a shape as an overlay, plus the “result”: the image clipped with the same shape. You can change the shape, width, and height to match what you want. (commit f405c57)

struct VisualizerAndCropperEditorView: View {
    @State private var uiimage = UIImage(named: "image-900-500")!
    @State private var scale: CGFloat = 1.0
    @State private var offset = CGSize.zero

    var body: some View {
        VStack {
            let width: CGFloat = 130
            let height: CGFloat = 200

            CropperEditorView(uiimage: $uiimage, scale: $scale, offset: $offset)
                .frame(width: width, height: height)
                .overlay {
                    Rectangle().stroke(.black)
                }
                .zIndex(10)

            Spacer()

            Image(uiImage: uiimage)
                .scaleEffect(scale)
                .offset(offset)
                .frame(width: width, height: height)
                .clipShape(Rectangle())
        }
    }
}
Gif image. Two examples of the visualizer. The left one scales and repositions the image to reveal the hidden mountain. The right one only repositions the image, to show just the sky.

Using aspect fit in the editor

The editor already works, but the image opens at its original size. In this project, I want the image to open in aspect fit. Actually, it should open in aspect fit only when it has no scale and offset values. We can apply the aspectRatio modifier to the image, but then we have to recalculate the scale to match the size shown on screen.

To recalculate the scale, we need the aspect fit scale value. To discover it, we put a GeometryReader in the background of the image. With the GeometryReader’s size, we can “cancel” the aspect fit scale in the scale state variable. (commit 4f67671)
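As a quick sanity check, here is the aspect fit math outside SwiftUI. The helper below is mine, not from the project; the project reads the fitted width from the GeometryReader, which gives the same value whenever width is the limiting dimension:

```swift
import Foundation

// Aspect fit picks the smaller of the two width/height ratios, so the
// whole image stays visible inside the frame.
func aspectFitScale(image: CGSize, frame: CGSize) -> CGFloat {
    min(frame.width / image.width, frame.height / image.height)
}

// For the 900x500 sample image in a 130x200 editor frame, width is the
// limiting dimension, so the fit scale is 130 / 900 (about 0.144) and
// the rendered size is 130 x 72.2. Dividing the scale state by this
// value "cancels" the shrink that .aspectRatio(contentMode: .fit) applies.
```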

// CropperEditorView.swift

//* ... *//
@State private var progressingOffset = CGSize.zero

@State private var aspectFitImageSize = CGSize.zero
private var aspectFitScale: CGFloat {
    aspectFitImageSize.width / uiimage.size.width
}

//* ... *//

var body: some View {
    ZStack {
        Image(uiImage: uiimage)
            .resizable()
            .background(
                GeometryReader { g in
                    Color.gray.opacity(0).onAppear {
                        aspectFitImageSize = g.size
                        scale = scale / aspectFitScale
                    }
                }
            )
            .scaleEffect(scale * progressingScale)
            .offset(sum(offset, progressingOffset))
            .aspectRatio(contentMode: .fit)
    }
    .gesture(dragGesture)
    .simultaneousGesture(scaleGesture)
}

//* ... *//

This modification breaks our visualizer: the editor’s internal values no longer match the output.

Static image. Two examples of the problem. The left simulator shows the image in the visualizer too big, while the image in the editor is at its original size. In the right simulator, the image in the editor is too tiny just so the visualizer shows the image at a good size.
Visualizer broken due to aspect fit scale calculation

We have to refactor our editor to fix this issue. We need to separate the input, the output, and the internal variables, and then update the output only after the internal calculations have finished. The aspect fit problem is resolved by applying the inverse operation to the output scale. (commits 0bbe95d and fa9e0c7)

struct CropperEditorView: View {
    var input: CropperEditorView.Input
    @Binding var output: CropperEditorView.Output

    @State private var scale: CGFloat = 1.0
    @State private var offset = CGSize.zero

    @State private var progressingScale: CGFloat = 1.0
    //* ... *//

    private var aspectFitScale: CGFloat {
        aspectFitImageSize.width / input.uiimage.size.width
    }
    //* ... *//

    var body: some View {
        ZStack {
            Image(uiImage: input.uiimage)
                .resizable()
                .background(
                    GeometryReader { g in
                        Color.gray.opacity(0).onAppear {
                            aspectFitImageSize = g.size

                            scale = input.scale / aspectFitScale
                            offset = input.offset
                        }
                    }
                )
            //* ... *//
        }
        .simultaneousGesture(scaleGesture)
        .onChange(of: scale) { value in
            output.scale = scale * aspectFitScale
        }
        .onChange(of: offset) { value in
            output.offset = offset
        }
    }
    //* ... *//

extension CropperEditorView {
    struct Input {
        var uiimage: UIImage
        var scale: CGFloat
        var offset: CGSize

        static func from(_ uiimage: UIImage, scale: CGFloat = 1, offset: CGSize = CGSize.zero) -> Self {
            return Self(uiimage: uiimage, scale: scale, offset: offset)
        }
    }

    struct Output {
        var scale: CGFloat
        var offset: CGSize
    }
}

Now, the image opens at its original size, even with the aspect fit modifier. But we still want the image to open in aspect fit when scale and offset have no values. To achieve this, we make the editor’s inputs optional. (commit 9f1914f)

// CropperEditorView.swift

//* ... *//
Color.gray.opacity(0).onAppear {
    aspectFitImageSize = g.size

    if let inputScale = input.scale {
        scale = inputScale / aspectFitScale
    }

    if let inputOffset = input.offset {
        offset = inputOffset
    }

    output.scale = scale * aspectFitScale
    output.offset = offset
}
//* ... *//

struct Input {
    var uiimage: UIImage
    var scale: CGFloat?
    var offset: CGSize?

    static func from(_ uiimage: UIImage, scale: CGFloat? = nil, offset: CGSize? = nil) -> Self {
        return Self(uiimage: uiimage, scale: scale, offset: offset)
    }
}
//* ... *//

Now we have our editor working well. To finish it, I will just add a double-tap gesture that restores the image to the center at the aspect fit scale. (commit 24e8b1a)

// CropperEditorView.swift

//* ... *//
var twoTapsGesture: some Gesture {
    TapGesture(count: 2)
        .onEnded {
            offset = .zero
            scale = 1
        }
}

var body: some View {
    ZStack {
        Color.gray
        //* ... *//
    }
    .gesture(dragGesture)
    .simultaneousGesture(scaleGesture)
    .simultaneousGesture(twoTapsGesture)
//* ... *//
Static image. Two simulators showing the image in aspect fit in the editor. The left simulator has the editor with a larger height. And the right simulator has the editor with a larger width. In both, the image is in aspect fit.

Creating the visualizers

We can have two situations: the crop size is static, or it is dynamic. What does this mean? A static crop size means the editor and the visualizer share the same crop size. A dynamic crop size means you can use one crop size in the editor and another in the visualizer.

The static/fixed size

The static (or fixed) size visualizer is straightforward, since it is a copy and refactor of our VisualizerAndCropperEditorView. To see FixedSizeImageVisualizerView working, I also made a home view (FixedSizeHomeView). The home view code is not relevant here, so it lives only in the GitHub project. (commit 6936de1)

// ImageVisualizerView.swift
struct FixedSizeImageVisualizerView: View {
    let imageInfo: FixedSizeImageInfo
    private var size: CGSize { FixedSizeHomeView.cropSize }

    var body: some View {
        Image(uiImage: imageInfo.uiimage)
            .scaleEffect(imageInfo.scale ?? 1)
            .offset(imageInfo.offset ?? .zero)
            .frame(width: size.width, height: size.height)
            .clipShape(Circle())
            .contentShape(Circle())
    }
}

// ImageInfo.swift
struct FixedSizeImageInfo: Identifiable {
    let id = UUID()
    let uiimage: UIImage
    var scale: CGFloat?
    var offset: CGSize?
}

In this scenario, the editor is straightforward too: we just have to use the same size in the mask overlay. Below is the EditImageView and NewImageView code. (commit 2ca48fb)

// EditImageView.swift
extension FixedSizeImageInfo {
    func toEditorInput() -> CropperEditorView.Input {
        return .init(uiimage: uiimage, scale: scale, offset: offset)
    }
}

struct FixedSizeEditImageView: View {
    let info: FixedSizeImageInfo

    //* ... *//

    @State private var output = CropperEditorView.Output(scale: 1, offset: .zero)
    private var size: CGSize { FixedSizeHomeView.cropSize }

    var body: some View {
        ZStack {
            CropperEditorView(input: info.toEditorInput(), output: $output)
            Circle().stroke()
                .frame(width: size.width, height: size.width)
        }
        .ignoresSafeArea()
        .navigationTitle("Edit Image")
        .navigationBarTitleDisplayMode(.inline)
        .toolbar {
            ToolbarItem(placement: .navigationBarTrailing) {
                Button("Save", action: save)
            }
        }
    }

    //* ... *//
}

// NewImageView.swift
struct FixedSizeNewImageView: View {
    //* ... *//

    @State private var output = CropperEditorView.Output(scale: 1, offset: .zero)
    private var size: CGSize { FixedSizeHomeView.cropSize }

    private var uiimage: UIImage { /* ... */ }

    var body: some View {
        ZStack {
            CropperEditorView(input: .from(uiimage), output: $output)
            Circle().stroke()
                .frame(width: size.width, height: size.width)
        }
        .ignoresSafeArea()
        .navigationTitle("New Image")
        .navigationBarTitleDisplayMode(.inline)
        .toolbar {
            ToolbarItem(placement: .bottomBar) {
                Button("Change Image") {
                    //* ... *//
                }
            }

            ToolbarItem(placement: .navigationBarTrailing) {
                Button("Save", action: save)
            }
        }
    }

    //* ... *//
}
Gif showing how the crop works, with three cases. First: a new image is added; a big dog image opens, is zoomed out and repositioned so the dog’s face stays in the circle, and is saved. Second: the previously added dog image is opened, repositioned, and the operation is cancelled. Third: an existing image that showed only the sky is opened, repositioned to show the mountain, and saved.

The dynamic size

If we use a different crop size with the static visualizer, it does not rescale the image. For this case, we need the dynamic size visualizer.

Gif showing the problem of the static size with different sizes, in two cases. First: a dog image is added and repositioned to show only the dog’s eye, but the visualizer shows its nose too. Second: the mountain image is repositioned to the border, but the visualizer shows empty space due to the visualizer’s bigger size.

We can create the dynamic visualizer by copying the static one and making some changes. First, we need to save the crop size from the editor into the image info model. (commits 2a65b68 and 727093a)

struct DynamicSizeImageInfo: Identifiable {
    let id = UUID()
    let uiimage: UIImage
    var scale: CGFloat?
    var offset: CGSize?
    var cropperSize: CGSize? = nil
}

With the crop size, we can calculate the rescale factor for our visualizer. (commit f17e203)

struct DynamicSizeImageVisualizerView: View {
    let imageInfo: DynamicSizeImageInfo
    private let value: CGFloat = values.randomElement()!
    private var size: CGSize { CGSize(width: value, height: value) }

    var body: some View {
        Image(uiImage: imageInfo.uiimage)
            .scaleEffect(imageInfo.scale ?? 1)
            .offset(imageInfo.offset ?? .zero)
            .scaleEffect(getReScaleFactor() ?? 1)
            .frame(width: size.width, height: size.height)
            .clipShape(Circle())
            .contentShape(Circle())
    }

    func getReScaleFactor() -> CGFloat? {
        guard let cropperSize = imageInfo.cropperSize else { return nil }

        let scaleHeight = size.height / cropperSize.height
        let scaleWidth = size.width / cropperSize.width

        return (scaleHeight + scaleWidth) / 2
    }
}
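To see what getReScaleFactor produces, here is the same math as a standalone function with concrete numbers (a sketch of mine, not project code):

```swift
import Foundation

// Same averaging as getReScaleFactor above, outside the view.
func reScaleFactor(visualizer: CGSize, cropper: CGSize) -> CGFloat {
    let scaleHeight = visualizer.height / cropper.height
    let scaleWidth = visualizer.width / cropper.width
    // Averaging the two ratios is safe here because both crops are
    // squares (circles), so the ratios are equal anyway; for non-square
    // crops you would likely pick min or max instead.
    return (scaleHeight + scaleWidth) / 2
}

// An image edited in a 200x200 cropper and shown in a 100x100
// visualizer must be drawn at half its edited scale: the factor is 0.5.
```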

And that is how I built a Virtual Crop in SwiftUI. It was very challenging to build these crop features, but it was fun too. This Virtual Crop is a simplified version of the memoria Virtual Crop.

A sneak peek of the memoria Virtual Crop in action

My team and I developed memoria at the Apple Developer Academy Curitiba. I loved being part of this project so much!

Erick and Ana also wrote an article about memoria (in Portuguese).
