Memory Management in Swift: Common Issues

Stéphane Copin · Published in Fueled Engineering
5 min read · Apr 13, 2018

This is the second article in a series of three regarding memory management in Swift. If you’re not sure how memory management works in Swift, the first article is right there! If, however, you’re looking to learn how to debug memory issues, you can look over here.

This article will cover some common issues observed in real-world iOS projects related to memory management. Most of them are retain cycles, as these are easily the most prevalent memory issue in Swift.

I’ve decided to list these from most common/easiest to find to least common/hardest to find. They are all commonly encountered in production code, and some of them take a significant amount of time to discover. If you’ve run into retain cycle issues that took you a significant amount of time to discover, don’t hesitate to comment!

Delegate

It’s very easy to create retain cycles when using the delegate pattern if delegates are not weak. The goal of a delegate for an object A is to delegate some of its decisions to another object B: A should listen to B, but it should not own B. In 99% of cases, you will want to declare your delegate as weak so as to avoid retain cycles.
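A minimal sketch of what that looks like (the downloader and its method names are made up for illustration):

```swift
import Foundation

protocol ImageDownloaderDelegate: AnyObject {
    func imageDownloaderDidFinish(_ downloader: ImageDownloader)
}

final class ImageDownloader {
    // `weak` breaks the cycle: the downloader doesn't keep its owner alive.
    // This requires the protocol to be class-bound (`AnyObject`).
    weak var delegate: ImageDownloaderDelegate?

    func download() {
        // ...once the download completes:
        self.delegate?.imageDownloaderDidFinish(self)
    }
}
```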

Timer/CADisplayLink

Repeating Timers and CADisplayLinks are another very common cause of retain cycles. This example contains a retain cycle:
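A sketch of such a ChatRoom, where a repeating Timer polls for new messages (the polling details are made up for illustration):

```swift
import Foundation

class ChatRoom: NSObject {
    private var pollTimer: Timer?

    func startPolling() {
        // A repeating Timer strongly retains its target (`self`)
        // for as long as it's scheduled.
        self.pollTimer = Timer.scheduledTimer(
            timeInterval: 1.0,
            target: self,
            selector: #selector(self.poll),
            userInfo: nil,
            repeats: true
        )
    }

    @objc private func poll() {
        // Fetch new messages...
    }

    deinit {
        // Never called: the timer retains `self`, and `self` retains the timer.
        self.pollTimer?.invalidate()
    }
}
```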

The retain cycle here is that the Timer strongly references its target, and ChatRoom also strongly references its Timer: the ChatRoom will never be deallocated, so the timer will run forever. One solution is to use an intermediate WeakTarget class:
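A sketch of that approach, where the WeakTarget forwards the timer’s ticks to a weakly-held ChatRoom (the forwarding method name is made up):

```swift
import Foundation

final class WeakTarget: NSObject {
    private weak var chatRoom: ChatRoom?

    init(_ chatRoom: ChatRoom) {
        self.chatRoom = chatRoom
    }

    @objc func timerDidFire(_ timer: Timer) {
        guard let chatRoom = self.chatRoom else {
            // The real target is gone: invalidate the timer so it
            // (and this WeakTarget) can be released too.
            timer.invalidate()
            return
        }
        chatRoom.poll()
    }
}

final class ChatRoom {
    private var pollTimer: Timer?

    func startPolling() {
        // The timer now retains the WeakTarget, not the ChatRoom.
        self.pollTimer = Timer.scheduledTimer(
            timeInterval: 1.0,
            target: WeakTarget(self),
            selector: #selector(WeakTarget.timerDidFire(_:)),
            userInfo: nil,
            repeats: true
        )
    }

    func poll() {
        // Fetch new messages...
    }

    deinit {
        // Now called as expected; invalidating the timer also lets
        // the WeakTarget go away.
        self.pollTimer?.invalidate()
    }
}
```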

This snippet also removes the now superfluous NSObject superclass for ChatRoom. We can of course also abstract the WeakTarget class to be used anywhere, and add an extension on Timer to use it easily. This is done in this gist here.

The issue with CADisplayLink is extremely similar: it also retains its target, and the issue can be avoided in the same way (the gist above also contains an implementation for having a weak target on CADisplayLink).

Timer/CADisplayLink on background loops

We just talked about Timer and CADisplayLink running on the main run loop: working around their target being retained is simple.

Another layer of complexity is introduced when trying to run them on background run loops. For example, the following code, regardless of the weak target, actually has a retain cycle. Can you find it?
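A sketch of such code, using an illustrative FrameCounter with its own minimal weak forwarder (standing in for the abstracted WeakTarget from the gist above):

```swift
import Foundation
import QuartzCore

// Minimal weak forwarder, equivalent to the abstracted WeakTarget.
final class WeakDisplayLinkTarget: NSObject {
    private weak var target: FrameCounter?

    init(_ target: FrameCounter) {
        self.target = target
    }

    @objc func tick(_ displayLink: CADisplayLink) {
        guard let target = self.target else {
            displayLink.invalidate()
            return
        }
        target.tick()
    }
}

final class FrameCounter {
    private var displayLink: CADisplayLink?
    private var thread: Thread?

    func start() {
        // 1. Create the display link with a weak target, as before.
        self.displayLink = CADisplayLink(
            target: WeakDisplayLinkTarget(self),
            selector: #selector(WeakDisplayLinkTarget.tick(_:))
        )

        // 2. Run it on a background thread. The closure captures `self`
        //    strongly, and the run loop never returns while the display
        //    link is scheduled: `self` can never be deallocated, so the
        //    display link is never invalidated.
        self.thread = Thread {
            self.displayLink?.add(to: RunLoop.current, forMode: .default)
            RunLoop.current.run()
        }
        self.thread?.start()
    }

    func tick() {
        // Count frames, update state...
    }

    deinit {
        // Never called.
        self.displayLink?.invalidate()
    }
}
```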

So, what’s happening there?

  1. We’re creating a display link like before, with a weak target to account for the strong reference that would otherwise be created.
  2. We then launch a background thread that will run our display link for us, on the run loop for the current thread.

What’s happening here is that we have added our CADisplayLink to RunLoop.current on a separate thread, and that thread is bound to self (its closure captures it strongly). This means that for as long as the CADisplayLink runs, self will be retained… never invalidating the CADisplayLink, and creating our retain cycle.

A solution here would be, rather than binding the thread to self, to bind it to the class itself and pass the CADisplayLink as the object of that thread. The result would be:
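A sketch of that, reusing the WeakDisplayLinkTarget from the previous snippet (FrameCounter now subclasses NSObject so the class method can be used as an Objective-C selector target):

```swift
import Foundation
import QuartzCore

final class FrameCounter: NSObject {
    private var displayLink: CADisplayLink?
    private var thread: Thread?

    func start() {
        let displayLink = CADisplayLink(
            target: WeakDisplayLinkTarget(self),
            selector: #selector(WeakDisplayLinkTarget.tick(_:))
        )
        self.displayLink = displayLink

        // The thread's target is the class itself rather than `self`, and the
        // display link is passed as the thread's `object`: the thread no
        // longer keeps the FrameCounter alive.
        self.thread = Thread(
            target: FrameCounter.self,
            selector: #selector(FrameCounter.runDisplayLink(_:)),
            object: displayLink
        )
        self.thread?.start()
    }

    @objc private class func runDisplayLink(_ displayLink: CADisplayLink) {
        displayLink.add(to: RunLoop.current, forMode: .default)
        // Once the FrameCounter is deallocated, the weak forwarder invalidates
        // the display link on its next tick, which removes it from this run
        // loop and lets the thread wind down.
        RunLoop.current.run()
    }

    func tick() {
        // Count frames, update state...
    }

    deinit {
        // Called as expected now.
    }
}
```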

Thread

You might want to run a thread for as long as an object lives, and stop it when the object is deallocated. The naive approach to doing so could be the following:
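Something along these lines (the Worker name and its action are made up for illustration):

```swift
import Foundation

final class Worker {
    private var isRunning = true

    init() {
        // The thread's closure captures `self` strongly, and the loop only
        // stops once `isRunning` becomes false... which can never happen,
        // since `deinit` can't run while the closure keeps `self` alive.
        let thread = Thread {
            while self.isRunning {
                self.action()
            }
        }
        thread.start()
    }

    func action() {
        // Do some work...
    }

    deinit {
        // Never called.
        self.isRunning = false
    }
}
```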

This has the same issue as the CADisplayLink example above: the object creates a thread, and the thread’s running closure retains self, so the object can never be deallocated and the thread never stops.

A solution here could be to include a stop() method, which manually turns isRunning to false, but this results in a non-symmetrical interface (i.e. the thread starts automatically, but must be stopped manually).

In order to make the thread stop on object deallocation, one could write:
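For instance, a small wrapper holding the worker weakly (the names are illustrative, matching the Worker sketch above):

```swift
import Foundation

// Holds the worker weakly so the background thread never keeps it alive.
final class WeakWorkerTarget {
    weak var worker: Worker?

    init(_ worker: Worker) {
        self.worker = worker
    }
}

final class Worker {
    init() {
        let weakTarget = WeakWorkerTarget(self)
        // The thread's closure only captures the weak wrapper, never `self`.
        let thread = Thread {
            while let worker = weakTarget.worker {
                worker.action()
            }
        }
        thread.start()
    }

    func action() {
        // Do some work...
    }

    deinit {
        // Called as expected: once the Worker is gone, the `while let`
        // fails to unwrap and the thread exits on its own.
    }
}
```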

Here, every time the while loop runs, we try to unwrap weakTarget.worker, and if it still exists, we run its action.

Please note that these examples are not optimized at all, and will use 100% of a CPU core. If a constant FPS is not required, adding a line such as Thread.sleep(forTimeInterval: 0.1) would greatly decrease the CPU usage and improve battery life.

Autoreleasepool

That’s something you might have never heard of. You will usually never have to use one explicitly, but a time may come when it proves very useful. It only matters when using Objective-C APIs from Swift; otherwise, it has no effect. It is mostly used for managing memory more efficiently in loops, and can lead to a hard-to-debug issue of objects not being deallocated when you expect them to be. Especially when using Thread directly like above, if you’re creating Objective-C objects at every iteration, you’ll want to explicitly add an autoreleasepool to make sure the objects are deallocated when you expect them to be:
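For example, the thread body from the Worker sketch above could wrap each iteration in a pool:

```swift
let weakTarget = WeakWorkerTarget(self)
let thread = Thread {
    while let worker = weakTarget.worker {
        // Without this pool, Objective-C objects autoreleased inside
        // `action()` would only be freed whenever the thread's own pool
        // drains, if it ever does.
        autoreleasepool {
            worker.action()
        }
    }
}
thread.start()
```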

This becomes even more important if the loop inside the Thread calls delegate methods: it then becomes your responsibility to make sure that the objects the delegate methods create are deallocated in a sane manner, so an Out of Memory error doesn’t happen. This is important if you’re creating a library/framework that internally uses Threads, for example.

Dispatch Queues

You’ve probably used Dispatch Queues in Swift (they’re part of GCD, Grand Central Dispatch), for example if you wanted to run a synchronous task in the background:
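A sketch of that pattern, where longRunningTask is a placeholder for some expensive synchronous work:

```swift
import Foundation

// Placeholder for some expensive, synchronous work.
func longRunningTask() -> String {
    Thread.sleep(forTimeInterval: 2.0)
    return "done"
}

func process(completion: @escaping (String) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        // Runs off the main thread, at the `utility` quality of service.
        let result = longRunningTask()
        // Hop back to the main queue before calling back (e.g. to update UI).
        DispatchQueue.main.async {
            completion(result)
        }
    }
}
```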

As a side note, there are better ways to handle asynchronous work than using completion handlers! Check out ReactiveSwift (what we use here at Fueled along with ReactiveCocoa), RxSwift or PromiseKit.

This will make sure that whatever longRunningTask does is done in the background, using the utility QoS, or Quality of Service (you can read more on that here).

You can also create your own DispatchQueue. I won’t talk too much about them as this would deviate from the main subject of this article, but there are plenty of good resources on the web about the various ways you can create and use them. DispatchQueue has a parameter that specifies how autoreleased objects are deallocated. In its full constructor, it is autoreleaseFrequency:
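For reference, a sketch of that full initializer with the parameter spelled out (the label and values are just examples):

```swift
import Foundation

let queue = DispatchQueue(
    label: "com.example.background-work",
    qos: .utility,
    attributes: [],
    autoreleaseFrequency: .workItem,
    target: nil
)
```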

There are three values possible for autoreleaseFrequency:

  • .inherit: Inherits the value from the target queue, if there is one. If not, the autoreleasepool is drained non-deterministically, from time to time. This is the default value for manually created queues (those created using the constructor above, when not explicitly specifying the autoreleaseFrequency).
  • .workItem: For every work item given to the DispatchQueue, an autoreleasepool is created before it executes and drained after it finishes executing.
  • .never: Never manages the autoreleasepool for you. Please note that this doesn’t mean that autoreleased objects are never released, as GCD relies on threads which have their own autoreleasepools, but it’s still non-deterministic. This is the default value for global queues (those available via DispatchQueue.global(qos:)).

(source for the default values: https://github.com/apple/swift-corelibs-libdispatch/blob/master/dispatch/queue.h#L578-L591)

Keep this in mind when dispatching closures that use Objective-C objects; you might want to explicitly add an autoreleasepool if you see memory spikes related to GCD use.
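For example (processImages stands in for work that creates many autoreleased Objective-C objects):

```swift
DispatchQueue.global(qos: .utility).async {
    // Drain autoreleased Objective-C objects as soon as this block finishes,
    // instead of at some later, non-deterministic point.
    autoreleasepool {
        processImages()
    }
}
```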

Using autoreleasepool does add some overhead, and allocating/freeing large objects might have a significant impact on performance (especially if done too often), which is why it could be beneficial to release autoreleased objects all at once rather than one by one, but it depends on the use case.

While it’s great to be able to identify memory issues as they are coded, it’s even better to know how to debug them if you missed them in the first place! The next article will focus on the tools Xcode provides us with to help with this.
