What’s the Delegation Protocol in SwiftUI?

Do SwiftUI and Delegation Make Sense?

Danny Bolella
Oct 11

SwiftUI is defined by two core characteristics: it is declarative and it is reactive. The latter is primarily achieved through the new binding property wrappers (@State, @ObservedObject, and friends), each available for its own purpose. Putting those structures into practice with our data-dependent Views can seem simple enough, but the limits of each, even with the documentation in hand, may not be immediately clear.
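
To make those terms concrete, here's a minimal sketch of both kinds of binding in one place (the names are purely for illustration):

```swift
import Combine
import SwiftUI

// @State owns simple, view-local values; an ObservableObject publishes
// changes from a reference type so any observing View re-renders.
final class CounterModel: ObservableObject {
    @Published var count = 0
}

struct CounterView: View {
    @State private var name = ""                // view-local state
    @ObservedObject var model = CounterModel()  // external, observable state

    var body: some View {
        VStack {
            TextField("Name", text: $name)           // two-way binding to @State
            Text("\(name): \(model.count) taps")     // re-renders on either change
            Button("Tap") { self.model.count += 1 }
        }
    }
}
```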

This is especially true when it comes to how these bindings relate to older Swift patterns and features, in general. There’s a case to be made that between SwiftUI and Combine, there’s a new approach to Swift app architecture (which I will semi-ignore until the end). It wouldn’t be the first time something new in Swift caused a disruption, either. Just look at Protocol-Oriented Programming.

Change doesn’t erase the years of Swift code already in existence. As I write more complex SwiftUI demos, I consume open-source code, pods, and packages, and whenever the old meets the new, there are instances where resolutions and design choices need to be made.

Scenario

Recently, I was working with SFSpeechRecognizer in SwiftUI and realized that part of my implementation involved a protocol that had previously been consumed by a ViewController. Instinctively, I created a new class (ClosedCaptioning) to host the recognition code and conform to the protocol (VideoMediaInputDelegate) as the delegate, so that the audio buffer would be fed to my recognizer. I also made this class conform to ObservableObject so I could mark a property as @Published, feed that property the transcription results, and then bind to it back in my UI to display the captions.

At the end of the day, this is what I needed. Buffer to the recognizer, transcription to UI.

This was a workable solution as well as a design I was comfortable with since it left the UI quite readable and modularized my recognition/captioning code.
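
Trimmed down, the class looked roughly like this. The shape of VideoMediaInputDelegate below is my approximation of the video-capture code I was consuming; only the Speech calls are Apple's actual API:

```swift
import AVFoundation
import Combine
import Speech

// Assumed shape of the buffer delegate from the video-capture code:
protocol VideoMediaInputDelegate: class {
    func videoFrameRefresh(sampleBuffer: CMSampleBuffer)
}

class ClosedCaptioning: ObservableObject, VideoMediaInputDelegate {
    @Published var captioning: String = ""   // bound to a Text back in the UI

    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    func startRecognition() {
        task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let result = result else { return }
            // Feed the transcription into the published property.
            self?.captioning = result.bestTranscription.formattedString
        }
    }

    // Delegate callback: pass each audio buffer along to the recognizer.
    func videoFrameRefresh(sampleBuffer: CMSampleBuffer) {
        request.appendAudioSampleBuffer(sampleBuffer)
    }
}
```

Back in the View, the binding is just a Text pointed at that @Published property through @ObservedObject.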

As I looked at the result, I started to wonder about the audio buffer protocol and using View as the delegate, just like ViewController had been in a past life. Could it turn out looking similar? What does Delegation even look like in conjunction with SwiftUI? Should I have set up another protocol to stream the transcription instead of a binding?

In essence: Did I need to take a reactive approach just because it’s available or should I stick with some form of Delegation?

I decided that the best way to handle this was to try a few alternatives and, if they worked, judge how Swifty they were.

Alternative 1: Setting View as the Buffer Delegate

First, I had to remove the class-only constraint from the delegate protocol so that my View (which is a struct) could conform to it, and then I set the View as the delegate. Next, I copied the delegate implementation, along with my speech recognition setup, from ClosedCaptioning into my View. Lastly, I set up a @State var for the transcription to update, which would in turn update my displayingText.
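
Roughly, the attempt looked like this (still simplified, and the delegate's shape is still my approximation):

```swift
import AVFoundation
import Speech
import SwiftUI

// The class-only constraint is gone, so the View struct itself can be the delegate.
protocol VideoMediaInputDelegate {
    func videoFrameRefresh(sampleBuffer: CMSampleBuffer)
}

struct ContentView: View, VideoMediaInputDelegate {
    @State private var displayingText = ""

    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    var body: some View {
        Text(displayingText)
            .onAppear {
                _ = self.recognizer?.recognitionTask(with: self.request) { result, _ in
                    guard let result = result else { return }
                    // This is the line that gets me in trouble below:
                    // the closure runs outside body, yet writes to @State.
                    self.displayingText = result.bestTranscription.formattedString
                }
            }
    }

    func videoFrameRefresh(sampleBuffer: CMSampleBuffer) {
        request.appendAudioSampleBuffer(sampleBuffer)
    }
}
```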

The app built successfully and ran, but almost immediately it crashed. Perplexed, I checked the console and saw the following:

error: Accessing State<String> outside View.body

Looking at my implementation, SFSpeechRecognizer uses a completion handler that is called whenever there’s an update to the transcription, and it was in that closure that I was attempting to set my caption State directly. However, that closure is considered to be running outside the View, where accessing State is strictly prohibited. In other words, I had found yet another case where State draws the line.

Since State has this rule in place, the workaround was to write a quick little ObservableObject class with a @Published String, which the closure can set and Text can bind to.
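
That class is tiny — here it is more or less in its entirety (this is the CaptionCollector referred to again in Alternative 2 below):

```swift
import Combine

// A small holder whose only job is to receive the transcription from the
// closure and publish it to the View.
final class CaptionCollector: ObservableObject {
    @Published var caption: String = ""
}
```

The View holds it with @ObservedObject, the recognition closure sets collector.caption instead of the @State var, and Text stays bound to it.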

Looking at this solution, I was not a fan. Yes, I got the View to consume the protocol, and I was passing the buffer along just fine. However, considering I made a class just to bind the transcription, it felt like I had achieved nothing. Pulling the recognition code out into a larger class felt far more sensible, cleaner, and more valuable.

Having said that, it did teach me that Views can conform to protocols. And that got me thinking…

Alternative 2: Using a Protocol To Send Captioning

Why don’t we go back to using ClosedCaptioning for the recognition functionality? This time, instead of having it conform to ObservableObject with a @Published property, we can try passing the recognition results through a new protocol.
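
Something along these lines — the protocol name and method here are my own, since it's the idea that matters:

```swift
import Speech

// ClosedCaptioning hands its results to a delegate instead of publishing them.
// The protocol is deliberately not class-constrained so a View struct can
// conform (which also means the delegate reference can't be weak).
protocol ClosedCaptioningDelegate {
    func captionUpdated(_ caption: String)
}

class ClosedCaptioning {
    var delegate: ClosedCaptioningDelegate?

    private let recognizer = SFSpeechRecognizer()
    private let request = SFSpeechAudioBufferRecognitionRequest()
    private var task: SFSpeechRecognitionTask?

    // (The VideoMediaInputDelegate conformance feeding the audio buffer is
    // unchanged from the earlier sketch and omitted here.)

    func startRecognition() {
        task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let result = result else { return }
            // Pass the transcription to whoever conforms — in this case, the View.
            self?.delegate?.captionUpdated(result.bestTranscription.formattedString)
        }
    }
}
```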

From there, we have our View conform to that new protocol. We keep our CaptionCollector, and our Text stays bound to it as in Alternative 1, but now we set it through our protocol.
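
On the View side, the wiring looks something like this (again simplified, reusing the types sketched above):

```swift
import SwiftUI

// The View conforms to the new protocol and still leans on CaptionCollector
// (from Alternative 1) to get the transcription into a bindable property.
struct ContentView: View, ClosedCaptioningDelegate {
    @ObservedObject var collector = CaptionCollector()
    private let captioning = ClosedCaptioning()

    var body: some View {
        Text(collector.caption)
            .onAppear {
                self.captioning.delegate = self   // hand the View over as the delegate
                self.captioning.startRecognition()
            }
    }

    func captionUpdated(_ caption: String) {
        // Setting an ObservableObject "outside" body is fine, unlike @State.
        collector.caption = caption
    }
}
```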

This, unsurprisingly, works. We already knew a View can conform to (non-class-constrained) protocols, and that an ObservableObject can be set from “outside” its parent (even when that parent is the delegate).

This Should Be the Protocol

Performance-wise, perhaps I should determine which of these routes is the most efficient. However, I said I would be judging based on Swiftness. That classification, to me, includes the classic metrics like cleanliness, readability, and everything that comes with them. But it also means the result should look like Swift, smell like Swift, and conform to “traditional” Swift patterns.

Alternative 2 wins the throne… for now.

That’s why, in this scenario, I hand the crown to Alternative 2. Three aspects of this route stuck out to me.

Not only does this route allow the majority of the relevant code to live in its respective classes, it also continues the protocol/delegate pattern already being exercised, in part, by VideoMediaInput. Since that’s the case, and the two classes work tightly together, it makes sense to stick with it.

Does it mean the rest of our app must conform to this pattern? No. But for this tightly coupled area of code, it just makes sense.

Let’s say I wanted to extract ClosedCaptioning and/or VideoMediaInput into packages. Using SwiftUI/Combine property wrappers would not be Swifty at all for projects that don’t already use them (specifically, any project that needs to run on pre-iOS 13).

True, you may have a specific architecture and pattern for your app. However, if there’s ever a possibility of extracting packages/pods/frameworks for reuse, it may still be valuable to conform to more openly consumable patterns, even if they would be for internal use only.

Lastly, it would’ve been great to use the State wrapper with closures and protocols. However, I respect the protections surrounding it. As a matter of fact, writing an ObservableObject literally (in this case) adds just three lines of code.

Yet it makes the property that much more readable, since it all but declares that it’s meant to be driven from outside the View. That distinction is actually quite convenient.

Near-Future Alternative 3: Combine

Moving forward, Combine will spread, and as it does, Delegation could become more scarce. As a matter of fact, the answer then might not even be making the class an ObservableObject with a @Published property. Publishers and Subscribers will likely prove the more popular route over Delegation, because they give the Subscriber completions and operators, not just the updated values.
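
If I were betting on that future, the captioning piece might look more like this — purely illustrative, not what I actually shipped:

```swift
import Combine
import Foundation

// Speculative sketch: ClosedCaptioning exposes a publisher, and the consumer
// subscribes with operators attached rather than adopting a delegate protocol.
final class ClosedCaptioning {
    let captionPublisher = PassthroughSubject<String, Never>()

    func recognized(_ transcription: String) {
        captionPublisher.send(transcription)   // push each new transcription
    }
}

final class CaptionCollector: ObservableObject {
    @Published var caption = ""
    private var cancellable: AnyCancellable?

    init(captioning: ClosedCaptioning) {
        // The subscriber gets operators (removeDuplicates, debounce, etc.)
        // and thread control for free — not just the raw values.
        cancellable = captioning.captionPublisher
            .removeDuplicates()
            .receive(on: RunLoop.main)
            .sink { [weak self] in self?.caption = $0 }
    }
}
```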

While eager to look ahead, it would be wise to be mindful of the roads that led us here.

As I mentioned earlier, Combine and SwiftUI are only compatible with iOS 13+. Both are still very young and saw numerous revisions with each beta over the summer of 2019. Adopting them in your production project may require keeping up with a fair amount of maintenance work, or it could simply be premature for your project.

If you decide to go for it anyway, keep in mind that you may still need to deal with code that conforms to older patterns. Know that those patterns still work; you may just need to honor them alongside the new ones.

Disclaimer: As mentioned in some of my other articles, this is not gospel. As a matter of fact, it’s quite situational. There are probably a number of other alternatives out there I could’ve considered, which I welcome you all to share in the comments. I’m always open to learning and improving!

P.S. To read about the project itself, and how it works as a whole, check out my other article covering it.
