A beginner’s guide to developing Custom Intent Siri Shortcuts for iOS 12
Being able to expose any functionality of your app to Siri is super powerful and a huge step towards push interfaces. Together let’s take a look at how we can do just that for our own apps.
To prepare myself I watched the relevant WWDC videos (Introduction to Siri Shortcuts, Building for Voice with Siri Shortcuts and Siri Shortcuts on the Siri Watch Face) and looked at the provided example app from Apple, but it was all still a little tricky to get started. What I wanted was a tutorial with every step, but I couldn’t find one, so I’m writing it now that I’ve stumbled my way through it.
The example app that we’re going to add the shortcut to has just one feature: it shows you NASA’s Astronomic Picture of the Day, which it gets from NASA’s API. It’s very basic. Here is how it looks.
If you want to follow along, check out the master branch on the example repo and start from there.
The idea for this app comes right out of Apple’s “App Development with Swift” book (Chapter 5.6 ”Working with the Web: Concurrency”). So if you want to read up on that, here you go:
Read a free sample or buy App Development with Swift by Apple Education. You can read this book with iBooks on your…itunes.apple.com
Let’s get started
Now let’s make the feature of looking up today’s Astronomic Picture of the Day available outside of our app. Let’s dive right in.
Enable the Siri Capability
First select your project in the navigator and open the “Capabilities” tab. Scroll down to the “Siri” capability and turn it on.
We want to run the intent outside of the app on its own, so we need to add two new targets to our app: the “Intents Extension” and the “Intents UI Extension”. Start by going to “File → New → Target” and choose “Intents Extension”.
Choose a name for the extension (I named it “SpacePhotoIntent”) and be sure to keep the “Include UI Extension” checkbox checked, which will guide you through adding the second target right after this one. Also accept the wizard’s suggestion to activate the schemes for the new targets.
Intent Definition File
Next we want to add the “SiriKit Intent Definition File”. This is where we will define our custom intent.
Go to “File → New → File” and scroll down to the Resource section and select the “SiriKit Intent Definition File”.
With the new file selected click the little “+” button at the bottom and select “New Intent”.
Now that we have an intent, let’s fill it with content. Here is how it looks.
On the left in the navigator we have given our intent the name “PhotoOfTheDay”. In the main area at the top we choose a category that best describes our intent; “View” is perfect for our use case. This category makes it easier for Siri to talk about the intent. Next up are the title and a description of our intent. Since our intent doesn’t do anything destructive, handle a payment, or anything similar, we can leave the “User confirmation required” checkbox off.

We can skip the “Parameters” section, since our intent does not have any input parameters. That also means the “Shortcuts” section is very simple, because we have only one shortcut. If we had parameters, we could define different shortcuts for different combinations of parameters.

One last important thing at the bottom: the “Supports background execution” checkbox should stay checked. It means that this shortcut can be run outside our app, and this is where the power of intent shortcuts comes from. Our intent could be initiated from the Apple Watch or even from the HomePod (although, since we’re looking at a photo, the HomePod does not make too much sense).
Next up choose “Response” in the navigator. Here we can adjust how Siri will respond when our intent is executed.
Apart from showing the photo in a UI, we also want to give the user textual / spoken feedback. In case of a success we want it to say “Today’s photo of the day shows [the title of the photo]”. For that we define a `photoTitle` property at the top that will later hold the title of the photo. For the failure case we are quite vague, but that’s OK for this demo. We also define a more specific failure case: the photo of the day can sometimes actually be a video. Since we don’t handle displaying a video, the `failureNoImage` template will be our response in that case. Now we’re almost all set to dive into some code, but there is one more thing we have to do. Open the right sidebar and make sure that the checkboxes for all our targets (the app and both extensions) are on and that “Intent Classes” is selected like so:
This is important because Xcode will take this Intents Definition file and generate a class and a protocol for each intent we define. These classes will need to be accessible in all our targets. Now we’re ready to write some code.
Handling the intent
When we added the “Intents Extension” target, Xcode created a file called “IntentHandler.swift” inside a “SpacePhotoIntent” group (if that is what you called the target). All intents come through here and need to be handled. Replace the content of the already existing `handler(for:)` method with the following:
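The original code listing did not survive here, so this is a sketch of what the handler might look like based on the description below (the `PhotoOfTheDayIntent` class is generated by Xcode from the Intents Definition File):

```swift
import Intents

class IntentHandler: INExtension {

    override func handler(for intent: INIntent) -> Any {
        // Only handle the intent that was generated from our definition file.
        guard intent is PhotoOfTheDayIntent else {
            fatalError("Unhandled intent type: \(intent)")
        }
        return PhotoOfTheDayIntentHandler()
    }
}
```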
We make sure that the incoming intent is the one we can handle by checking whether it is a `PhotoOfTheDayIntent`. This is the class that was generated from our Intents Definition File (because we named the intent “PhotoOfTheDay”). We still have to create the `PhotoOfTheDayIntentHandler` class that we return an instance of at the end. Let’s add a new file called `PhotoOfTheDayIntentHandler.swift` with the following code inside:
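The gist with the handler code is missing from this version of the article; here is a sketch that matches the description below. The `PhotoInfoController` API used (`fetchPhotoInfo`, and the `mediaType`/`title` properties) is an assumption based on the existing app code:

```swift
import Intents

class PhotoOfTheDayIntentHandler: NSObject, PhotoOfTheDayIntentHandling {

    // Reuses the controller that already talks to NASA's API in the app.
    let photoInfoController = PhotoInfoController()

    // Called by the system so we can verify we have everything we need.
    func confirm(intent: PhotoOfTheDayIntent,
                 completion: @escaping (PhotoOfTheDayIntentResponse) -> Void) {
        photoInfoController.fetchPhotoInfo { photoInfo in
            // We can only display images, so fail early for videos.
            guard let photoInfo = photoInfo, photoInfo.mediaType == "image" else {
                completion(PhotoOfTheDayIntentResponse(code: .failureNoImage,
                                                       userActivity: nil))
                return
            }
            completion(PhotoOfTheDayIntentResponse(code: .ready, userActivity: nil))
        }
    }

    // Called when it is actually time to execute the intent.
    func handle(intent: PhotoOfTheDayIntent,
                completion: @escaping (PhotoOfTheDayIntentResponse) -> Void) {
        photoInfoController.fetchPhotoInfo { photoInfo in
            guard let photoInfo = photoInfo else {
                completion(PhotoOfTheDayIntentResponse(code: .failure,
                                                       userActivity: nil))
                return
            }
            // .success(photoTitle:) is generated from the response template
            // we defined in the Intents Definition File.
            completion(.success(photoTitle: photoInfo.title))
        }
    }
}
```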
Our new class conforms to the `PhotoOfTheDayIntentHandling` protocol, which was also generated for us from the Intents Definition File. The protocol defines a mandatory `handle(intent:completion:)` method and an optional `confirm(intent:completion:)` method. The latter is called by the system so that we can make sure we have everything we need to execute the intent. We fetch the information about the photo of the day through our `PhotoInfoController` (which was already part of the app) and make sure that it refers to an image and not a video. If it does, we call the completion handler with a code of `.ready`; otherwise we use our custom failure code `.failureNoImage` that we defined in the Intents Definition File earlier.

`handle` is called when it’s actually time to execute the intent. We fetch the `photoInfo` again (a local cache for these calls would of course be great in a real app) and pass the `title` to the `PhotoOfTheDayIntentResponse.success(photoTitle:)` method, which, again, was auto-generated for us from the Intents Definition File. One thing we have to do is add our existing model-related files to our new targets, otherwise they cannot be seen there. So check both new Target Membership checkboxes for the three highlighted files:
Donating the Shortcuts
Now we need to let the system know whenever the user performs the “viewing the photo of the day” interaction in our app. Depending on how often and at what time of day the user does this, Siri will then suggest this interaction to the user. The user can also access these shortcuts themselves, though (by the way, if you want more information on how exactly Shortcuts work in iOS 12, read Federico Viticci’s great overview piece). In our existing `ViewController`, add the following:
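The donation code itself is missing from this copy of the article; the following sketch reconstructs it from the description below (the print messages are placeholders, and `suggestedInvocationPhrase` is set on the intent, where that property lives):

```swift
import Intents

// Donates the interaction so Siri can learn when to suggest it.
func donateInteraction() {
    let intent = PhotoOfTheDayIntent()
    // This phrase is offered as a suggestion when the user records
    // an invocation phrase in Settings.
    intent.suggestedInvocationPhrase = "Energize"

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Interaction donation failed: \(error.localizedDescription)")
        } else {
            print("Successfully donated interaction")
        }
    }
}
```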
Don’t forget to `import Intents` at the top. Then define a `donateInteraction()` method and call it from `viewDidLoad`. Inside, you create our intent, and with that intent you create an `INInteraction` instance. Then you call that instance’s `donate` method, logging either success or failure.
Let’s run what we’ve built so far
Build and run your app on an iOS 12 device or simulator. Open the “Astronomic Picture of the Day” inside the app. Now we’ve donated the shortcut.
There are a couple of ways to access those shortcuts now. First you can go to “Settings app → Siri & Search”. At the top you’ll find all donated shortcuts. When you tap on the “Photo of the day” shortcut that we just donated, you are invited to record a phrase to invoke it. When we donated the shortcut we set `intent.suggestedInvocationPhrase = "Energize"`, and that is why it is suggested here.
Once you’ve recorded this phrase, try it out (hit Shift-Alt-Cmd-H if you’re in the Simulator).
This looks great! Well, it sounds great at least. We have our custom response with the title of the photo, and we have an empty UI underneath it. That is because the Intents UI target we added earlier comes with a default view controller implementation that is simply empty at the maximum possible height. Before we jump into creating the UI, let’s explore the other ways to launch our shortcut.
There are two relevant settings you might want to turn on while developing. Go to “Settings app → Developer” and scroll to the bottom. Turn on both “Display Recent Shortcuts” and “Display Donations on Lock Screen”. Now, if you pull down on the home screen, you should see our shortcut. If it doesn’t appear, start typing “Photo of the day” and it should show up. When you tap on it, you can see the UI portion of our shortcut being activated. So we really do need a UI, because it can be shown without the spoken Siri context.
One last thing to help development: now that we have a custom phrase to invoke our shortcut, we can get there much faster. Back in Xcode, select the `SpacePhotoIntent` scheme in the run bar and edit its scheme. Type your recorded phrase into the “Siri Intent Query” box. Now when you run the intent target directly, Siri will open on the device or simulator with your phrase pre-filled, so your shortcut will launch immediately.
Add the missing UI
To do that, we’re going to change the code inside the `IntentViewController.swift` file that was created for us when we added the Intents UI target. This is how the default implementation looks:
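The default listing was lost in this copy; it is roughly what Xcode’s “Intents UI Extension” template generates (details may vary slightly by Xcode version):

```swift
import IntentsUI

class IntentViewController: UIViewController, INUIHostedViewControlling {

    // MARK: - INUIHostedViewControlling

    func configureView(for parameters: Set<INParameter>,
                       of interaction: INInteraction,
                       interactiveBehavior: INUIInteractiveBehavior,
                       context: INUIHostedViewContext,
                       completion: @escaping (Bool, Set<INParameter>, CGSize) -> Void) {
        // The template claims success for all parameters at the maximum size.
        completion(true, parameters, self.desiredSize)
    }

    var desiredSize: CGSize {
        return self.extensionContext!.hostedViewMaximumAllowedSize
    }
}
```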
The relevant method here is `configureView`. It gets passed many arguments that give us details on how we should configure our view. The last argument is a completion handler that expects to be called with 1) a boolean indicating whether configuring the view was successful, 2) the set of parameters the configuration was successful for, and 3) a desired size for the view.
But let’s start by creating our UI in the accompanying storyboard. It’s quite simple: we have an image view stretched to the whole available area with “Content Mode” set to “Aspect Fit”, and an activity indicator centred in the view behind it (this is all just a proof of concept, so don’t be too harsh on me).
Next, create two outlets for the image view and the activity indicator. Back in the view controller, here is the “final” code I decided to go with. I left some comments inside about things that should have worked but didn’t; I’ll explain:
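Since the listing is missing here, the following is a sketch reconstructed from the explanation below. The `PhotoInfoController` methods (`fetchPhotoInfo`, `fetchImage(from:)`) and the `photoInfo.url` property are assumptions about the app’s existing model code:

```swift
import IntentsUI

class IntentViewController: UIViewController, INUIHostedViewControlling {

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var activityIndicator: UIActivityIndicatorView!

    let photoInfoController = PhotoInfoController()

    func configureView(for parameters: Set<INParameter>,
                       of interaction: INInteraction,
                       interactiveBehavior: INUIInteractiveBehavior,
                       context: INUIHostedViewContext,
                       completion: @escaping (Bool, Set<INParameter>, CGSize) -> Void) {
        // Make sure this really is our intent.
        guard interaction.intent is PhotoOfTheDayIntent else {
            completion(false, parameters, .zero)
            return
        }

        // NOTE: interaction.intentHandlingStatus never seemed to become .ready
        // here, even after the handler returned a .ready response.

        // A somewhat arbitrary, roughly square size.
        let desiredSize = CGSize(width: 320, height: 350)

        activityIndicator.startAnimating()
        photoInfoController.fetchPhotoInfo { [weak self] photoInfo in
            guard let self = self, let photoInfo = photoInfo else { return }
            self.photoInfoController.fetchImage(from: photoInfo.url) { image in
                DispatchQueue.main.async {
                    self.activityIndicator.stopAnimating()
                    self.activityIndicator.isHidden = true
                    self.imageView.image = image
                }
            }
        }
        completion(true, parameters, desiredSize)
    }
}
```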
As I noted in the comments, for me `interaction.intentHandlingStatus` never became `.ready`, which I expected it to be after returning the `.ready` response in the intent handler that we wrote before. My understanding of the `configureView` method is that it is called multiple times with different statuses so that we can update the view, but that does not seem to be the case. I would love to get some feedback on this if anyone knows more!

Because our example is so simple, we can get by without it, though. First we make sure the intent is what we think it is. Then we define a desired size, which is a little arbitrary right now, but something close to a square. We start the activity indicator and fetch the photo information. Then we fetch the photo and display it, stopping and hiding the activity indicator. That is it. Let’s take a look.
The last thing remaining is to handle the intent inside of the app as well. When the user taps on the UI they will be taken to the app, so our app needs to send them to the right place.
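The in-app handling code is also missing from this copy; here is a sketch of the kind of `AppDelegate` method described below. `PhotoViewController` and the segue identifier are stand-in names for the app’s actual photo screen:

```swift
import UIKit
import Intents

// In AppDelegate: called when the user taps the shortcut UI and lands in the app.
func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    // Check that this really is our intent.
    guard userActivity.interaction?.intent is PhotoOfTheDayIntent,
          let navController = window?.rootViewController as? UINavigationController else {
        return false
    }
    // Only navigate if we are not already on the photo screen.
    if !(navController.topViewController is PhotoViewController) {
        navController.topViewController?
            .performSegue(withIdentifier: "showPhotoOfTheDay", sender: nil)
    }
    return true
}
```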
Forgive my pretty simplistic handling of this. What we’re doing is checking if this is the right intent and then if we’re not already on the screen showing the photo we navigate to it.
I’ve put all the code in the custom-intent branch in the same repo if you want to play around with it from there.
So, we’ve built our first fully app-independent Siri Shortcut! Nice! Obviously it was a very simplistic one, one that could very easily just be done inside Workflow (nay… the Shortcuts app). But starting from here we can explore more complex shortcuts, for example adding ones with parameters. Or ones that actually have a practical use case 😂. Anyway, I hope this helped and you enjoyed the ride.