Alexa Presentation Language

Stuart Pocklington
Dec 22, 2018 · 17 min read


Step by step — the basics to get you started!

Find me @alexa_dev_uk

Introduction

Amazon recently announced its new multimodal development language, the Alexa Presentation Language (APL). It is clear that Amazon wants to push the capabilities of Alexa devices with screens: the fixed templates of old are gone, and developers now have full license over the screen interface (UI) to do whatever they want. This guide steps through the basics of getting started with APL, and I have tried to keep it simple, as I remember how daunting this was the first time I tried it. Some of you will no doubt already know some of the early steps; if so, feel free to skim read and jump ahead. But I am guessing most of you reading this will appreciate the fact that I presume no prior knowledge on your behalf, and the detail that I have gone into right from the get-go.

Step 1 — Setup ASK CLI

(Alexa Skill Kit Command Line Interface)

For this guide we’ll be setting up our skill using the ASK CLI. If you don’t know what the ASK CLI is, don’t worry; here is Amazon’s brief description:

“The ASK Command Line Interface (ASK CLI) is a tool for you to manage your Alexa skills and related AWS Lambda functions. With ASK CLI, you have access to the Skill Management API, which allows you to manage Alexa skills programmatically from the command line.”

You can find the getting started guide from Amazon here, but I find watching the excellent getting started video on skilltemplate.com is a great way to get yourself set up and ready to start using the ASK CLI. You can find that here. Allow 20–30 minutes to get this set up the first time you do it.

Pause what you are doing and get the ASK CLI set up; then you’re ready to follow the rest of this guide. If you get stuck and need help, get in touch!

Step 2 — Choose an IDE

Wait, what’s an IDE? Great question. It stands for Integrated Development Environment, but in simple words that I can understand, an IDE is an application that lets you edit your code. I use Visual Studio Code, which is free to download, and I will be referring to it in this guide, but there are others out there such as Sublime Text, so have a Google search if you want to explore the options.

Below is an image of Visual Studio Code, which is split into three parts:

  1. File Explorer — lists all the files in the project directory that you are working in.
  2. Code editor — this is where you write the code that will make stuff happen.
  3. Terminal — this is where you type magic commands that make your computer do great things, and which makes you feel like a ‘Boss’ when it all works.

If you can’t see the terminal, you can get it to show up manually by finding the options in the menu.

NB: If you’re a Mac user you can use the built-in OS terminal if you wish.

Step 3 — Every project needs a home!

Nearly there. Now we just have to navigate to the location on your PC where you want to create your skill. There are a few commands that you can use in the terminal that might help:

  1. dir — short for directory; type this into the terminal and hit return, and it will list all of the files in your current location (on Mac/Linux the equivalent is ls).
  2. mkdir — short for make directory; this creates new folders. Use the syntax mkdir foldername, hit return, and the folder will be created.
  3. cd — this stands for change directory, and you can use it to navigate between the folders on your PC.
  4. cd .. — will take you up one level in your file path hierarchy.
  5. cd foldername — will move you into that folder.
  6. cd \ — takes you to the root of the drive that you are on, i.e. C:\. This can be useful, and in fact let’s use it as a little test for you.

Exercise 1 (skip if this is teaching you how to suck eggs!)

  1. In your terminal window type cd \ and press enter. This will take you to your root directory.
  2. Now use the dir and cd commands listed above to navigate to the location where you want to save your Alexa skill.
  3. Use mkdir to create the folder to save your skill in.
  4. Use the cd command to move into that folder.

If you look at the image above, these are the steps I followed to get to my project folder:

  1. cd \
  2. dir (to have a nosey at what other locations I can move to)
  3. cd Users
  4. cd punkp
  5. cd Desktop
  6. mkdir alexaSkills
  7. cd alexaSkills

If you have managed to do this, then give yourself a pat on the back, you’re on a roll.

Please note my ‘deliberate’ typo in the below image, which gave me an error message. Unlike other how-to guides, I thought I would leave this in, as it’s real and proves that shit happens, but it’s not the end of the world!

Once you are happy with your project location, type ask new in the terminal window and hit enter. Wait a few seconds and you will be asked to enter your skill name. Type in whatever you want your skill to be called and then hit enter.

The skill will take a few moments to initialise, and when it’s done you will see something like the below image:

Okay, good progress so far. Your barebones skill is ready for you to start tinkering with, but before you do, take note of the file explorer in your IDE. Just because the terminal is pointing at the right place doesn’t mean the file explorer is. In Visual Studio Code, I click on File > Open Folder > and then select the folder I have just created in the terminal to make sure everything is just right.

There is probably some ingenious way to do that last step in the Terminal, but I haven’t figured it out yet, and it doesn’t seem that big of an issue for me to invest time into.

Step 4 — Let’s take a look at the code, and start tweaking it

Let’s start by looking at the file explorer

The things we are interested in here are the lambda and models folders, and the skill.json file.

Ignore the .ask folder, that just contains a config file, and to the best of my knowledge you don’t ever need to touch it.

skill.json

If you have used the Alexa developer console to publish a skill before, the skill.json file should look familiar and actually make a bit of sense. I like to think of this as the page where you fill in the details about the skill, sample invocation phrases etc. Just go through and edit this accordingly, and when finished save it. When you deploy your skill, this file will set the values in the developer console for you.

You might have noticed that the category value isn’t an exact match for what is listed in the developer console. You can either leave it as it is and change it in the developer console later on, or you can find a list of all of the category values to use in the CLI here (about halfway down the page).

Whilst still looking at the skill.json file: because we want to use APL, we need to add a few lines of code to enable it.

Towards the end of the code you will see a section for ‘apis’, change that to look like the below image, and save the file. We are ready to move on.
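In case the image doesn’t come through, here is a minimal sketch of what the edited ‘apis’ section should contain: an APL entry in the interfaces list (leave the endpoint settings exactly as ask new generated them).

```json
"apis": {
  "custom": {
    "interfaces": [
      {
        "type": "ALEXA_PRESENTATION_APL"
      }
    ]
  }
}
```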

Models

Now open the en-US.json file in the models folder. I refer to this as the intent model.

By default the invocation name is set to ‘greeter’. You can leave this as it is, or change it as you see fit; just remember that it has to be lower case. This is what you ask Alexa to open to start your skill.

Notice that there are a few default Amazon intents in here, and one custom intent called HelloWorldIntent (line 28). This intent has some sample utterances listed; when testing your skill, saying any of those utterances will trigger this intent.

There is also a space to add slot values, but for our skill we won’t need them. If you would like a follow up guide on adding slot values, let me know!

If you ever want to add additional intents, this is where you will add them.

If you changed the invocation name, save your file, and get ready to look at the index.js file.

Lambda

Back in the file explorer, expand the lambda folder and open the index.js file.

This is the beating heart of your skill, or maybe its brain (or both): when an intent is triggered, this is the code that tells Alexa what should happen.

Line 6 has the LaunchRequestHandler, this is what will be triggered when the skill first launches.

Line 21 is the start of the HelloWorldIntentHandler, later on we will add a little bit of code here to enable an APL screen.

Wait a minute, what is a handler? I hear you ask!

Okay, let me try to explain it in the way that makes sense to me. Previously we talked about intents in the en-US.json file, and how sample utterances can be used to trigger them. I like to think of the handler as a bodyguard or minder for the intent: when the intent is called, it checks that this is indeed the intent that should be used, and then acts as a go-between for Alexa and the lambda code to make sure everything happens as it should.

If you look at lines 23 & 24, this is where the code checks that it is an intent request (i.e. not a launch request), and that the intent name being asked for is the HelloWorldIntent. If both checks are true, the code will be run. If not, some other intent’s code should run, or maybe the error handler will be triggered.

If you ever want to add more intents, after you have added it to the en-US.json file, you can copy the HelloWorldIntentHandler, paste it below, and then tweak it to match the new intent details.
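To make the copy-and-tweak step concrete, here is a hedged sketch of what such a handler might look like. The intent name (GoodbyeIntent) and speech text are made up for illustration; the shape mirrors the template’s HelloWorldIntentHandler.

```javascript
// Hypothetical handler for a new intent called GoodbyeIntent.
// canHandle decides whether this handler should take the request;
// handle builds the response Alexa will speak.
const GoodbyeIntentHandler = {
  canHandle(handlerInput) {
    // Only claim the request if it is an IntentRequest for GoodbyeIntent
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'GoodbyeIntent';
  },
  handle(handlerInput) {
    const speechText = 'Goodbye World!';
    return handlerInput.responseBuilder
      .speak(speechText)
      .withSimpleCard('Goodbye World', speechText)
      .getResponse();
  },
};
```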

If you do add new intents, there is a really important step you need to take to ensure they work.

Take a look at line 93; this is the skill builder, or the blueprint of the skill. Lines 97 to 101 list the intent handlers that the skill uses. You must add the name of any new intent handler here, otherwise when testing your skill you will get an error when trying to trigger your new intent. It took me far longer than it should have to figure this out, and I had many head-scratching moments until I did.
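Why does an unregistered handler cause an error? Roughly speaking, the skill builder dispatches each incoming request to the first registered handler whose canHandle() returns true, so a handler that was never registered is simply never reached. The sketch below is a simplified mock of that behaviour, not the real ask-sdk implementation.

```javascript
// Simplified mock of how the skill builder dispatches requests:
// walk the registered handlers in order, use the first match.
function dispatch(handlers, handlerInput) {
  const handler = handlers.find((h) => h.canHandle(handlerInput));
  if (!handler) {
    // Roughly the situation that produces the error described above
    throw new Error('Unable to find a suitable request handler.');
  }
  return handler.handle(handlerInput);
}
```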

Right, back to the HelloWorldIntentHandler. For the purposes of this guide, we don’t have to change anything here, but let me highlight a few parts just in case you decide you want to make changes yourself.

Line 27 is a variable that contains the text that Alexa will speak. Change that to whatever you want.

Line 31 adds a card that will show up in the Alexa app. In this example the card will display ‘Hello World’, and the text from the variable on line 27. Top tip: never allow the card to include code, otherwise the skill will be rejected. An example of how code might find its way in is if you include SSML within the variable on line 27.

The way this intent is set up, the skill will close as soon as Alexa has finished speaking ‘Hello World’. If you want to keep the skill open so the user has an opportunity to respond (and trigger another intent), add .reprompt('with some text here') after line 30.

We are going to come back to the lambda code in a little bit, but for now let’s move on to the bit you have really come here to see: APL (in all its glory).

Step 5 — APL (Let’s do this)

Finally we can start to look at APL and how to design a screen layout.

Head over to here to open up the APL editor.

Choose start from scratch

This will load up the APL code editor.

Toggle the switch highlighted below

This changes the view, and I find it much easier to edit the code this way.

The first thing worth doing is adding a bit of code that lets APL know what type of screen the end user is using (Echo Show/Spot/Tablet/TV). Add the below to the import section.
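This is the import entry to add (it matches the full listing at the end of this step):

```json
"import": [
  {
    "name": "alexa-viewport-profiles",
    "version": "1.0.0"
  }
]
```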

Now let’s add some content into the mainTemplate items section. We need to add a Frame item that will effectively hold all the other information we will add; then we will add further items within the frame that will hold the content. I will share the full code so you can copy and paste it all in one go, or you can try to follow each step below.

Let’s setup the frame as per the image below. The height is set to 100vh, and the width is 100vw.

vh = view height, or the height of the screen, i.e. 100vh on a screen 1000 pixels high is 1000 pixels, 50vh is 500 pixels, etc.

vw = view width, the same as view height but it’s a measurement of the width of the screen.

The background colour (color to those outside of the UK) is set to #58ACFA, which is a light blue. Here is a great website that can get you the hex code for a colour of your choice.
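Taken from the full listing at the end of this step, the frame looks like this (its items array is filled in over the following sections):

```json
{
  "type": "Frame",
  "height": "100vh",
  "width": "100vw",
  "backgroundColor": "#58ACFA",
  "items": []
}
```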

Now let’s add the first item within the frame as per the image below.

Notice that the first thing in the item is "when": "${@viewportProfile == @hubRoundSmall}". This tells the APL code to apply only to the Echo Spot and other similar devices (if there are any). You have to code for small round screens and rectangular screens separately to ensure that your content looks okay on each type of display. You could decide not to include this "when" clause and only code for rectangular screens, but your skill will be rejected if you do.

Next we tell the item that it is a Container; this is similar to a frame as it will hold content within it. If you have ever created a webpage in HTML, think of it as a <div> tag. The height and width are set, then we add another item within this container.

Now an image is added. Notice that the "type" is set to Image, and the height and width of the image are set. The position is set to absolute; the left and right values tell the image how far away it should be from the left- and right-hand sides of the screen, and top tells the image how far to be from the top of the screen. Play around with these values to move your content around; just make sure you are previewing the small screen layout to see the changes you are making.
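From the full listing at the end of this step, the image item for the small round screen is:

```json
{
  "type": "Image",
  "source": "https://dl.dropbox.com/s/ws3p9ckm2voekdl/helloWorld.png?dl=0",
  "height": "90vh",
  "width": "100vw",
  "position": "absolute",
  "left": "3vw",
  "right": "3vw",
  "top": "10vh"
}
```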

You should now be seeing something similar to this:

The next item we will put in is some text. This time the type is set to Text; we tell it what text to display, position it similarly to the image, then add a fontWeight, a textAlign value, and a fontSize.
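From the full listing at the end of this step, the text item for the small round screen is:

```json
{
  "type": "Text",
  "text": "Hello World!",
  "position": "absolute",
  "left": "12vw",
  "right": "12vw",
  "top": "20vh",
  "fontWeight": "900",
  "textAlign": "center",
  "fontSize": "10vw"
}
```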

This is incredibly similar to the CSS you use to style web pages. For example, in CSS you set the font size using font-size; in APL you use fontSize instead. So it appears you just drop the ‘-’, use camel case, and it will work. There are millions of pages on the web explaining what you can do with CSS, and I recommend you read up on some of it and experiment with what works in APL.

Now that the design for the small screen is done, it’s time to do the same for rectangle screens.

Here I am applying the same design to all rectangular screens, but if you are feeling really adventurous you could give each one its own design.

The code will give a design like the below image

This has the same content as the previous design, but I have added a third item with some text in it to make use of the bigger screen size. You can add as many items as you wish.

There is more that you can do with APL, but for this guide I just want to keep it simple. If you enjoy following this guide let me know and I may do some more advanced ones.

Below is the full code for you to copy and paste. Scroll down to see the next step that brings this code into your lambda code.

{
  "type": "APL",
  "version": "1.0",
  "theme": "dark",
  "import": [
    {
      "name": "alexa-viewport-profiles",
      "version": "1.0.0"
    }
  ],
  "resources": [],
  "styles": {},
  "layouts": {},
  "mainTemplate": {
    "items": [
      {
        "type": "Frame",
        "height": "100vh",
        "width": "100vw",
        "backgroundColor": "#58ACFA",
        "items": [
          {
            "when": "${@viewportProfile == @hubRoundSmall}",
            "type": "Container",
            "height": "100vh",
            "width": "100vw",
            "items": [
              {
                "type": "Image",
                "source": "https://dl.dropbox.com/s/ws3p9ckm2voekdl/helloWorld.png?dl=0",
                "height": "90vh",
                "width": "100vw",
                "position": "absolute",
                "left": "3vw",
                "right": "3vw",
                "top": "10vh"
              },
              {
                "type": "Text",
                "text": "Hello World!",
                "position": "absolute",
                "left": "12vw",
                "right": "12vw",
                "top": "20vh",
                "fontWeight": "900",
                "textAlign": "center",
                "fontSize": "10vw"
              }
            ]
          },
          {
            "when": "${@viewportProfile == @hubLandscapeMedium || @viewportProfile == @hubLandscapeLarge || @viewportProfile == @tvLandscapeXLarge}",
            "type": "Container",
            "height": "100vh",
            "width": "100vw",
            "items": [
              {
                "type": "Text",
                "text": "Hello World!",
                "position": "absolute",
                "left": "5vh",
                "top": "2vh",
                "fontWeight": "900",
                "fontSize": "6vw"
              },
              {
                "type": "Image",
                "source": "https://dl.dropbox.com/s/ws3p9ckm2voekdl/helloWorld.png?dl=0",
                "height": "90vh",
                "width": "100vw",
                "position": "absolute",
                "left": "35vh",
                "top": "12vh"
              },
              {
                "type": "Text",
                "text": "Here is some text used just on larger screens and there is more room to play with!",
                "position": "absolute",
                "left": "5vh",
                "right": "55vw",
                "top": "15vh",
                "fontSize": "5vw"
              }
            ]
          }
        ]
      }
    ]
  }
}

Step 6 — Bringing it all together

Nearly there, well done if you have made it this far.

Now we just need to add that code into our skill, and tell our lambda code that it should reference the code.

In your IDE, in the file explorer select the lambda > custom folder and add a new file called hello.json.

Open up the hello.json file, and paste in the code from the APL designer from the previous step.

Save it, and then move back to the index.js file.

In the HelloWorldIntentHandler, we need to add a directive (see below). This tells the skill to display something on a screen for this intent, and it gives it the location of the file (hello.json) so it knows what content to display.
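Since the image may not render here, this is a sketch of the directive. The type is the standard APL RenderDocument directive; in your skill the document would come from require('./hello.json'), so the inline helloDocument here is just a placeholder.

```javascript
// Placeholder for require('./hello.json') (the file created above)
const helloDocument = { type: 'APL', version: '1.0' };

// The RenderDocument directive that tells Alexa to draw our APL document
const aplDirective = {
  type: 'Alexa.Presentation.APL.RenderDocument',
  document: helloDocument,
};

// Inside the handler it is attached to the response, roughly:
// return handlerInput.responseBuilder
//   .speak(speechText)
//   .addDirective(aplDirective)
//   .getResponse();
```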

That’s it, you’re done! Just kidding; although you could upload your skill now and it would work fine on an Alexa device with a screen, what about #voicefirst and users without a screen who might want to try out your skill? We need to tweak the code to accommodate this.

The below code is a function within the index.js file that checks what type of device the skill has launched on. Type this into your index.js file, somewhere out of the way (I normally put functions like this after my last intent handler, but before the skill builder).
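In case the image doesn’t come through, here is a sketch of such a helper (the function name supportsAPL is my own choice, not necessarily what the article’s image shows). It inspects the supported interfaces that the device reports in the request envelope.

```javascript
// Returns true if the requesting device supports APL (i.e. has a screen
// with the APL interface enabled), based on the request envelope.
function supportsAPL(handlerInput) {
  const { supportedInterfaces } =
    handlerInput.requestEnvelope.context.System.device;
  return supportedInterfaces['Alexa.Presentation.APL'] !== undefined;
}
```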

We now just need to change the HelloWorldIntentHandler so it has an IF-ELSE statement.

Here is how the logic works.

It calls the function we just added to work out if the device has a screen or not, then:

IF device has a screen, use APL

Else (device doesn’t have a screen), do something else.

If you review the code in the image below you will see that the If statement is circled in red, and the else statement is circled in blue.

Change your code to match that and save everything.
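Since the image may not render here, the if-else logic can be sketched as below. To keep the sketch self-contained, the screen check and the APL document are passed in as parameters; in the real index.js you would call your screen-check helper and require('./hello.json') directly inside the handler’s handle method.

```javascript
// Sketch of the reworked handle logic: APL directive when the device
// has a screen, a plain voice response (with a card) otherwise.
function handleHelloWorld(handlerInput, supportsAPL, helloDocument) {
  const speechText = 'Hello World!';
  if (supportsAPL(handlerInput)) {
    // IF: device has a screen, speak and render the APL document
    return handlerInput.responseBuilder
      .speak(speechText)
      .addDirective({
        type: 'Alexa.Presentation.APL.RenderDocument',
        document: helloDocument,
      })
      .getResponse();
  }
  // ELSE: no screen, speak and fall back to a card in the Alexa app
  return handlerInput.responseBuilder
    .speak(speechText)
    .withSimpleCard('Hello World', speechText)
    .getResponse();
}
```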

Step 7 — Deploy your skill

Now everything is ready to be deployed. In the terminal, type in ask deploy, hit enter, and wait while everything gets uploaded to your Alexa developer and AWS accounts.

Top tip: if later on you just want to update the lambda code, which means the intent model does not need rebuilding, type ask deploy -t lambda

When it has finished uploading you will see the below messages in your terminal.

If you go to your AWS console, and go to your lambda functions, you will see a new one for this skill has been added, and if you go to the Alexa Developer console, you will see this skill listed there also.

Click to edit your skill, then when it has loaded up, click on interfaces, and then toggle Alexa Presentation Language on.

Now click on save interfaces, and build the model.

When the model has finished building, test your skill and see what it looks like. You can test it on physical devices also.

That’s it (phew!), your first APL skill is done and ready to be certified.

I hope you have enjoyed using this guide, let me know your feedback, and share your APL skills with me.
