How to create an Alexa skill

Create a custom Alexa skill for an ice cream shop (Creamy Heaven)

Lokesh Ballenahalli
Voice Tech Podcast
9 min read · Nov 28, 2019


Amazon Alexa is a virtual assistant developed by Amazon (the brain behind millions of Echo devices). To learn more about Alexa, see What is Alexa and Alexa Skill.

While developing an Alexa skill, you can design your own custom model or use pre-built models. In this post, we are building a custom skill called Creamy Heaven (an ice cream shop skill).

Let’s first see the dialogue

User: open creamy heaven
Alexa: Welcome, you can select from stick or scoop. Which would you like to try?

User: scoop
Alexa: Nice, which flavour do you like in scoop ice cream?

User: I would like to eat strawberry today
Alexa: You selected strawberry

To build a skill for the above dialogue, we should know how Alexa works and what an invocation name, intents, utterances and slots are.

How Alexa skill works

When a user speaks to an Alexa device ("Alexa, open creamy heaven"), the speech is sent to the Alexa service, where speech recognition, machine learning and natural language understanding are applied to determine what the user wants. Once the intent is identified, the service sends a JSON request to the skill (Creamy Heaven), which is hosted in Lambda. After the logic you wrote inside the skill is executed, a JSON response is sent back to the Alexa service, which converts the response text to speech and sends it to the Alexa device; if the companion Alexa app is in use, a visual component is displayed in the app.
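As a sketch of that round trip, the JSON request Alexa sends for a flavour request looks roughly like this (a trimmed, illustrative envelope; real requests carry many more fields, such as session and context):

```javascript
// A trimmed sketch of the JSON request Alexa sends to the skill's endpoint.
// Field names follow the Alexa request format; the slot value is illustrative.
const sampleRequest = {
    version: '1.0',
    request: {
        type: 'IntentRequest',
        intent: {
            name: 'scoopFlavoursIntent',
            slots: {
                scoopslot: { name: 'scoopslot', value: 'strawberry' }
            }
        }
    }
};

// The skill inspects the request type and intent name to route to a handler.
console.log(sampleRequest.request.type);        // IntentRequest
console.log(sampleRequest.request.intent.name); // scoopFlavoursIntent
```

This is exactly the routing the SDK's canHandle functions perform for you later in this post.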

Now that we know how a skill works, let's see what invocation names, intents, utterances and slots are.

Invocation name:

The name of the skill; what the user should say to invoke it
ex: Creamy heaven

Intents:

The user's intention to ask something or do something (an action)
ex: Scoop, Stick

Utterances:

A set of phrases the user says when making a request
ex: Alexa, ask creamy heaven for sticks
Alexa, ask creamy heaven for flavours in sticks

Slots:

A representative list of possible values for a variable (slot) in an utterance
ex: mango, strawberry, caramel, vanilla

let's start creating the skill

1. Go to developer.amazon.com/alexa/console/ask

2. Enter your credentials and click Sign In; if you don't have an account, create one.

The developer console will display your existing skills, if any.

3. Click Create Skill

4. Enter the skill name as creamy heaven and select English (US) as the default language (you can change the language later). The Custom model is selected by default; leave it as is and click Create Skill.

5. Choose the Start from scratch template

Interaction Model: the section where you create the interaction model

  • Utterance Conflicts: shows a list of utterances that map to more than one intent
  • Invocation: the skill's invocation name
  • Intents: add and edit intents. Built-in intents come pre-built for you along with their utterances, so there is no need to supply sample utterances for them.
  • Slot Types: create slot types; built-in slot types come pre-built for you
  • JSON Editor: the skill's interaction model in a JSON editor

Interfaces: enables additional directives and specific extra features.

Endpoint: the endpoint to which Alexa sends a request when a user invokes the skill. In our case, we will host the skill in Lambda and provide the AWS Lambda ARN.

Let's go ahead and add the intents.


Once the invocation name is spoken ("Alexa, open creamy heaven"), the default LaunchRequestHandler is triggered, which greets the user with "Welcome, you can select from scoop or stick. Which would you like to try?"

To build creamy heaven skill we need three more intents:

  • ScoopIntent: to call this intent the user can use sentences (utterances) such as "I would like to have a scoop", "scoop please", "I will take scoop today"
  • StickIntent: to call this intent the user can use sentences (utterances) such as "stick please", "I like a stick", "I would like to have a stick"
  • ScoopFlavoursIntent: to call this intent the user can use sentences (utterances) such as "I like {scoopslot}", "{scoopslot} please", "{scoopslot}"

where {scoopslot} is a slot name.
{scoopslot} is mapped to a slot type (scoopslotType), which contains a list of possible values such as mango, strawberry, caramel and vanilla.

6. Click Add Intent and add ScoopIntent

7. Let's add utterances for ScoopIntent

8. Let's create StickIntent and add utterances

9. Create scoopFlavoursIntent. To add a slot to an utterance, type an opening curly brace; a pop-up will ask for the slot name. Enter the slot name: scoopslot.

10. Fill all the utterances


Let's create scoopslotType, which holds a list of possible values such as mango, strawberry, caramel and vanilla.

11. Add Slot Types

12. List all the values for scoopslotType

13. Save the model and link scoopslot to scoopslotType
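After saving and linking, the JSON Editor should show the intent and slot type wired together, roughly like this trimmed sketch (illustrative; your generated model will also contain the other intents, built-in intents and more sample utterances):

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "creamy heaven",
      "intents": [
        {
          "name": "scoopFlavoursIntent",
          "slots": [
            { "name": "scoopslot", "type": "scoopslotType" }
          ],
          "samples": [ "i like {scoopslot}", "{scoopslot} please", "{scoopslot}" ]
        }
      ],
      "types": [
        {
          "name": "scoopslotType",
          "values": [
            { "name": { "value": "mango" } },
            { "name": { "value": "strawberry" } },
            { "name": { "value": "caramel" } },
            { "name": { "value": "vanilla" } }
          ]
        }
      ]
    }
  }
}
```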

14. Click Build Model; you should get a "Full Build Successful" message.

Now that we have trained the model, let's go ahead and build the endpoint. We will use Visual Studio Code as the editor and the ASK SDK for Node.js. You need an AWS account; create one if you don't have one (the free tier is enough).


Watch this video for the link to download and configure the Alexa Skills Kit Command-Line Interface (ASK CLI).

Go to the terminal in Visual Studio Code, create a folder and type the command ask new.

Select Node.js as the runtime; a list of templates will be shown. Choose the Hello World template and enter the skill name as creamy-heaven.

15. Expand the project folder, go to creamy-heaven -> lambda -> custom -> index.js

Once the invocation name is spoken ("Alexa, open creamy heaven"), the default LaunchRequestHandler is triggered. Let's replace the speakOutput inside LaunchRequestHandler with "Welcome, you can select from scoop or stick. Which would you like to try?"

Let's remove HelloWorldIntentHandler, add scoopIntentHandler, stickIntentHandler and scoopFlavoursIntentHandler, and register the newly added handlers with Alexa.SkillBuilders.

// This sample demonstrates handling intents from an Alexa skill using the Alexa Skills Kit SDK (v2).
// Please visit https://alexa.design/cookbook for additional examples on implementing slots, dialog management,
// session persistence, api calls, and more.
const Alexa = require('ask-sdk-core');

const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        const speakOutput = 'Welcome, you can select from scoop or stick. Which would you like to try?';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};

const scoopIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'ScoopIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'Nice, which flavour do you like in scoop ice cream';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt('please say, which flavour do you like in scoop ice cream')
            .getResponse();
    }
};

const stickIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'StickIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'Nice, which flavour do you like in stick ice cream';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt('please say, which flavour do you like in stick ice cream')
            .getResponse();
    }
};

const scoopFlavoursIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'scoopFlavoursIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'You selected ' + handlerInput.requestEnvelope.request.intent.slots.scoopslot.value;
        return handlerInput.responseBuilder
            .speak(speakOutput)
            // .reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};

const HelpIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.HelpIntent';
    },
    handle(handlerInput) {
        const speakOutput = 'You can say hello to me! How can I help?';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};

const CancelAndStopIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
            && (Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.CancelIntent'
                || Alexa.getIntentName(handlerInput.requestEnvelope) === 'AMAZON.StopIntent');
    },
    handle(handlerInput) {
        const speakOutput = 'Goodbye!';
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .getResponse();
    }
};

const SessionEndedRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'SessionEndedRequest';
    },
    handle(handlerInput) {
        // Any cleanup logic goes here.
        return handlerInput.responseBuilder.getResponse();
    }
};

// The intent reflector is used for interaction model testing and debugging.
// It will simply repeat the intent the user said. You can create custom handlers
// for your intents by defining them above, then also adding them to the request
// handler chain below.
const IntentReflectorHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest';
    },
    handle(handlerInput) {
        const intentName = Alexa.getIntentName(handlerInput.requestEnvelope);
        const speakOutput = `You just triggered ${intentName}`;
        return handlerInput.responseBuilder
            .speak(speakOutput)
            //.reprompt('add a reprompt if you want to keep the session open for the user to respond')
            .getResponse();
    }
};

// Generic error handling to capture any syntax or routing errors. If you receive an error
// stating the request handler chain is not found, you have not implemented a handler for
// the intent being invoked or included it in the skill builder below.
const ErrorHandler = {
    canHandle() {
        return true;
    },
    handle(handlerInput, error) {
        console.log(`~~~~ Error handled: ${error.stack}`);
        const speakOutput = `Sorry, I had trouble doing what you asked. Please try again.`;
        return handlerInput.responseBuilder
            .speak(speakOutput)
            .reprompt(speakOutput)
            .getResponse();
    }
};

// The SkillBuilder acts as the entry point for your skill, routing all request and response
// payloads to the handlers above. Make sure any new handlers or interceptors you've
// defined are included below. The order matters - they're processed top to bottom.
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        scoopIntentHandler,
        stickIntentHandler,
        scoopFlavoursIntentHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        SessionEndedRequestHandler,
        IntentReflectorHandler, // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers
    )
    .addErrorHandlers(
        ErrorHandler,
    )
    .lambda();

scoopFlavoursIntentHandler reads the scoopslot value the user provided and echoes it back:

const speakOutput = 'You selected ' + handlerInput.requestEnvelope.request.intent.slots.scoopslot.value;
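Slot values can be missing when Alexa matches the intent without filling the slot, and the chained property access above would then throw. A small defensive accessor avoids that (a sketch: the helper name and the 'vanilla' fallback are our own, not part of ask-sdk-core):

```javascript
// Hypothetical helper: read the scoopslot value safely, with a fallback flavour.
// (getScoopFlavour and the 'vanilla' default are illustrative, not part of the SDK.)
function getScoopFlavour(requestEnvelope) {
    const intent = requestEnvelope.request && requestEnvelope.request.intent;
    const slot = intent && intent.slots && intent.slots.scoopslot;
    return (slot && slot.value) || 'vanilla';
}

// Usage with a filled slot and with an empty one:
console.log(getScoopFlavour({ request: { intent: { slots: { scoopslot: { value: 'mango' } } } } })); // mango
console.log(getScoopFlavour({ request: { intent: { slots: {} } } })); // vanilla
```

Inside a handler you would call it as getScoopFlavour(handlerInput.requestEnvelope).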

Let's zip index.js, package.json and the node_modules folder.

16. To create the Lambda function, sign in to Amazon Web Services and select Lambda under Services.

According to Wikipedia,
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of the Amazon Web Services. It is a computing service that runs code in response to events and automatically manages the computing resources required by that code

17. Create a new Lambda function named creamyheaven and select Node.js 8.10 as the runtime.

18. Add Alexa Skills Kit as a trigger to the Lambda function.

19. Copy the Skill ID from the Alexa developer console under Endpoint -> AWS Lambda ARN.

20. Paste the copied Skill ID into the trigger configuration, set Skill ID verification to Enabled, and click Add.

21. In the Function code section, select Upload a .zip file as the code entry type and Node.js 8.10 as the runtime, then upload the zip file.

22. Copy the ARN from the Lambda page and paste it in the Alexa developer console.

23. Click Save Endpoints.

It's time to test the skill. Switch to the Test tab in the Alexa developer console and make sure skill testing is enabled for Development.

That's it! We have successfully created the Creamy Heaven skill.

Let me know what you're building. Have fun!!

This post is a part of ‘Build voice apps for Amazon Alexa and Google action’ series by VoiceTechPrism : https://voicetechprism.com/

