Amazon’s Alexa meets DevOps (2/2)

David Mukiibi
Published in The Andela Way
9 min read · Feb 24, 2020

Photo by Vu M. Khuee on Unsplash, with a twist

This continues from part 1, where I showed you how to create a new Alexa skill from scratch.

In this part I will take you through writing the code for the backend using the Alexa SDK and the AWS boto3 Python SDK to query AWS S3, as hinted at earlier in part (1/2) here. Let’s begin.

Building the backend

The Alexa skill we are creating is one that will create, delete, and list a registered AWS user’s S3 buckets.

To pull this off, we have to integrate Alexa with AWS S3 through its well-documented API.

To our advantage, the awesome engineers at AWS have created very user-friendly APIs for all their services and coupled them with SDKs and documentation that are easy to read, understand, and use.

As mentioned earlier, for this one we shall use the AWS Python SDK, boto3. As Amazon’s AWS documentation states:

“Boto is the Amazon Web Services (AWS) SDK for Python. It enables Python developers to create, configure and manage AWS services, such as EC2 and S3. Boto provides an easy to use, object-oriented API, as well as low-level access to AWS services”

Talk is really cheap if you ask me, so let’s write some code.

Create a main.py file (or give the file any name you wish) and write the code below in it:

In the screenshot above, we import all the necessary packages we shall require to build this backend, and we set up STS (Security Token Service), an AWS temporary security credentials service. That is lines 16 to 27. Don’t mind line 18; I will show you where to get that value when we are setting up the AWS IAM permissions.

On line 55 we pick the name of the bucket to be created from a slot value and assign it to a variable, “bucket_name”.

This is the name of the S3 bucket you gave Alexa when you said a phrase like “create kampala bucket”.

On lines 57 to 63 we create the S3 bucket with the help of AWS’s boto3 SDK, and we set the “speak_output” variable with the appropriate response depending on whether certain checks (the “if” statement on line 59) pass or fail.

On line 64 we handle errors in case any occur while we try to create the S3 bucket.

Finally, on line 69 we end the “create bucket” class definition, while on lines 70 to 73 we tell the Alexa service to build and deliver our response to the user.

For this Alexa skill I have “created”, “listed”, “counted” and “deleted” S3 buckets.

In the screenshot above is one of the classes, the one that creates an S3 bucket. For the sake of the length of this article, I have not included the remaining code, but all of it can be found in this GitHub repository.

I tried to write the code as simply as possible so that anyone can understand it even without comments. Let me know in case you need any further clarification on any part of this article or the code.

AWS IAM permissions

Since the Alexa skill is going to be hosted on AWS Lambda and we are calling the S3 API, we have to grant AWS Lambda permission to call the S3 API while it runs the skill backend.

To achieve this, we make use of an assume role. This is what allows “trusted” users/services (Lambda, EC2, etc.) to assume a “role” for a given time period (like 1 hour, or 10 minutes, depending on what you set). Essentially, it grants an entity all the permissions the role has for a given time.

We shall now create the assume role with “create”, “list” and “delete” permissions in scope.

Head over here to create an assume role. You should see something like this:

Select “AWS Service” as the trusted entity type, then select “Lambda” as the service that will call AWS services on your behalf. What this means, essentially, is allowing this service to use this role.

With that selected, click “Next: Permissions” button. That will take you to this page to assign permissions to the role.

Select “AmazonS3FullAccess” to grant this role full S3 access. I know this is broader than we strictly need.

You can add tags if you so wish by clicking the “Next: Tags” button to take you to the next page and add some. After that, click “Next” to head to the page where you can give your role a meaningful name and finally create the role.

And just like that, we have created a role. But we are not finished yet, as we have yet to “tell” the role which entities/users are allowed to use it. Let’s do that next.

Head over to your newly created role in the IAM section of your AWS console, as shown below, and click on it.

Role

You will then be taken to the next page where you can see your role in detail.

Edit Role Trust Relationships

Click on the “Trust relationships” tab to edit these and add the entities that will use this role. For our case, the entity that will make use of this role is the Alexa-hosted Lambda function. We therefore need its Amazon Resource Name (ARN) to add it among the trusted relationships of this newly created role.

Switch over to your Alexa developer console and, under the “Code” tab, click the icon as shown below.

Assume Role ARN

This will display the ARN for your skill as shown below.

AWS Lambda Execution Role ARN

Copy this, then go back to the AWS console and add the copied ARN to the assume role’s trust relationships. Your new trust relationship configuration JSON should look like so. (Don’t mind line 15.)

In the default trust relationship, we have added lines 12 to 20 to allow the Lambda function hosting our Alexa skill (the ARN on line 16) to assume this role.
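For reference, the edited trust policy might look something like the sketch below. The account ID and role name in the second ARN are placeholders; use the execution role ARN you copied from your own skill:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/AlexaHostedSkillLambdaRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The first statement is the default Lambda trust; the second is the one we add for the Alexa-hosted skill’s execution role.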

With that done, we are now good to go. Let’s take this skill for a spin. Shall we?

Go back to the Alexa developer console here and then go to the “Test” tab. You can enable the microphone and speak the commands, or simply type them in using your computer keyboard, which I will do for the sake of this article.

Test The Skill

Invoke the Alexa skill with the same invocation name you created at the beginning. The invocation name for this skill is “devops cloud assistant”.

This will trigger the “LaunchRequest” and subsequently its handler, and Alexa will speak the phrase “Welcome back sir. How may I help you today?” as I set it through the “speech_text” variable.

Skill Invocation

Go ahead and speak/type more command words/phrases similar to those in your “utterances”. These have to be identical phrases, otherwise a fallback intent will be triggered, and that does us no good.

It’s worth noting that at the end of Alexa’s response to your request, she remains open to further commands/utterances for 30 seconds, after which you will have to invoke the skill again from the top using the invocation name.

Skill Invocation

And just like that, we have created an S3 bucket on AWS using Amazon Alexa.

It is worth noting that all these voice commands also work on Amazon’s speaker-enabled devices like the Echo or Echo Dot.

Go ahead and try other “commands” to delete or list your S3 buckets. To confirm your action, you can go to the AWS S3 console and check for your created/deleted buckets.

All the code for this is pushed to this GitHub repository for your reference.

The code is not perfect but it works, and as our forefathers once said, if it works, don’t touch it. So whoami to touch it?

Photo by Andrew Seaman on Unsplash

A few bottlenecks to look out for:

  1. You must fulfill the requirements of all the platforms you are “gluing” together to avoid development headaches. Such requirements in this project included the AWS IAM permissions that allow the Alexa skill access to the AWS S3 API. Essentially, authentication.
  2. Alexa skill builder slot types are limited to a given set, like numbers, movie titles, city names, etc., and some are restricted to certain regions. For instance, there are slot types one can access and use during development while in the US that one cannot use while in Kenya. By this I mean that when you type in the search bar while selecting which slot type to use, you never see the one you are looking for if you are in a region where it is restricted. It is worth noting, though, that you can create your own slot types, for instance if you want them to be specific to, say, African surnames or village clan chiefs.
  3. S3 bucket naming conventions. This is not an issue until you are building a project like this one, where the slot type is fine, but the name you want to use, which matches the Alexa slot type, is not accepted by S3 because it is not unique across the whole S3 universe.
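To soften the third bottleneck, you could validate the spoken name locally before ever calling S3. A quick sketch of such a check against the (abridged) naming rules; note that global uniqueness can only be confirmed by the API itself:

```python
import re

# S3 bucket names: 3-63 characters, lowercase letters, digits,
# hyphens and dots, starting and ending with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")


def is_valid_bucket_name(name):
    """Cheap local check before handing the name to the S3 API."""
    return bool(BUCKET_NAME_RE.match(name))
```

Running the check on the slot value first lets Alexa give a helpful error phrase instead of surfacing a raw API failure.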

All these requirements must align like Orion’s Belt to pull off a project of this sort seamlessly. The slot type you choose determines what bucket names you can create, and those must follow the S3 bucket naming conventions.

This project is just the tip of the iceberg in terms of code quality, solution creativity, and adherence to Alexa service “best practices”. It is a showcase of the possibilities and capabilities Alexa can bring to the table, even for us engineers in the software delivery realm.

Feel free to fork this repository and make your own additions to the code, or use the idea as inspiration to do a lot more with Alexa than is currently available to automate and simplify your life.

Imagine telling Alexa to run your unit tests and report back with the percentage that passed, all while you watch a movie in your living room or commute to the office, or telling Alexa to build a Docker image for a given microservice while you focus on something else. Just imagine…

Let me know of your work on this topic on Twitter, LinkedIn, or here in the comments. I’d love to borrow some ideas myself or even collaborate on a project with you.

To Endless Possibilities. Cheers!
