Create a simple Amazon Alexa Skill with a Java backend

Amazon Alexa is a voice (virtual) assistant, which is available on standalone devices (the Amazon Echo family of devices, Fire TV, Fire Tablet, etc.), in mobile applications, and built into devices such as speakers and Smart Home controllers.

An Alexa Skill is a voice experience for the Amazon Alexa cloud-based voice service, which handles the interaction between a user and an Alexa device and its services. Usually Alexa Skills use AWS Lambda functions or REST-API web services as their backend to provide the logic behind the interaction.

This article briefly describes how to create such an Alexa Skill with its backend in an AWS Lambda function. Prerequisites — accounts created in the Amazon Developer Console and the AWS Management Console.

Log in and open the Developer Console, then navigate to the Alexa section

In the Your Alexa Consoles — select Skills

Hit the Create Skill button

Type a Skill name. In the development environment the name can be anything, but when publishing the Skill the name should be unique within the selected language. Look at existing Alexa Skills in the US, UK, Germany, etc.

Keep a Custom model as selected and hit the Create skill button. On the next page — keep the Start from scratch template selected and hit the Choose button.

On the Alexa Skill dashboard — navigate to the Invocation section, using the menu on the left side. Type the Skill Invocation Name — this is the name users will use to refer to this skill when interacting with an Alexa device.

Invocation name cannot contain the wake words “alexa”, “computer”, “amazon”, or “echo” (these are reserved Alexa device “wake-up” names), it also may only contain alphabetic characters, apostrophes, periods and spaces.

Hit the Save Model button before proceeding further.

Navigate to the Intents section. Intents are templates of user interaction sentences — requests, questions, answers to Alexa’s re-prompts. Each intent should contain several examples of such sentences that have a common purpose. Alexa services learn from these examples to understand which intent to use when a user pronounces words and sentences. The more examples — the better the recognition of which intent to use.

Intent name can only contain case-insensitive alphabetical characters and underscores, and it must begin and end with an alphabetic character. Numbers, spaces, or other special characters are not allowed.

As an example — the following utterances should be recognized as part of a WelcomeIntent (an intent can have any name within the restrictions mentioned above). When Alexa recognizes the intent — it can respond with a corresponding answer.
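A few sample utterances for such a WelcomeIntent might look like this (an assumed set for illustration; “what can i do” is the phrase used in the simulator test later in this article):

```text
what can i do
what can you do
help me
```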

Intents can also contain slots — typed tags marking where a particular kind of word is expected within the sentence. More details in Create Intents, Utterances, and Slots
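As a sketch, an intent with a slot is declared in the skill’s interaction model JSON roughly like this (the intent name, slot name and sample here are illustrative, not part of this article’s skill; AMAZON.DATE is one of the built-in slot types):

```json
{
  "name": "GetScheduleIntent",
  "slots": [
    { "name": "date", "type": "AMAZON.DATE" }
  ],
  "samples": [
    "what is the schedule for {date}"
  ]
}
```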

Predefined intents, such as AMAZON.FallbackIntent, AMAZON.CancelIntent, etc., are recognized from expected commands — please look at the Standard built-in intents guideline.

Hit the Save Model button before proceeding.

Navigate to the Endpoint section. Select AWS Lambda ARN as the Service Endpoint Type — the Alexa Skill in this case will have its backend logic in an AWS Lambda function.

Your Skill ID contains a unique identifier for this Skill. Copy it — it will be used when binding this Skill to its AWS Lambda function.

Hit the Save Model button. Now we are ready to create an AWS Lambda function for this Skill.

Log in and open the AWS Management Console. In Services — hit the Lambda link. Keep in mind in which region the function is to be created. We use N. Virginia in this example.

Hit the Create function button to create a new function.

Select the Author from scratch option, type a function name (use only letters, numbers, hyphens, or underscores with no spaces). For this example — select Java 8 in the Runtime list. Leave the role-related settings as-is and hit the Create function button.

Copy-paste the code and hit the Save button.

On the function dashboard — hit the Alexa Skills Kit trigger, in the Designer section. A box Alexa Skills Kit will be added to the function.

This trigger needs configuration: when the Alexa Skills Kit box is selected — scroll down to the Configure triggers section and paste Your Skill ID, recently copied in the Alexa Skill dashboard (in the Endpoint section). Hit the Add button — to add the new trigger. Scroll to the top and hit the Save button — to save changes in the function.

Copy the ARN, located over the Save button — this is a unique identifier for the function.

Switch to the Alexa Skill dashboard, in the Endpoint section, and paste copied function ARN to the field Default Region. Hit the Save Endpoints button.

Now we can create Java code for the Lambda function. Here we will use IntelliJ IDEA with Gradle-plugin. Open the IDEA and hit Create new project.

Select the Gradle project type, tick the checkbox with Java and choose SDK 1.8. Hit the Next button

Type your unique Group Id and give this application a name in the ArtifactId. Hit Next.

Keep the next step as is, hit Next

Give a name to the project and select its location. Hit the Finish button.

Add to the build.gradle file the following dependencies and a jar task. Please look at the documentation to learn more details about Gradle dependencies.

The shorter version of the dependency notation uses colons.

In these dependencies:

  • junit is for unit tests only
  • ask-sdk contains the components for Alexa Skills (which also pull in dependencies for the DynamoDB database, AWS Lambda and Apache client support) and the core logging component. However, the logging is missing one component, which is filled in by the next dependency
  • aws-lambda-java-log4j2 contains the log4j2 LambdaAppender, missing from the ask-sdk dependencies. Please note that logging uses the log4j2 version — log4j is not recommended
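A minimal build.gradle sketch of these dependencies and the jar task might look like this (the version numbers are assumptions — check the current versions in Maven Central before use):

```groovy
dependencies {
    // unit tests only
    testCompile group: 'junit', name: 'junit', version: '4.12'
    // Alexa Skills Kit SDK (also pulls in DynamoDB, AWS Lambda and Apache client support)
    compile 'com.amazon.alexa:ask-sdk:2.20.2'
    // log4j2 LambdaAppender, missing from the ask-sdk dependencies
    compile 'com.amazonaws:aws-lambda-java-log4j2:1.1.0'
}

// copy the dependency libs into the output jar ("fat jar"),
// otherwise the deployed AWS Lambda cannot find them
jar {
    from {
        configurations.compile.collect { it.isDirectory() ? it : zipTree(it) }
    }
}
```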


Enable Auto-Import turns on automatic download and setup of added or changed dependencies

Included dependencies (and their dependencies) can be reviewed in the Gradle view and in the project External Libraries section. This might be helpful when extra dependencies are added manually and they should have a version matching the existing ones

The jar task provides copying of the dependency libs into the build output jar-file. Without this — the jar-file would contain only the Lambda component files, which is not sufficient for the deployed AWS Lambda.

To build the project — right-click one of the tasks in the Gradle view, Tasks section, and hit Run — all tasks, from the top one down to the selected one, will be run one after another. E.g. run the jar task — the preceding tasks will be run as well

Use the Copy Path or Reveal in Finder (or Show in Explorer on Windows) context menu option to find the jar-file and ensure the dependency libraries are included in it: look at its size — it should be several megabytes — or unarchive it. The Modified file attribute will help later to ensure this jar-file is built with the latest changes, before uploading it in the Lambda dashboard

Now it is ready for creating classes to handle Alexa Skills requests. Optionally (and preferably) create the classes within packages.

Create a package in the src/main/java folder. I use a unique name, composed from my GitHub account and the names of the logic content

In this package — create following request handler classes:

  • CustomLaunchRequestHandler — it will handle cases when a user makes a request to the Alexa skill but does not provide a specific intent.
package com.github.satr.ask.handlers;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.impl.LaunchRequestHandler;
import com.amazon.ask.model.LaunchRequest;
import com.amazon.ask.model.Response;

import java.util.Optional;

import static com.amazon.ask.request.Predicates.requestType;

public class CustomLaunchRequestHandler implements LaunchRequestHandler {
    @Override
    public boolean canHandle(HandlerInput input, LaunchRequest launchRequest) {
        return input.matches(requestType(LaunchRequest.class));
    }

    @Override
    public Optional<Response> handle(HandlerInput input, LaunchRequest launchRequest) {
        return input.getResponseBuilder()
                .withSpeech("This is a simple Alexa Skill example")
                .build();
    }
}
  • WelcomeRequestHandler — it will handle requests recognized as matching the WelcomeIntent, created earlier
package com.github.satr.ask.handlers;

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.model.Response;

import java.util.Optional;

import static com.amazon.ask.request.Predicates.intentName;

public class WelcomeRequestHandler implements RequestHandler {
    @Override
    public boolean canHandle(HandlerInput handlerInput) {
        return handlerInput.matches(intentName("WelcomeIntent"));
    }

    @Override
    public Optional<Response> handle(HandlerInput handlerInput) {
        return handlerInput.getResponseBuilder()
                // the response text here is an example - put any welcome phrase
                .withSpeech("You can ask me what you can do")
                .build();
    }
}

And an Alexa Skill stream handler SimpleAlexaSkillStreamHandler, which uses the just-created handlers

package com.github.satr.ask.handlers;

import com.amazon.ask.Skill;
import com.amazon.ask.SkillStreamHandler;
import com.amazon.ask.Skills;

public class SimpleAlexaSkillStreamHandler extends SkillStreamHandler {
    private static Skill getSkill() {
        return Skills.standard()
                .addRequestHandler(new WelcomeRequestHandler())
                .addRequestHandler(new CustomLaunchRequestHandler())
                .build();
    }

    public SimpleAlexaSkillStreamHandler() {
        super(getSkill());
    }
}
Now we can build the project and make a jar-file (by running the jar task from the Gradle plugin, as described above)

Open the AWS Lambda dashboard to upload the jar-file to the Lambda with the Upload button. Also put into the Handler field the name of the stream handler class (including the package name), followed by two colons and the handle method name — in this example it would be

com.github.satr.ask.handlers.SimpleAlexaSkillStreamHandler::handleRequest

Hit the Save button to perform the jar-file upload.

Test the function. Select the Configure test events in the test dropdown

Choose the Amazon Alexa Start Session template, typing alexa in the search field
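This template produces a LaunchRequest test event of roughly the following shape (an abridged sketch — the template generated by the console contains its own placeholder IDs and additional fields):

```json
{
  "version": "1.0",
  "session": {
    "new": true,
    "sessionId": "amzn1.echo-api.session.[unique-value-here]",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.[unique-value-here]"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.account.[unique-value-here]"
    }
  },
  "request": {
    "type": "LaunchRequest",
    "requestId": "amzn1.echo-api.request.[unique-value-here]",
    "timestamp": "2016-10-27T18:21:44Z",
    "locale": "en-US"
  }
}
```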

Give it a name and hit the Create button to create the test

Select the created test in the list and hit the Test button

Details of the Execution result contain the response of the function in case of success, or an issue description in case of an error. In this example it shows the response from the CustomLaunchRequestHandler, as no particular intent was recognized

<speak>This is a simple Alexa Skill example</speak>

Log output contains a log for this function call

These logs can be found for this and other calls. Click on logs to navigate to the CloudWatch logs for this function. Click on one of the Log streams, with the Last Event Time when you ran the function — one stream can contain logs for multiple function invocations

The CloudWatch log will have an error “No log4j2 configuration file found” — we will fix it shortly.

Let’s test the Alexa Skill with this new function. Open the Alexa Skill dashboard and open the Skill (if it is closed already)

Hit the Build Model button — a build should be performed if the Skill has been changed (and saved). The Skill Invocation Name is used to request this skill on the Alexa device or in the simulator

Navigate to the Test tab. Allow testing by selecting the Development option in the dropdown list

In the Alexa Simulator input field — type open simple example (it is the invocation name for this skill), hit Enter

If the progress sign (dots moving within the bar) continues moving for a long time (many seconds) — probably an antivirus affects the connection. In this case it helps to temporarily turn off the antivirus or at least its Web Shield

Try to request the skill again. When it responds — turn on the antivirus (or its Web Shield) again; now the interaction should work until the next time the Test tab is re-opened. Keep the Skill I/O checkbox ticked (“on”) — with this option the Skill I/O JSON Input and JSON Output text areas will contain the request and response JSON

The response is the same as in the test of the Lambda function above. Let’s try some sentences which match the WelcomeIntent

ask simple example what can i do

JSON Input contains the request to the Skill, with the recognized WelcomeIntent. It would also contain slots, if the intent’s utterance contained them.

JSON Output contains the response for this intent, created within the WelcomeRequestHandler of the Lambda function
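For reference, the JSON Output for a speech response produced by the SDK’s response builder has roughly this shape (an abridged sketch — the ssml text is whatever welcome phrase the handler sets, and the real output contains additional fields):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Your welcome phrase</speak>"
    }
  }
}
```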


During development and use of Lambda functions it is useful to log some information. Currently this project has the needed dependencies, but it is missing a config file.

Add a file log4j2.xml to the folder src/main/resources with the following content

<?xml version="1.0" encoding="UTF-8"?>
<Configuration packages="com.amazonaws.services.lambda.runtime.log4j2">
    <Appenders>
        <Lambda name="Lambda">
            <PatternLayout>
                <pattern>%d{yyyy-MM-dd HH:mm:ss} %X{AWSRequestId} %-5p %c{1}:%L - %m%n</pattern>
            </PatternLayout>
        </Lambda>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Lambda" />
        </Root>
    </Loggers>
</Configuration>

This file contains the settings of an appender, which can be changed if needed — please find details in the documentation. For example, the log level in the Loggers/Root node can be changed to WARN, DEBUG, etc., as well as the message format in PatternLayout/pattern.

Now this log can be used in the Lambda function classes. Example:

public class CustomLaunchRequestHandler implements LaunchRequestHandler {
    // org.apache.logging.log4j.Logger, via static import of LogManager.getLogger
    private static Logger logger = getLogger(CustomLaunchRequestHandler.class);

    @Override
    public Optional<Response> handle(HandlerInput input, LaunchRequest launchRequest) {
        logger.info("Received unrecognized request: " + input.getRequestEnvelopeJson());
        return input.getResponseBuilder()
                .withSpeech("This is a simple Alexa Skill example")
                .build();
    }
}

A call of the Alexa Skill in the simulator or in the Lambda dashboard test now includes this logged information

Alexa Skill is ready at minimum technical level. Now more meaningful Intents and Lambda handlers can be added to give it a real value.

Source code is published on GitHub

Video tutorial.

Amazon Web Services and AWS Lambda are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.

Voice Tech Podcast

Voice technology interviews & articles. Learn from the experts.

Written by Sergey Smolnikov

Full-stack software engineer. Programming technologies and DIY electronics, robotics, 3D printing enthusiast.
