Implementing an Alexa skill using Quarkus native

mirko bonasorte
Jan 14, 2020

In a previous article we talked about how to implement an Alexa skill using AWS Lambda and Quarkus in JVM (non-native) mode. In this article we will migrate that skill to Quarkus native.

Why

There are a couple of reasons why you might wish to implement your skill using Quarkus native:

  1. AWS Lambda cold start is slightly faster in native mode. More generally, AOT-compiled code starts up faster than JIT-compiled code
  2. When using AOT, the memory footprint is lower, too. In our tests, though, we could not find evidence of that benefit, as AWS Lambda sizing does not allow choosing memory and CPU separately: normally, you allocate far more memory than you need just to get more CPU and compensate for the cold start

Issues

While it is true that Quarkus does a great job of making the move to AOT as straightforward as possible, there are some issues to deal with:

  1. Quarkus has no explicit support for Alexa skills. We tackled that in the previous article, but going native requires a few more steps
  2. AOT compilation requires explicit configuration for reflection: the Alexa SDK uses the Jackson library for JSON serialization, which makes extensive use of reflection and annotation metadata
  3. SSL is not supported out of the box: when building a regular Quarkus container, you benefit from having the whole GraalVM distribution alongside your executable, including the dynamic libraries and the trusted root certificate file needed for SSL.
    By contrast, when creating an AWS Lambda, all you keep is your executable: as a result, any attempt to interact with HTTPS resources, which are the vast majority in the AWS ecosystem, will definitely fail

Support for Alexa skill

In the previous article we circumvented Quarkus checks by manually specifying our Alexa skill as the handler at deployment time and defining a fake handler in the quarkus.lambda.handler Quarkus property.

In native mode, all you have is an executable that is responsible for everything, including the resolution and execution of a hypothetical handler. That executable is generated by Quarkus and cannot be easily hacked (or at least we could not find a way to do that): basically, during compilation, Quarkus generates a main class that is our entry point, which invokes the main lambda loop (io.quarkus.amazon.lambda.runtime.AmazonLambdaRecorder.startPollLoop). That loop is based on the handler configured in the quarkus.lambda.handler Quarkus property. So this time we have to stick to Quarkus rules.

Rather than avoiding the definition of a handler, we can define a handler that delegates all the work to the Alexa stream handler, as follows:

package io.mirko.lambda;

import com.amazon.ask.SkillStreamHandler;
import com.amazon.ask.Skills;
import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.request.interceptor.GenericRequestInterceptor;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import javax.enterprise.inject.spi.BeanManager;
import javax.enterprise.inject.spi.CDI;
import javax.inject.Named;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Type;
import java.util.Map;
import java.util.stream.Stream;

@Named("swear")
public class QuarkusDelegateStreamLambda implements RequestHandler<Map, Map> {
    private static final ObjectMapper JSON_OBJECT_MAPPER = new ObjectMapper();

    // Construction of the actual SkillStreamHandler is omitted here
    private final SkillStreamHandler delegate = ...;

    // Serializes the already-deserialized request back to a JSON stream for the Alexa SDK
    private InputStream streamFromRequest(Map request) {
        try {
            return new ByteArrayInputStream(
                JSON_OBJECT_MAPPER.writerFor(Map.class).writeValueAsBytes(request)
            );
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public Map handleRequest(Map request, Context context) {
        final ByteArrayOutputStream os = new ByteArrayOutputStream();
        try {
            delegate.handleRequest(streamFromRequest(request), os, context);
            // Deserialize the raw JSON response into a Map that Quarkus will serialize again
            return JSON_OBJECT_MAPPER.readerFor(Map.class).readValue(os.toByteArray());
        } catch (IOException e) {
            throw new RuntimeException(e);
        } catch (Exception e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }
}

As you can see from this example, we have no interest in mapping the request and the response to a specific type, as that is SkillStreamHandler’s job. Unfortunately, we end up with a double de/serialization, since using byte[] instead of Map is not allowed: Quarkus deserializes the request into a Map, which we must serialize again to submit it to the SkillStreamHandler. The response provided by the SkillStreamHandler comes as bytes, which we must deserialize into a Map that Quarkus will eventually serialize back into bytes in order to provide the response. Maybe one day Quarkus will implement specific support for Alexa skills and this overhead will no longer be necessary.

Well, now we have a “handler” that both makes Quarkus happy and is compatible with the Alexa SDK.
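For completeness, here is a minimal sketch of the corresponding Quarkus configuration, assuming the bean name swear used in the example above and the standard application.properties location:

# src/main/resources/application.properties
# Point Quarkus at the CDI bean named "swear" defined above
quarkus.lambda.handler=swear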

Configuration for reflection

As mentioned above, in order to use AOT we have to configure the build to support reflection, as many key parts of the Alexa SDK rely on it (mostly for JSON serialization). If you don’t, you might experience runtime failures due to missing fields/data, e.g. when receiving requests or returning responses: since requests and responses are handled by Jackson with an annotation-based declarative approach, any class that might be part of a request or a response must be explicitly included in the build, even if there is no direct reference to it in the code.

All we have to do is instruct the GraalVM compiler to include the necessary configuration files, as follows:

<profiles>
  <profile>
    <build>
      <plugins>
        <plugin>
          <groupId>io.quarkus</groupId>
          <artifactId>quarkus-maven-plugin</artifactId>
          <version>${quarkus.version}</version>
          <executions>
            <execution>
              <configuration>
                <enableHttpUrlHandler>true</enableHttpUrlHandler>
                <enableHttpsUrlHandler>true</enableHttpsUrlHandler>
                <additionalBuildArgs>
                  <additionalBuildArg>-H:ReflectionConfigurationFiles=reflection-config.json</additionalBuildArg>
                  <additionalBuildArg>-H:DynamicProxyConfigurationFiles=proxy-config.json</additionalBuildArg>
                  <additionalBuildArg>-H:ResourceConfigurationFiles=resource-config.json</additionalBuildArg>
                </additionalBuildArgs>
              </configuration>

What is relevant here is that we explicitly require GraalVM to include the reflection-config.json file as the reflection configuration file: it contains all the classes that are referenced via reflection. You can download it from here. Where does that file come from? Well, TL;DR, just grep and trial & error.

Disclaimer: the reflection configuration may be missing some entries. Currently we have no evidence of issues in this pet project, but be prepared to add more classes, especially if you import external dependencies that might make use of reflection.
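For illustration, the entries in reflection-config.json follow the standard GraalVM format sketched below; the two class names are only examples of Alexa SDK model classes, and the actual list must come from the downloadable file above or from your own grep and trial & error:

[
  {
    "name": "com.amazon.ask.model.RequestEnvelope",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  },
  {
    "name": "com.amazon.ask.model.IntentRequest",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]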

Here is proxy-config.json:

[
  [
    "org.apache.http.conn.HttpClientConnectionManager",
    "org.apache.http.pool.ConnPoolControl",
    "com.amazonaws.http.conn.Wrapped"
  ]
]

Finally, resource-config.json:

{
  "resources": [
    {
      "pattern": ".*\\.xml$"
    },
    {
      "pattern": ".*\\.json$"
    }
  ]
}

Here we include all the XML and JSON files, just as a coarse-grained way to keep all the resource files.

Now you should be able to compile your native Alexa skill by running the following:

mvn clean install -Pnative -Dnative-image.docker-build=true -Dquarkus.native.enable-jni=true

This command produces the file target/function.zip, which is our native AWS Lambda. At this stage it contains just a file named bootstrap, which is invoked whenever the Alexa skill is triggered.

SSL Support

As the final step, we need to add support for SSL. This unlocks smooth access to all the great features of the AWS platform.

In order to do the trick, we will need to do the following:

  1. Include the libsunec.so dynamic library that comes from the GraalVM Docker image into the function.zip file
  2. Include the cacerts file, which contains all the root trusted certificates, into the function.zip file
  3. Add appropriate parameters to the invocation of bootstrap to force the usage of the above files

You can get libsunec.so as follows:

$ docker run -ti --entrypoint bash quay.io/quarkus/ubi-quarkus-native-image:19.2.1
$ docker cp <container id>:/opt/graalvm/jre/lib/amd64/libsunec.so <project_root>/src/main/resources

It is advisable to take the cacerts file from your host distribution, as it is highly probable that it is more up-to-date than the one in the Quarkus Docker image.
Beware: the cacerts file may be a symbolic link, and the maven-assembly-plugin will include it in the zip file as a link if you refer to it directly from the assembly configuration file.
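A minimal sketch of how to dereference the link before packaging, assuming a Debian-like host where the JRE trust store lives under /etc/ssl/certs/java (adjust the source path to your distribution):

$ cp --dereference /etc/ssl/certs/java/cacerts <project_root>/src/main/resources/cacerts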

The inclusion of files in the zip file is pretty straightforward, as all you have to do is configure your maven-assembly-plugin to add them. Here is the zip.xml assembly configuration file:

<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/ASSEMBLY/2.0.0 http://maven.apache.org/xsd/assembly-2.0.0.xsd">
  <id>lambda-package</id>
  <formats>
    <format>zip</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <files>
    <file>
      <source>${project.build.directory}${file.separator}${artifactId}-${version}-runner</source>
      <outputDirectory>/</outputDirectory>
      <destName>bootstrap.bin</destName>
      <fileMode>755</fileMode>
    </file>
    <file>
      <source>${project.basedir}${file.separator}src${file.separator}main${file.separator}resources${file.separator}cacerts</source>
      <outputDirectory>/ssl</outputDirectory>
      <destName>cacerts</destName>
      <fileMode>644</fileMode>
    </file>
    <file>
      <source>${project.basedir}${file.separator}src${file.separator}main${file.separator}resources${file.separator}bootstrap</source>
      <outputDirectory>/</outputDirectory>
      <destName>bootstrap</destName>
      <fileMode>755</fileMode>
    </file>
    <file>
      <source>${project.basedir}${file.separator}src${file.separator}main${file.separator}resources${file.separator}libsunec.so</source>
      <outputDirectory>/ssl</outputDirectory>
      <destName>libsunec.so</destName>
      <fileMode>755</fileMode>
    </file>
  </files>
</assembly>

And, after triggering a new build, this is our final zip file:

$ unzip -t target/function.zip 
Archive: target/function.zip
testing: ssl/ OK
testing: ssl/cacerts OK
testing: ssl/libsunec.so OK
testing: bootstrap.bin OK
testing: bootstrap OK

Please notice that we have renamed the original bootstrap file to bootstrap.bin and added a bootstrap script that specifies the necessary invocation parameters, as follows:

#!/bin/sh
./bootstrap.bin -Djava.library.path=${LAMBDA_TASK_ROOT}/ssl -Djavax.net.ssl.trustStore=${LAMBDA_TASK_ROOT}/ssl/cacerts -Djavax.net.ssl.trustAnchors=${LAMBDA_TASK_ROOT}/ssl/cacerts -Djavax.net.ssl.trustStorePassword=changeit

Now the Quarkus native Alexa skill is ready to be deployed.

More importantly, you are now free to use AWS services, such as DynamoDB:

package io.mirko.aws;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Named;

public class DynamoDBFactory {
    @Produces
    @ApplicationScoped
    @Named
    public AmazonDynamoDB createDynamoDB() {
        return AmazonDynamoDBClientBuilder.defaultClient();
    }
}

// Somewhere else in the code
@Inject
AmazonDynamoDB dynamoDB;
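As a sketch of how the injected client might then be used, here is a hypothetical lookup; the table name swears and the key attribute word are made-up names, and the calls rely on the standard AWS SDK v1 DynamoDB API:

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;

import java.util.Collections;
import java.util.Map;

// ...inside a class that receives the injected AmazonDynamoDB
private Map<String, AttributeValue> findItem(AmazonDynamoDB dynamoDB, String word) {
    // "swears" and "word" are hypothetical table/attribute names
    final Map<String, AttributeValue> key =
            Collections.singletonMap("word", new AttributeValue().withS(word));
    final GetItemResult result =
            dynamoDB.getItem(new GetItemRequest().withTableName("swears").withKey(key));
    return result.getItem();
}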

Deployment

The deployment of the lambda is not that different from the non-native one, with some exceptions:

  1. The environment variable DISABLE_SIGNAL_HANDLERS must be set to true: this resolves some incompatibilities between Quarkus and the Amazon Lambda Custom Runtime environment.
  2. The runtime is not Java anymore: since we have created a standalone executable, we just need a provided (custom) runtime
  3. We could not find any support for layering: this means that, unless you find a way to split the executable into your own code and its dependencies, you have to deploy the whole package at once. This may cause issues with the upload timeout, so we suggest using S3 as temporary storage for the deployment of your lambda (this is the very same trick we used in the previous article to deploy the dependencies layer in the non-native version of the Alexa skill). A hedged deployment sketch follows this list.
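Putting these points together, a deployment through the AWS CLI might look like the sketch below; the function name, role ARN, bucket and key are placeholders, and the handler value is irrelevant for a provided runtime:

$ aws s3 cp target/function.zip s3://<your-bucket>/function.zip
$ aws lambda create-function \
    --function-name quarkus-native-alexa-skill \
    --runtime provided \
    --handler not.used.in.provided.runtime \
    --role arn:aws:iam::<account-id>:role/<lambda-execution-role> \
    --environment "Variables={DISABLE_SIGNAL_HANDLERS=true}" \
    --code S3Bucket=<your-bucket>,S3Key=function.zip \
    --timeout 15 \
    --memory-size 512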

Conclusion

In this article we have briefly described how to implement an Alexa skill using Quarkus native. Albeit a bit tricky, this approach unleashes the advantages of AOT compilation, which is especially valuable when dealing with AWS Lambda cold starts.
