The Alexa Project, Part 2: Get Visual and Get An Alexa Endpoint in Just 5 Minutes

Cristiana Umbelino
Published in OutSystems Experts · 7 min read · May 4, 2017

Hello, ladies and gentlemen… I’m back! Not only that, but I’m bringing sexy back. And by “bringing sexy back,” I mean “bringing rapid visual development to code crafting for an Alexa endpoint.”

In my previous article, which I hope you have already read (or are going to read right now), I showed how easy it is to set up the intents and utterances that Alexa will reply to. This inspired another of my colleagues to write his own take on Alexa, which was awesome!

Interestingly, since my first article was published, Amazon has announced Skills Builder: a new (still in beta) tool that will make that process even simpler. Or, actually, more intuitive. Skills Builder has a few extra nifty features, like better support for natural, multi-turn dialogs, but I still think the way I went about it is the simplest and fastest way to get started.

And now, as I promised in Part 1, I’m going to share how you can visually develop an Alexa endpoint. In fact, I’m going to up the ante and show you how to visually develop that Alexa endpoint in just 5 minutes.

For Our Alexa Endpoint: Let’s get Visual, Visual!

I work with a visual development platform, so get ready for some heavy images, y’all. Since I’m a millennial and we all love GIFs, the story of my 5-minute Alexa endpoint is going to be a GIF-fest.

By the way, if you want to follow along:

  1. Having an Amazon Echo or Echo Dot helps immensely.
  2. Installing the OutSystems free personal environment will be your first step (if you haven’t done this already).
  3. Being awesome also helps a lot. You’re here, so I’m assuming this to start with.

Hello, World! It’s-A-Me, Alexa!

Are you all set up in your OutSystems personal environment? Ready for the fastest and most amazing “Hello, World” on the planet?

Let’s create an empty application and publish it to the server, then:

Did this go a little too fast for you?

Let’s recap, then. What I did was:

  1. Create a new application. I chose Web App, gave it a cool and original name such as “Alexa”, and chose blue because it’s a lovely color.
  2. Click “Create App”.
  3. Add an empty module named “AlexaConnector”.
  4. Press the “1-click” Publish Button.
  5. Fetch my coffee mug.
  6. Come back too soon, because my “Hello, World” app had already been created by the time I sat back down in front of the computer.

It’s Now Time To Give It a REST (Endpoint)

This is when you get RESTful.

Now we need to create the REST endpoint that Alexa will talk to — this is the receiving end of our requests in the Alexa skill configuration:

Too quick once again? No worries. Here’s what to do:

  1. Click the Logic tab at the top right of your OutSystems IDE window.
  2. Right-click REST in Integrations, and select the option Expose REST API.
  3. Rename it as you will. I named mine AlexaConnector.
  4. Right-click on your connector and Add REST API Method.
  5. Rename it GetInfo.

Tidying Up

Now’s the time to edit our connector and tidy things up a bit. Because I read through the Alexa documentation, I know that I need to change the default method from GET to POST, and that I need to add Request and Response parameters:

The steps I followed here were:

  1. Change HTTP Method on GetInfo to POST.
  2. Right-click GetInfo and add input parameter Request.
  3. Right-click GetInfo and add output parameter Response.
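OutSystems generates all of this plumbing visually, but it helps to see what actually arrives on the wire. When a user triggers an intent, Alexa POSTs a JSON body like the (abridged) one below; the intent name is what we care about. This is a minimal sketch in plain Python, with session and metadata fields omitted and the intent name taken from my skill:

```python
import json

# Abridged example of the JSON body Alexa POSTs for an IntentRequest
# (session, context, and request metadata are omitted here).
alexa_request = """
{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "intent": { "name": "GetErrorReport", "slots": {} }
  }
}
"""

def intent_name(body: str) -> str:
    """Extract the intent name from a raw Alexa request body."""
    return json.loads(body)["request"]["intent"]["name"]

print(intent_name(alexa_request))  # GetErrorReport
```

The Request structure in OutSystems mirrors this same nesting, which is why the input parameter has to come in through the Body.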

I need to add a note here: we manipulate JSON by matching the values to an “OutSystems structure.” I went ahead and created it beforehand. There’s no GIF for this part, because the JSON samples are waaaay too long.

After setting the Request and Response data structures, I needed to define the parameters in our connector to match those structures:

Here’s how to do it:

  1. Click the Logic tab.
  2. Change the Data Type on input parameter Request to match the respective JSON structure we defined previously — named Request.
  3. Change the request to be received in the Body.
  4. Change the Data Type on output parameter Response to match the respective JSON structure (named accordingly as… Response).
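On the way out, the Response structure has to match the envelope Alexa expects back. As a rough sketch of what that envelope contains (the OutSystems Response structure mirrors these same fields), building it in plain code looks like this:

```python
import json

def build_response(speech: str, end_session: bool = True) -> dict:
    """Build a minimal Alexa response envelope with plain-text speech."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

print(json.dumps(build_response("Hello, World!"), indent=2))
```

Whatever text ends up in outputSpeech is what the Echo will actually say out loud.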

From here on we can start seeing the Alexa endpoint magic happen. If you ask Alexa to give you the error report, you can check the logs and see that the request has arrived. We just don’t have the logic to answer it cleverly. Yet.

Can I Give You a Hand?

The logic for giving an error report is different from the one that’ll get you a slowness report. So, it’s important to understand what the user is asking for and get the right code running. That’s where an Intent Redirector comes in:

A little bit more on this before I go on: I created a static entity (Intent) to map the intent name that I will receive from Alexa to an entity identifier. This will make it easier to develop according to the user’s intent. In this case, the intent names are GetErrorReport and GetSlownessReport. So, for example, the switch condition for GetErrorReport is:

IntentId = Entities.Intent.GetErrorReport

Also, even though I have only one condition shown in the GIF, I’m using a switch condition because I added more conditions later in development.
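In plain code, the Intent Redirector amounts to a switch over the mapped intent identifier. Here is a sketch with hypothetical identifiers and stand-in report actions (the real actions query the OutSystems log tables, and the identifiers come from the Intent static entity):

```python
# Mirrors the Intent static entity: intent name -> identifier
# (the identifier values here are illustrative).
INTENTS = {"GetErrorReport": 1, "GetSlownessReport": 2}

def get_error_report() -> str:
    # Placeholder for the real query against the log tables.
    return "You had 3 errors in the last 24 hours."

def get_slowness_report() -> str:
    # Placeholder for the real slowness query.
    return "Average page load time was 1.2 seconds."

def intent_redirector(intent_id: int) -> str:
    """Switch on the intent identifier and run the matching action."""
    if intent_id == INTENTS["GetErrorReport"]:
        return get_error_report()
    elif intent_id == INTENTS["GetSlownessReport"]:
        return get_slowness_report()
    return "Sorry, I did not understand that request."
```

Adding a new report later just means adding one more branch to the switch, which is exactly why I reached for a switch instead of a plain if.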

Getting the Right Answer In

It’s time to put all the pieces together and test it out:

Once again, if that was too fast — it was the sped-up version, after all — here’s what to do:

  1. Query the static entity (Intent) to get the ID of the intent that maps to the name we receive in the REST request.
  2. Call IntentRedirector (the action developed in the previous section) and pass it the IntentId that comes from the query. The IntentId input redirects the flow to the logic that produces the right kind of report.
  3. Assign the IntentRedirector response to the Response output parameter.

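The three steps above can be sketched end to end in plain code: look up the intent by name, redirect to the right logic, and wrap the answer in the Alexa response envelope. Everything here is illustrative (identifiers, report text, and function names are stand-ins for the visual actions):

```python
import json

# Mirrors the Intent static entity: intent name -> identifier.
INTENTS = {"GetErrorReport": 1, "GetSlownessReport": 2}

def intent_redirector(intent_id: int) -> str:
    """Stand-in for the IntentRedirector action from the previous section."""
    reports = {
        1: "You had 3 errors in the last 24 hours.",
        2: "Average page load time was 1.2 seconds.",
    }
    return reports.get(intent_id, "Sorry, I did not understand that request.")

def get_info(body: str) -> dict:
    """Handle a raw Alexa POST body and return the response envelope."""
    name = json.loads(body)["request"]["intent"]["name"]  # step 1: look up by name
    speech = intent_redirector(INTENTS.get(name, 0))      # step 2: redirect
    return {                                              # step 3: assign Response
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

That is the whole round trip: JSON in, speech out.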
And… voilà! As soon as I pressed that green 1-click publish button I had an endpoint set that I could test with my Echo Dot.

Let me just add a few notes about this amazing visual journey to an Alexa endpoint:

  1. The logic in the GetReportOnErrors action (the orange circles) is just a simple query to the log tables in our visual development platform, much like the one I did in the GIF when querying the Intent static entity by the intent name present in the JSON request.
  2. Since I am developing to answer a human being (or so I hope), the response should be a phrase and not just the output of a query. The way I implemented the skill was always with that in mind. Therefore, the actions to get the reports will always output speech and not just numbers or strange error messages.
  3. Because this was just a quick test of what I could do with Alexa skills, I never planned to make it available to the public, since it might expose sensitive information. That’s why I don’t go through the security guidelines that Amazon requires before accepting a skill submission.
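Note 2 deserves a tiny illustration: turning a raw query result into something a human wants to hear, instead of returning bare numbers. A hypothetical helper (the function name and phrasing are mine, not part of the skill):

```python
def speak_error_count(count: int) -> str:
    """Turn a raw error count into a phrase Alexa can say naturally."""
    if count == 0:
        return "Good news! There were no errors in the last 24 hours."
    # Match singular/plural so the spoken phrase sounds right.
    verb, noun = ("was", "error") if count == 1 else ("were", "errors")
    return f"There {verb} {count} {noun} in the last 24 hours."

print(speak_error_count(5))  # There were 5 errors in the last 24 hours.
```

Small touches like the singular/plural handling are the difference between a demo and something that feels conversational.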

Time’s Up!

And that’s it! That’s just how quick setting it all up was. I knew from the start that the chore would never be truly daunting, because creating REST endpoints in OutSystems is just that simple.

So, did you have a stopwatch running? The last time I did it, I timed it at just under 5 minutes. But, if it takes you a few extra seconds the first time around, it’s all good.

Feel free to give me some feedback on this article, and reach out with any questions about Alexa skills or OutSystems development. It goes without saying that I’m kind of expecting some new Alexa skills to be showcased in the comments.

Let’s just not make these machines sentient just yet, though, OK?…

I’m not quite ready for Alexa to phone home yet.

Cristiana Umbelino | Code Crafting Expert at OutSystems
