Amazon Alexa Development

Noah Huber-Feely · Noah Codes
Oct 16, 2016 · 2 min read

I recently attended an Amazon Alexa hackathon in Huntsville, AL, and was thoroughly impressed with the ease of developing “skills”. “Skills” are the equivalent of apps for the voice-based interface. To develop a skill, Amazon already provides many virtually plug-and-play templates that let you get a working skill set up immediately. For the most basic example, you can serve random facts to users simply by adding strings to an array under a key-value pair. You can find this skill, along with deployment instructions, in the official Amazon GitHub repository.
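To give a feel for how little code that pattern takes, here is a minimal sketch of the fact-skill idea: facts live in a plain array, and a handler picks one at random to speak. The names below are illustrative, not taken from the official template.

```javascript
// Facts are just strings in an array; adding a fact is a one-line change.
const FACTS = [
  'A year on Mercury is just 88 Earth days long.',
  'Jupiter has the shortest day of all the planets.',
  'The Sun is an almost perfect sphere.'
];

// Pick a random fact for Alexa to read back to the user.
function getRandomFact(facts) {
  const index = Math.floor(Math.random() * facts.length);
  return facts[index];
}

console.log("Here's your fact: " + getRandomFact(FACTS));
```

The template's real handler does little more than this before wrapping the string in Alexa's response format.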

If you want to get a little more advanced and integrate external APIs or services, pulling in that data is also easy. If you are using Node.js or another back end with a package manager like npm, API wrappers for many services already exist. For instance, integrating Reddit into a Node.js skill simply requires running npm install fetch-reddit and then grabbing posts with reddit.fetchPosts('r/science').then((data) => { /* Secret sauce */ });
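Filling in the “secret sauce” might look something like the sketch below: a small helper turns fetched posts into a speakable string. formatPostTitles is a hypothetical helper of my own, and the shape of the data returned by fetch-reddit (a posts array of objects with a title field) is an assumption based on its README.

```javascript
// Turn a list of post objects into a short numbered summary Alexa can speak.
function formatPostTitles(posts, limit) {
  return posts
    .slice(0, limit)
    .map((post, i) => `${i + 1}. ${post.title}`)
    .join(' ');
}

// In the actual skill handler (after `npm install fetch-reddit`), this
// would be wired up roughly like so:
//
//   const reddit = require('fetch-reddit');
//   reddit.fetchPosts('r/science').then((data) => {
//     const speech = 'Top stories: ' + formatPostTitles(data.posts, 3);
//     // ...hand `speech` to Alexa's response here
//   });

// Stand-in data so the helper can be exercised without a network call.
const sample = [
  { title: 'New exoplanet discovered' },
  { title: 'CRISPR trial shows promise' }
];
console.log(formatPostTitles(sample, 3));
```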

This code can easily be placed directly into your app logic before returning your response to Alexa. Then all you need to do is include the returned data in Alexa’s response and you are good to go.

Put simply, Amazon’s Alexa back end takes care of the hard work for you and lets you simply respond to certain events. Alexa handles the speech recognition, figures out what the user intends to find out or do, and then sends that “intent” with its corresponding data to your web service. This removes the friction of parsing raw user input or building your own natural language processing.
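Concretely, your service receives an intent name and returns a JSON reply. The sketch below shows that round trip; the response fields follow the Alexa Skills Kit JSON response format, while GetFactIntent is a made-up intent name for illustration.

```javascript
// Wrap speech text in the JSON structure a skill returns to Alexa.
function buildResponse(speechText) {
  return {
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text: speechText },
      shouldEndSession: true
    }
  };
}

// Dispatch on the intent name Alexa extracted from the user's speech.
function handleIntent(intentName) {
  if (intentName === 'GetFactIntent') {
    return buildResponse('A year on Mercury is just 88 Earth days long.');
  }
  return buildResponse("Sorry, I don't know that one.");
}

console.log(JSON.stringify(handleIntent('GetFactIntent'), null, 2));
```

Because Alexa has already done the language understanding, your logic reduces to a switch over intent names.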

To get started, check out the Alexa GitHub repository linked above or message me (@nhuberfeely on Twitter).

