Update TigerGraph Vertex Data Via REST API Using AWS Alexa

Overview

Voice commands issued to an Alexa-enabled device trigger the execution of an Alexa Skill. The Skill interprets the voice commands and translates them into instructions, which are sent to a Lambda Function that processes them and performs the appropriate actions. In this example, the Lambda Function reads JSON data from a file in an S3 bucket and loads the data (an UPSERT operation) into a TigerGraph vertex. This pattern is useful in many scenarios, such as refreshing existing customer or sales data or updating the metadata that controls a data pipeline application.

VPC Configuration

Lambda Functions, like other AWS assets, are deployed into a subnet of a Virtual Private Cloud (VPC). AWS recommends that Lambda Functions be deployed into at least two subnets for high availability. Because our Lambda Function will use REST API methods to push data into TigerGraph, we must also create a NAT Gateway in our VPC that is accessible by both subnets used by our Lambda Function.

VPC Diagram
Route Table

Upload Data to Amazon S3

First, I will add a data file called business_entities.json to an S3 bucket in my account. I will also upload an image to display when the Alexa Skill finishes successfully.

S3 Objects
{
  "vertices": {
    "business entity": {
      "17735": {
        "type": { "value": "company" },
        "name": { "value": "Kam_Air" },
        "url": { "value": "http://dbpedia.org/resource/Kam_Air" }
      },
      "878253": {
        "type": { "value": "university" },
        "name": { "value": "Belarusian_National_Technical_University" },
        "url": { "value": "http://dbpedia.org/resource/Belarusian_National_Technical_University" }
      }
    }
  }
}
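As a quick sanity check, the same upsert payload can be built and serialized in Python before it is written to S3. This is just a sketch; the vertex type and attribute names simply mirror the JSON file above.

```python
import json

# Build the upsert payload for one vertex; mirrors the structure of the
# business_entities.json file above.
payload = {
    "vertices": {
        "business entity": {
            "17735": {
                "type": {"value": "company"},
                "name": {"value": "Kam_Air"},
                "url": {"value": "http://dbpedia.org/resource/Kam_Air"}
            }
        }
    }
}

# Serialize to the JSON body that TigerGraph's POST /graph endpoint expects.
body = json.dumps(payload)
```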

Create an Alexa Skill

Alexa Skills are composed of an Invocation and one or more Intents. The Invocation initiates a command, and Intents allow you to use that command in different contexts. For example, the invocation might be “reload data for graph” and the intents might supply the graph name or a vertex within the graph. The invocation and the intents together form the complete Alexa command. This way we could write one Alexa Skill to load data into any vertex in any graph. For our example, however, we will have one skill for one graph vertex.

Choose a skill model
Choose a skill template
Intents and utterances

Create a Lambda Function

Log in to the AWS Console and select Lambda from the Services menu, then click Create function. Name the function UpdateVertexBusinessEntity, choose the Python 3.7 runtime, and click Create function again.

Create a new Lambda Function
Add a trigger
Link to an Alexa Skill
Alexa Skill trigger
Environment variables
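The Lambda configuration shown above can be read at the top of the function code with `os.environ`. A minimal sketch follows; the environment variable names are my own assumptions, chosen to match the identifiers used in the function code, and the defaults are placeholders rather than real endpoints.

```python
import os

# Read the Lambda's configuration from environment variables.
# Names are assumptions matching the identifiers used in the function code;
# defaults are placeholders, not real endpoints or secrets.
tigergraph_ip = os.environ.get("TG_IP", "http://127.0.0.1")
tigergraph_apitoken_port = os.environ.get("TG_APITOKEN_PORT", "9000")
tigergraph_apitoken_lifetime = os.environ.get("TG_APITOKEN_LIFETIME", "86400")
tigergraph_secret = os.environ.get("TG_SECRET", "")
tigergraph_graph_name = os.environ.get("TG_GRAPH_NAME", "MyGraph")
s3_bucket = os.environ.get("S3_BUCKET", "my-bucket")
s3_data_file = os.environ.get("S3_DATA_FILE", "business_entities.json")
s3_image_file = os.environ.get("S3_IMAGE_FILE", "success.png")
```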
def get_s3_conn():
    s3_conn = boto3.resource(
        's3',
        region_name='us-east-1',
        aws_access_key_id=MY_AWS_ACCESS_KEY,
        aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY
    )
    return s3_conn
def get_tg_auth_token() -> str:
    try:
        url = "{ip}:{port}/requesttoken?secret={secret}&lifetime={lt}".format(
            ip=tigergraph_ip,
            port=tigergraph_apitoken_port,
            secret=tigergraph_secret,
            lt=tigergraph_apitoken_lifetime
        )
        response = requests.get(url).json()
        return response['token']
    except Exception as e:
        print('Error getting authentication token from TigerGraph: {0}'.format(e))
        return ''
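The token returned by /requesttoken is presented on subsequent REST++ calls as a Bearer credential in the Authorization header. A small sketch of building that header (the helper name make_request_header is my own, not from the original code):

```python
def make_request_header(token: str) -> dict:
    # TigerGraph's REST++ endpoints accept the token as a Bearer credential.
    return {"Authorization": "Bearer {0}".format(token)}
```

The resulting dict is what gets passed as the headers argument (request_header) when posting data to the graph.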
try:
    s3_object = s3_conn.Object(s3_bucket, s3_data_file).get()
    payload = s3_object['Body']
except Exception as e:
    print("Error getting metadata file from S3: {a}".format(a=e))
    return False
try:
    url = "{ip}:{port}/graph/{graph}".format(
        ip=tigergraph_ip,
        port=tigergraph_apitoken_port,
        graph=tigergraph_graph_name
    )
    response = requests.request(
        "POST",
        url,
        headers=request_header,
        data=payload
    )
except Exception as e:
    return_msg = str(e)
else:
    return_msg = tigergraph_graph_name + " reloaded successfully"
json_resp = {
    "version": "1.0.0",
    "sessionAttributes": {
        "TigerGraph": "Load data"
    },
    "response": {
        "outputSpeech": {
            "type": "PlainText",
            "text": return_msg,
            "ssml": "<speak>" + return_msg + "</speak>"
        },
        "card": {
            "type": "Standard",
            "title": tigergraph_graph_name,
            "content": return_msg,
            "text": return_msg,
            "image": {
                "smallImageUrl": "https://{0}.s3.amazonaws.com/{1}".format(s3_bucket, s3_image_file),
                "largeImageUrl": "https://{0}.s3.amazonaws.com/{1}".format(s3_bucket, s3_image_file)
            }
        },
        "reprompt": {
            "outputSpeech": {
                "type": "PlainText",
                "text": "Plain text string to speak",
                "ssml": "<speak>SSML text string to speak</speak>"
            }
        },
        "shouldEndSession": should_end_session
    }
}
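For context, here is a minimal sketch of the Lambda entry point that would wrap the fragments above. The intent name UpdateVertexIntent and the event shape are assumptions based on the standard Alexa request envelope, not the original code.

```python
def lambda_handler(event, context):
    # Pull the intent name out of the standard Alexa request envelope.
    intent = event.get("request", {}).get("intent", {}).get("name", "")
    if intent == "UpdateVertexIntent":  # assumed intent name
        # The real handler would fetch a token, read the file from S3, and
        # POST it to TigerGraph as shown in the fragments above.
        return_msg = "business entity reloaded successfully"
    else:
        return_msg = "Sorry, I did not understand that request."
    return {
        "version": "1.0.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": return_msg},
            "shouldEndSession": True
        }
    }
```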

Register the Alexa Skill as a trigger for the Lambda Function

In the upper right-hand corner of the Lambda editor window, click the copy icon next to the Lambda function's ARN:

Test!

We can test the new skill directly in the Alexa Developer Console. Click the Test link in the top menu and make sure that skill testing is enabled for Development. You can then type the text for the new skill; entering “update vertex” should be sufficient:

Input and output
Alexa screen results
TigerGraph Studio query results

Conclusion

I hope you found this post informative. This method for linking an Alexa Skill with a Lambda function is a quick and simple way to load graph data, but this pattern could be equally applicable for more complex interactions. In a future post I will explore other scenarios like voice-activated (and touchless) patient check-in functionality or a graph search application that finds health care providers based on proximity to a user’s location.

Kelley Brigman

Principal at Slalom Consulting. Passionate about all things data, programming and cloud.