Exploring the backend specifications generated by AWS Amplify API
This post takes a complex example GraphQL schema and walks through the backend specifications that AWS Amplify generates from it.
When creating a mobile application with AWS Amplify, getting the API right is one of the most important tasks. Getting your API wrong can mean a slow and expensive application, with issues like inefficient queries, poorly chosen partition and sort keys, too many tables, or poorly designed secondary indexes.
Normal GraphQL schemas specify a frontend API that you can make queries and mutations against. You then build a backend for that schema that contains data sources and the resolvers that map the frontend GraphQL operations onto those data sources.
In AWS Amplify, you simultaneously specify both the frontend and backend of your API with a special variant of a GraphQL schema.
This special schema contains normal GraphQL schema language that specifies the frontend, but also contains annotations like @model, @key, @connection, and @auth that specify the backend that will be generated by AWS Amplify.
The AWS Amplify API CLI then takes your special annotated schema and generates a filled-in frontend GraphQL schema, and the backend: DynamoDB tables, IAM roles, and resolvers that map frontend GraphQL operations to the DynamoDB tables.
While I was learning how to use the AWS Amplify API, the hardest part was understanding the mapping from annotations, like @key and @connection, to the generated backend specification. Since then, the AWS Amplify team has created a great example schema that shows off many of the backend options.
As useful as any documentation is, the way I truly understood how the AWS Amplify API system worked was to dig in and explore the backend specifications that were generated by AWS Amplify from my schemas. If that’s how you learn as well, then this post is for you.
In this post, we’ll:
- Explore the filled-in frontend GraphQL schema generated from your special annotated schema by AWS Amplify at amplify\backend\api\exploreamplifyapi\build\schema.graphql
- Explore the DynamoDB tables generated from the @model and @key annotations by AWS Amplify at amplify\backend\api\exploreamplifyapi\build\stacks
- Explore the resolvers generated by AWS Amplify at amplify\backend\api\exploreamplifyapi\build\resolvers
- Observe and debug the behavior of the resolvers generated by AWS Amplify.
What we won’t do is examine the backend consequences of @auth directives. Those are very complex, and are influenced by factors like which authorization type you specified as primary when you set up the API.
All of this is a lot to take in, but once you understand what’s going on, you’ll truly understand the implications of your API choices. The advantage of the AWS Amplify API system is that you don’t need to write this code from scratch: you can explore what has been generated, then modify it if necessary.
The example code for this post uses React Native 0.61.5, and is at https://github.com/dantasfiles/ExploreAmplifyAPI
Initial Setup
Create a new React Native project
> npx react-native init ExploreAmplifyAPI
Initialize AWS Amplify, using the default settings
> amplify init
Add an API with AWS Amplify
> amplify add api
Overwrite amplify\backend\api\exploreamplifyapi\schema.graphql with the example schema.
Run the AWS Amplify API mocking tool in the background to generate all of the backend specifications that we will examine.
An advantage of this tool is that if you change the annotated schema in schema.graphql, the API mocking tool running in the background will regenerate the backend specifications on the fly, making it easy to explore how changing your annotated schema affects the backend.
> amplify mock api
GraphQL schema compiled successfully.
Creating table OrderTable locally
Creating table CustomerTable locally
Creating table EmployeeTable locally
Creating table WarehouseTable locally
Creating table AccountRepresentativeTable locally
Creating table InventoryTable locally
Creating table ProductTable locally
Running GraphQL codegen
Review of your annotated GraphQL model
First, a review of the special annotated schema you created. Remember that you placed it, with its AWS Amplify annotations, into amplify\backend\api\exploreamplifyapi\schema.graphql
We’ve chosen to use the example schema, and the AccountRepresentative model in particular, for our examples. You can see the @model, @key, and @connection special annotations.
type AccountRepresentative @model
@key(name: "bySalesPeriodByOrderTotal",
fields: ["salesPeriod", "orderTotal"],
queryField: "repsByPeriodAndTotal") {
id: ID!
customers: [Customer] @connection(keyName: "byRepresentative",
fields: ["id"])
orders: [Order] @connection(keyName: "byRepresentativebyDate",
fields: ["id"])
orderTotal: Int
salesPeriod: String
}
Part 1: GraphQL Schema generated by AWS Amplify
In this section, we explore the filled-in frontend GraphQL schema generated from your special annotated schema by AWS Amplify at amplify\backend\api\exploreamplifyapi\build\schema.graphql
Unlike the special annotated schema, this generated file contains the actual GraphQL schema that your app will follow when it performs GraphQL operations.
The first thing to notice is that the AWS Amplify annotations are gone, and the @connection queries, like customers and orders, have been inserted.
type AccountRepresentative {
id: ID!
customers(id: ModelIDKeyConditionInput,
filter: ModelCustomerFilterInput,
sortDirection: ModelSortDirection,
limit: Int,
nextToken: String): ModelCustomerConnection
orders(date: ModelStringKeyConditionInput,
filter: ModelOrderFilterInput,
sortDirection: ModelSortDirection,
limit: Int,
nextToken: String): ModelOrderConnection
orderTotal: Int
salesPeriod: String
}
AWS Amplify generates three AccountRepresentative queries that your app can use. The third, repsByPeriodAndTotal, was generated by the annotation @key(..., queryField: "repsByPeriodAndTotal")
type Query {
getAccountRepresentative(id: ID!): AccountRepresentative
listAccountRepresentatives(
filter: ModelAccountRepresentativeFilterInput,
limit: Int,
nextToken: String
): ModelAccountRepresentativeConnection
repsByPeriodAndTotal(
salesPeriod: String,
orderTotal: ModelIntKeyConditionInput,
sortDirection: ModelSortDirection,
filter: ModelAccountRepresentativeFilterInput,
limit: Int,
nextToken: String
): ModelAccountRepresentativeConnection
}
input ModelAccountRepresentativeFilterInput {
id: ModelIDInput
orderTotal: ModelIntInput
salesPeriod: ModelStringInput
and: [ModelAccountRepresentativeFilterInput]
or: [ModelAccountRepresentativeFilterInput]
not: ModelAccountRepresentativeFilterInput
}
type ModelAccountRepresentativeConnection {
items: [AccountRepresentative]
nextToken: String
}
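For example, an app could then call the generated repsByPeriodAndTotal query against this schema. This query is a hypothetical illustration (the argument values are made up); the field names and argument types come from the generated schema above:

```graphql
query RepsByPeriodAndTotal {
  repsByPeriodAndTotal(
    salesPeriod: "2019-Q4"     # partition/hash key of the index
    orderTotal: { gt: 1000 }   # key condition on the sort/range key
    sortDirection: DESC
    limit: 10
  ) {
    items {
      id
      orderTotal
      salesPeriod
    }
    nextToken
  }
}
```

Note that orderTotal takes a ModelIntKeyConditionInput, so you express a key condition (such as gt) rather than a bare value, and pagination works through the limit argument and the returned nextToken.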
AWS Amplify generates three AccountRepresentative mutations (create, update, and delete) that your app can use:
type Mutation {
createAccountRepresentative(
input: CreateAccountRepresentativeInput!,
condition: ModelAccountRepresentativeConditionInput
): AccountRepresentative
updateAccountRepresentative(
input: UpdateAccountRepresentativeInput!,
condition: ModelAccountRepresentativeConditionInput
): AccountRepresentative
deleteAccountRepresentative(
input: DeleteAccountRepresentativeInput!,
condition: ModelAccountRepresentativeConditionInput
): AccountRepresentative
}
input CreateAccountRepresentativeInput {
id: ID
orderTotal: Int
salesPeriod: String
}
input UpdateAccountRepresentativeInput {
id: ID!
orderTotal: Int
salesPeriod: String
}
input DeleteAccountRepresentativeInput {
id: ID
}
input ModelAccountRepresentativeConditionInput {
orderTotal: ModelIntInput
salesPeriod: ModelStringInput
and: [ModelAccountRepresentativeConditionInput]
or: [ModelAccountRepresentativeConditionInput]
not: ModelAccountRepresentativeConditionInput
}
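To illustrate, here is a hypothetical createAccountRepresentative call against the generated schema (the input values are made up). Since id is optional in CreateAccountRepresentativeInput, AWS Amplify will generate one if you omit it:

```graphql
mutation CreateRep {
  createAccountRepresentative(
    input: {
      salesPeriod: "2019-Q4"
      orderTotal: 0
    }
  ) {
    id
    salesPeriod
    orderTotal
  }
}
```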
The main idea is that you should examine build\schema.graphql in order to understand the details of the GraphQL schema that your app will actually be using.
Part 2: DynamoDB Tables generated by AWS Amplify
In this section, we explore the DynamoDB tables generated from the @model and @key annotations by AWS Amplify at amplify\backend\api\exploreamplifyapi\build\stacks
For each @model in your AWS Amplify API schema, a DynamoDB table will be created, and the details of that table will be specified in a *.json AWS CloudFormation template file in the build/stacks directory.
For the AccountRepresentative model in our schema, an AccountRepresentative DynamoDB table is specified by the AccountRepresentative.json template file.
The @key annotations in your special annotated schema generate the KeySchema and GlobalSecondaryIndexes specifications in the templates.
{
"Resources": {
"AccountRepresentativeTable": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"KeySchema": [
{
"AttributeName": "id",
"KeyType": "HASH"
}
],
"GlobalSecondaryIndexes": [
{
"IndexName": "bySalesPeriodByOrderTotal",
"KeySchema": [
{
"AttributeName": "salesPeriod",
"KeyType": "HASH"
},
{
"AttributeName": "orderTotal",
"KeyType": "RANGE"
}
]
}
]
}
}
}
}
Because no primary @key annotation was specified, the AccountRepresentative table has the default id partition/hash key.
As specified by the annotation @key(name: "bySalesPeriodByOrderTotal", fields: ["salesPeriod", "orderTotal"]...), the AccountRepresentative table will have a global secondary index called bySalesPeriodByOrderTotal with a partition/hash key of salesPeriod and a sort/range key of orderTotal
The main idea is that you can examine the template file and make sure the generated DynamoDB tables will be efficient in their use of partition/hash keys, sort/range keys, and global secondary indexes. If not, go back and change the @key annotations in your original annotated GraphQL file.
Part 3: Resolvers generated by AWS Amplify
In this section, we explore the resolvers generated by AWS Amplify at amplify/backend/api/exploreamplifyapi/build/resolvers
Resolvers generated by AWS Amplify are specified by two files: a *.req.vtl request mapping template to “translate an incoming GraphQL request into instructions for your backend data source”, and a *.res.vtl response mapping template “to translate the response from that data source back into a GraphQL response.”
Pay particular attention to where the resolver accesses the context: $ctx.args (the values passed into the GraphQL operation) in request mapping templates, and $ctx.result (the values returned from the data source) in response mapping templates.
The main idea is that by looking inside the *.req.vtl and *.res.vtl files, you can understand the details of how each GraphQL operation will work.
For example, here is a simplified excerpt from the resolver for the repsByPeriodAndTotal GraphQL query, as specified by Query.repsByPeriodAndTotal.req.vtl:
#set( $modelQueryExpression.expressionValues =
{ ":salesPeriod": { "S": "$ctx.args.salesPeriod" } } )
$util.qr($modelQueryExpression.expressionValues.put(
":sortKey", { "N": "$ctx.args.orderTotal.eq" }))
#set( $QueryRequest = {
"operation": "Query",
"query": $modelQueryExpression,
"index": "bySalesPeriodByOrderTotal"
} )
Recall from the previous section that bySalesPeriodByOrderTotal was a global secondary index of the AccountRepresentative table with a partition/hash key of salesPeriod and a sort/range key of orderTotal
So you can see in this code that the repsByPeriodAndTotal GraphQL query uses its salesPeriod and orderTotal arguments to perform a DynamoDB Query operation on the bySalesPeriodByOrderTotal global secondary index.
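To make the shape of that request concrete, here is a plain-Python sketch (not Amplify code) of the DynamoDB Query request that the request mapping template above assembles from the GraphQL arguments. The expression string and attribute-name placeholders are assumptions for illustration; the index name, key names, and expressionValues structure come from the template and schema above:

```python
def build_reps_query_request(sales_period, order_total_eq):
    """Sketch of the request object the Query.repsByPeriodAndTotal.req.vtl
    template builds: a DynamoDB Query on the bySalesPeriodByOrderTotal GSI,
    keyed on salesPeriod (hash) and orderTotal (range)."""
    expression_values = {
        ":salesPeriod": {"S": sales_period},     # partition/hash key value
        ":sortKey": {"N": str(order_total_eq)},  # sort/range key value
    }
    return {
        "operation": "Query",
        "query": {
            # Hypothetical key condition expression, for illustration only
            "expression": "#salesPeriod = :salesPeriod AND #orderTotal = :sortKey",
            "expressionNames": {
                "#salesPeriod": "salesPeriod",
                "#orderTotal": "orderTotal",
            },
            "expressionValues": expression_values,
        },
        "index": "bySalesPeriodByOrderTotal",
    }

# Example: the request produced for salesPeriod "2019-Q4", orderTotal eq 5000
request = build_reps_query_request("2019-Q4", 5000)
```

Note how the GraphQL arguments flow straight into the DynamoDB key condition, which is why the query is cheap: it touches only the matching index partition rather than scanning the table.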
In general, getX GraphQL operations perform GetItem DynamoDB operations, and listX GraphQL operations perform either Scan or Query DynamoDB operations, depending on whether the app specifies arguments to the query. You can examine the resolver code to make sure your app performs as few Scan operations as possible, since they are expensive.
For mutations, createX GraphQL operations perform PutItem DynamoDB operations, updateX GraphQL operations perform UpdateItem DynamoDB operations, and deleteX GraphQL operations perform DeleteItem DynamoDB operations.
Resolvers: Connections
Another thing to look at is how the connections between GraphQL object types work. Recall the line customers: [Customer] @connection(keyName: "byRepresentative", fields: ["id"]) in the original AccountRepresentative type.
If you look at a simplified excerpt from the AccountRepresentative.customers.req.vtl file, you can see what happens when you look up the customers from within an AccountRepresentative:
#set( $query = {
"expression": "#partitionKey = :partitionKey",
"expressionNames": {
"#partitionKey": "accountRepresentativeID"
},
"expressionValues": {
":partitionKey": { "S": "$context.source.id" } } } )
{
"operation": "Query",
"query": $util.toJson($query),
"index": "byRepresentative"
}
This excerpt performs a DynamoDB Query operation, using the id of the current AccountRepresentative as the accountRepresentativeID key into the byRepresentative global secondary index of the Customer table.
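In app code, that connection resolver runs whenever a query nests customers inside an AccountRepresentative. A hypothetical query against the generated schema (the id value is made up):

```graphql
query GetRepWithCustomers {
  getAccountRepresentative(id: "some-rep-id") {
    id
    # Resolved by AccountRepresentative.customers.req.vtl: a DynamoDB Query
    # on the byRepresentative index using this representative's id
    customers(limit: 5) {
      items {
        id
      }
      nextToken
    }
  }
}
```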
Resolvers: Access Control
Finally, resolvers contain the access control code that AWS Amplify generates from your @auth annotations. That access control code uses the $ctx.identity variable. Access control is important enough that I highly recommend examining your generated access control code manually at some point to make sure it behaves as you intended.
I explored some of the details of AWS Amplify access control in Owner vs. Group Access Control in AWS Amplify API.
Part 4: Debugging resolvers
Sometimes you’ll run a query or mutation and something isn’t working, or you want to determine exactly what is going on inside the resolver.
If the API mocking tool (amplify mock api) is running, you can debug the behavior of your resolvers by opening the amplify/backend/api/exploreamplifyapi/resolvers directory (note: resolvers in this case, not build/resolvers as in the previous section).
You can then insert debugging statements using the $util.error method, passing it the object whose value you want printed.
This method works whether you are viewing the Amplify GraphQL Explorer at localhost:20002 (the error message will be printed in the results on the right side of the webpage), or calling the mock API from the actual app running on your phone (the error message will be printed in the console output of npx react-native run-android).
A downside is that, since it is an error, execution of the resolver stops at that point rather than continuing as with normal logging, so you can only use one of these statements at a time to debug your resolvers.
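For example, to inspect the arguments arriving at a request mapping template, you could temporarily insert something like this near the top of a *.req.vtl file ($util.error and $util.toJson are standard AppSync resolver utilities; remember to remove the line when you are done):

```vtl
## Temporary debug statement: serializes the GraphQL arguments and aborts
## the resolver, printing them as the error message
$util.error($util.toJson($ctx.args))
```

Swapping $ctx.args for another context value, such as $ctx.identity or an intermediate #set variable, lets you inspect any point in the template the same way.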