Running a free API in AWS using GraphQL

Yuri Luiz de Oliveira
Published in WAES · Jun 23, 2022 · 22 min read
You read it right!

In this article, I will explore how to run a free API in AWS. And, as a bonus, it will be using GraphQL.

Motivation

It might look too good to be true, and you’re probably thinking: “There’s no way that my company’s API that handles thousands of requests per second will be free”. And that’s true. There is no chance of that happening (at least not with what we will use in this article). Then, why should you read it?

I have an API that I use to keep track of every savings goal that I have. Things like emergency reserve, buying something specific, traveling. It’s hosted in AWS, which is very handy.

Now, imagine going through a lot in your life: changing jobs while preparing to move abroad, arriving and starting life in a new country, starting a project in your new company… And suddenly, you’re charged US$ 50 by AWS for the API you use less than a dozen times a month. This happened to me, and it was very frustrating.

It turns out that the resources used to run the savings API were all free for 12 months, and this period ended while I was still organizing my new life.

Personal or small projects

So, this is the first big reason to read this article: it shows how AWS enables personal or small projects by using the proper tools.

Cost efficiency

The tools we will use are cost-efficient: you pay for how much you use them, not for how long (i.e., you don't pay for idle resources).

Serverless

It sounds logical to pay for resources that sit waiting for you to do something with them, doesn't it? But what happens if you don't make any request to the API in a given month? Well, you still pay. As long as the compute instance is up and ready to respond to your requests, AWS charges you for it (and that's fair!).

Now, imagine if you could have an API that's always ready to respond to requests but only runs when a request is received. That's the idea of AWS Lambda functions. They're only spun up, and therefore charged, when you need them. No need for a compute instance waiting around for something to do.

Lambda is AWS' event-driven serverless compute service. It runs code triggered by hundreds of event types (such as API requests). Lambda is the core of the free API we're building.

GraphQL

Well, we could just go straight to a REST API, which most backend developers are familiar with, but we could take this opportunity to learn and try something new.

What’s GraphQL?

GraphQL is a server-side runtime and query language. It runs over HTTP with GET and/or POST methods (so we already know a lot about it).

Shared language

It provides a shared language between the API's consumers (e.g., website, mobile app) and producers (backend server). It's not bound to any specific programming language or framework: we can use GraphQL anywhere that has a tool or library that understands its language.

This shared language allows us to have a powerful schema and query language. We’ll explore why they’re so powerful in the code section.

Versionless API

When you make a change to an API, it can break communication with consumers. Every change might be a breaking change. For example, if a new field is added to the response, is the consumer prepared to handle that field? Can you be sure it's not going to cause unexpected errors?

In GraphQL, every query must list all fields that should be returned. That means added fields are never returned unexpectedly, so additions won't break the consumer's response handling, making the API less likely to introduce breaking changes.

It does not mean that the API is breaking-change-proof. Removing fields, for example, would still cause errors.

Core concepts

Before going into code, it’s important to define some concepts that we’re going to use:

  • Schema: the definition of your API. Its types, queries, and mutations.
  • Entity: a type within your schema. It’s similar to REST’s resource.
  • Query: it’s used to fetch data. As a convention, queries should not cause side effects. It’s similar to GET requests to REST resources.
  • Mutation: it’s used to change/update data. Conventionally, mutations cause side effects. It’s similar to POST/PUT requests to REST resources.

GraphQL’s request anatomy

GraphQL requests are usually represented as JSON. Since it runs over HTTP, they look a lot like REST requests.

{
  "method": "post",
  "headers": {
    "Content-Type": "application/json"
  },
  "body": {
    "query": "{
      goals {
        id
      }
    }"
  }
}

The most significant difference from a usual REST JSON request is the body. It’s composed of a query field at the root of the JSON. The inner string looks like a JSON string, but it’s not. You possibly spotted some of the differences already:

  • The query and field names are not surrounded by double quotes;
  • No colons before the curly braces {;
  • The fields are declared without a value.

It’s time to code!

Pre-requisites

To follow along, you’ll need NodeJS and Yarn, Docker, the AWS CLI, and the AWS SAM CLI installed, plus an AWS account for the deployment part.

GitHub repo

I provided a public GitHub repo with the code for a working API. Feel free to fork or clone the repo if you don’t want to follow the steps.

Initializing the project

The API will be written in NodeJS. Therefore, create the project by running:

$ yarn init

Follow the prompts and the package.json file will be created. Also, create a folder src at the same level as package.json.

apollo-server-lambda

To achieve a GraphQL API in AWS Lambda, we will use the apollo-server-lambda library for NodeJS. It provides a GraphQL server on top of Express. First, we must install apollo-server-lambda and graphql.

$ yarn add apollo-server-lambda graphql

Schema

As I mentioned before, I have a Savings API that I use to keep track of all my savings goals. We’re going to create a lite version of it with GraphQL.

The first thing we want to do is define our savings goal type, having the following fields:

  • id: The ID of the Goal
  • title: What we’re saving for (e.g., “Emergency reserve”)
  • savedAmount: How much we’ve saved so far
  • targetAmount: How much we want to save in total
  • description: Some more explanation of the savings goal (e.g., “Six times my current salary”).
  • targetDate: The deadline to have saved the money

And we’ll have the following rules for our goals:

  • A goal must always have an ID;
  • A goal must always have a title;
  • When we’ve saved no money for a goal, it should return zero as savedAmount.

We’ll define our schema in the main file. So, create src/graphql.mjs and copy the following content.
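
Based on the fields and rules above, the Goal type can be sketched like this (using Float for the amounts and a plain String for the date is my assumption; the repo may use different scalars):

import { gql } from 'apollo-server-lambda';

const typeDefs = gql`
  type Goal {
    id: ID!
    title: String!
    savedAmount: Float!
    targetAmount: Float
    description: String
    targetDate: String
  }
`;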

Type Goal definition inside of the schema

Here we start to see the powers of GraphQL’s schema. We can define the type of each field, including whether the field is nullable or not. The exclamation point ! marks a field as non-nullable.

GraphQL embraces the type definitions: if the backend service returns something that does not respect the schema, an error is returned. This makes the definitions trustworthy.

I like to say the following when having discussions about code documentation:

Every piece of documentation that needs manual updates organically becomes outdated

What I mean by this is that we should keep documentation as close to the code as possible and update it automatically whenever possible. GraphQL’s schema addresses this by providing the shared language and native validation of whether the schema is respected, without the need for external validation libraries, for example.

Going ahead with the code, we define the type Query. This type is reserved by GraphQL and must be at the root of the schema. This is where we define our queries. We’ll add two queries: goal and goals.
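
A sketch of that part of the schema, matching the behavior described next:

  type Query {
    goal(id: ID!): Goal
    goals: [Goal!]!
  }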

Type Query definition inside of the schema

The goal query fetches one goal by its id. It expects a non-nullable argument id and returns a nullable Goal. In case there is no goal with the provided id, the resulting data will be null.

The goals query fetches all goals. It expects no argument and returns a non-nullable list of non-nullable Goal. It means that every Goal returned within the list is non-null, and the list itself is always non-null. In case there are no goals, an empty array will be returned.

At last, we define our type Mutation. It’s also reserved by GraphQL and must be at the root of the schema. Every mutation will be within it. In this example, we’ll have only one mutation: addGoal.

The addGoal mutation adds a new goal. It accepts every field from the type Goal, but the declaration is different: in addGoal, id and savedAmount are not mandatory. The id will be set by the service if no value is given, and savedAmount will be zero if it has no value.

Our full schema will, then, look like this:
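
Putting the pieces together, a sketch of the whole typeDefs (same assumptions as before):

const typeDefs = gql`
  type Goal {
    id: ID!
    title: String!
    savedAmount: Float!
    targetAmount: Float
    description: String
    targetDate: String
  }

  type Query {
    goal(id: ID!): Goal
    goals: [Goal!]!
  }

  type Mutation {
    addGoal(
      id: ID
      title: String!
      savedAmount: Float
      targetAmount: Float
      description: String
      targetDate: String
    ): Goal!
  }
`;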

Full schema

Tip: If you’re using VSCode, the GraphQL extension (from the GraphQL Foundation) will highlight the typeDefs string, making its syntax easier to read. 😉

Resolvers

Now that our schema is defined, we must provide implementations for our queries and mutations. The implementations are given in an object that contains a Query and a Mutation attribute. Inside the Query object, we must have one attribute for each query, respecting its name; here, there’ll be two: goal and goals. At first, we’ll just return a hardcoded Goal.
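
A sketch of the Query resolvers, with a made-up hardcoded goal:

const hardcodedGoal = {
  id: '1',
  title: 'Emergency reserve',
  savedAmount: 1000,
  targetAmount: 10000,
  description: 'Six times my current salary',
  targetDate: '2023-12-31',
};

const resolvers = {
  Query: {
    // Both queries return hardcoded data for now.
    goal: (parent, args) => hardcodedGoal,
    goals: () => [hardcodedGoal],
  },
};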

Query resolvers

The same goes for the mutations. We must have one attribute for each mutation, respecting its name. As follows:
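
A sketch of the Mutation part, sitting next to Query inside the same resolvers object (the fallback values mirror our schema rules):

  Mutation: {
    addGoal: (parent, args) => ({
      ...args,
      // Fall back to a fixed id and a zero savedAmount so the returned
      // goal still respects the schema's non-null fields.
      id: args.id ?? '1',
      savedAmount: args.savedAmount ?? 0,
    }),
  },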

For now, the addGoal mutation does not create anything; it just returns a goal with the values received as arguments, parsing the id and savedAmount so they respect our schema.

The last step to finish our main file is to create and export the server, providing our type definitions and resolvers.

The complete implementation of our graphql.mjs file will be:
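
A sketch of the whole file, under the same assumptions as the snippets above:

import { ApolloServer, gql } from 'apollo-server-lambda';

const typeDefs = gql`
  type Goal {
    id: ID!
    title: String!
    savedAmount: Float!
    targetAmount: Float
    description: String
    targetDate: String
  }

  type Query {
    goal(id: ID!): Goal
    goals: [Goal!]!
  }

  type Mutation {
    addGoal(
      id: ID
      title: String!
      savedAmount: Float
      targetAmount: Float
      description: String
      targetDate: String
    ): Goal!
  }
`;

const hardcodedGoal = {
  id: '1',
  title: 'Emergency reserve',
  savedAmount: 1000,
  targetAmount: 10000,
  description: 'Six times my current salary',
  targetDate: '2023-12-31',
};

const resolvers = {
  Query: {
    goal: (parent, args) => hardcodedGoal,
    goals: () => [hardcodedGoal],
  },
  Mutation: {
    addGoal: (parent, args) => ({
      ...args,
      id: args.id ?? '1',
      savedAmount: args.savedAmount ?? 0,
    }),
  },
};

// apollo-server-lambda wraps the server in a Lambda-compatible handler.
const server = new ApolloServer({ typeDefs, resolvers });

export const handler = server.createHandler();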

Full implementation of graphql.mjs

Running the API

Now we have enough to run and test the first version of our API. To run the API locally, we’re going to use AWS’ Serverless Application Model (SAM). It’s an open-source framework for building serverless applications, and it allows you to run a Docker-based Lambda environment on your machine. If you don’t have it installed yet, please follow AWS SAM’s documentation.

AWS SAM APIs must have a template.yaml file:
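
A minimal sketch of what this template might contain (the resource name GraphQLFunction and the nodejs16.x runtime are my choices; adjust them to your setup):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  GraphQLFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: src/graphql.handler
      Runtime: nodejs16.x
      Events:
        GraphQLApi:
          Type: Api
          Properties:
            Path: /graphql
            Method: ANY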

Template.yaml defining API routes

It simply defines a Lambda function that’s triggered by an HTTP request on the route /graphql. If you’re not familiar with SAM (or CloudFormation), this file represents every resource that we want to create/manage for this API. We’ll add some more resources later on.

For now, this is enough to run our API locally. To do so, run:

$ sam local start-api

Please check that you have Docker and SAM installed. If the API started successfully, you should see a message ending with:

Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)

Querying the API

Since GraphQL runs on top of HTTP, you can query it with JSON requests. But there are some tools that make our lives easier. You could use GraphQL Playground, for example.

During the development of this project, I found out that Postman has quite a good integration with GraphQL. Considering that most backend developers already know Postman, we’ll use it in this article’s examples. If you want, feel free to use another tool.

First, create a POST request to http://127.0.0.1:3000/graphql. In the “Body” tab, you’ll see some options to select the body type, like “none”, “form-data”, “raw”. One of those options will be “GraphQL”. Select it. Then, inside the “Query” field, add the following content:
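
For example, a simple query fetching the id of every goal:

{
  goals {
    id
  }
}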

Now, hit “Send”.

Did it take too long to process? Almost 10 seconds? More than 10 seconds? That’s one of the challenges of developing serverless APIs with AWS Lambda. It takes some time to start the function. Before it can start handling the request, the environment must be loaded. If you call the API again, it should take less time to process.

In case of local development with SAM, the lambda functions run in Docker containers. One container for each Lambda function (that’s why it’s handy to have a single endpoint).

To handle this, SAM provides a CLI option in the start-api command: --warm-containers. The possible values are:

  • EAGER: the docker containers will be loaded on startup of the API.
  • LAZY: The docker container for each Lambda will be loaded when it’s requested, but the container will be persisted between invocations.

I recommend using --warm-containers so it won’t take too long to respond to every call. Then, our start-api command will look like:

$ sam local start-api --warm-containers EAGER

At this moment, I suggest starting to add scripts to your package.json. Inside the file, add a “scripts” attribute at the root of the JSON, if it isn’t there yet. Then add a “start:graphql” attribute and copy the command above as its value. Now you can start the API by running:

$ yarn start:graphql
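
For reference, the scripts section might look like this:

"scripts": {
  "start:graphql": "sam local start-api --warm-containers EAGER"
}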

It’s time to start playing with the API! If you look carefully in Postman, you should see an “Auto-fetch” or “No schema” button. Make sure “Auto-fetch” is selected and click the circular arrow. You should see a green “Schema fetched” text. It means that Postman successfully received the schema of our GraphQL API.

The “Schema fetched” indicator in Postman

If the schema is fetched, Postman will provide autocomplete and type validation in our queries. Test it yourself: type “target” and let Postman show you the options for “targetAmount” and “targetDate”.

The query for all goals, including every field, is the following:
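
{
  goals {
    id
    title
    savedAmount
    targetAmount
    description
    targetDate
  }
}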

And, to query a goal by id (replace “1” with an id that exists in your data):
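
{
  goal(id: "1") {
    id
    title
    savedAmount
    targetAmount
  }
}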

To request a mutation, the mutation prefix has to be added to your query, and the arguments must be provided between parentheses. The fields fetched are declared the same way as in normal queries. For example, with illustrative values:
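
mutation {
  addGoal(
    title: "Trip to Japan"
    targetAmount: 5000
    targetDate: "2023-12-01"
  ) {
    id
    title
    savedAmount
  }
}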

DynamoDB

I think it’s fair to say that this API is not useful without a way to persist the goals we created. So far, everything returned is hardcoded and nothing will be saved. Luckily, AWS provides a free (up to a certain limit) serverless datastore solution: DynamoDB.

DynamoDB is a NoSQL database where you manage only the tables. There’s no need to provision instances to run your database. All we need is to create the tables and configure them. We won’t get into a lot of detail on DynamoDB, since it has a lot of content to cover.

Running DynamoDB Locally

The first step is to run DynamoDB locally. For that, there’s localstack, a tool that emulates AWS services locally. We’re going to run it with Docker and use a docker-compose file to do some configuration automatically.

Create a file named docker-compose-localstack.yaml with the content:
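
A sketch of that file (the service name localstack and the network name graphql-free matter, because both are referenced later in this article; the rest is adjustable):

version: "3.8"

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
    networks:
      - graphql-free

networks:
  graphql-free:
    # Fixed name so SAM can attach to this network later via --docker-network.
    name: graphql-free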

It’ll run localstack and expose it on port 4566. Now run:

$ docker-compose -f docker-compose-localstack.yaml up -d

Since our docker-compose file name is not the default, we must provide it with the -f option. An emulated version of some core AWS services is now running on your local machine.

You can play with it if you want; just keep in mind that to run commands against it, you must provide the --endpoint-url=http://localhost:4566 option.

Creating DynamoDB table

DynamoDB is schemaless, which means that we only need to define its keys. The non-key fields are not defined beforehand. For example:
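
A sketch of that definition, assuming a table named goal with id as its only key and a minimal provisioned capacity:

{
  "TableName": "goal",
  "AttributeDefinitions": [
    {
      "AttributeName": "id",
      "AttributeType": "S"
    }
  ],
  "KeySchema": [
    {
      "AttributeName": "id",
      "KeyType": "HASH"
    }
  ],
  "ProvisionedThroughput": {
    "ReadCapacityUnits": 1,
    "WriteCapacityUnits": 1
  }
}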

The above JSON is the definition for our goal table. Although our GraphQL schema has a lot more fields, in DynamoDB’s table definition we only describe the table name, the key attributes, and the key schema. A quick explanation of the file above:

  • TableName: The table’s name. The table ARN is composed of the AWS account id and the table name, i.e., the table’s name must be unique within the AWS account;
  • AttributeDefinitions.AttributeName: The attribute’s name. Must be unique in the table;
  • AttributeDefinitions.AttributeType: The type of the attribute. “S” defines a String type. There’s a page in AWS’ documentation with the DynamoDB types and the supported Java types (unfortunately, there’s no JavaScript version of this page).
  • KeySchema.KeyType: The type of the key (“HASH” or “RANGE”). “HASH” defines a partition key and “RANGE” defines a sort key. This concept is quite important for DynamoDB (and NoSQL databases in general). If you’re not familiar with what partition and sort keys are, please read AWS’ documentation. I’m not going to explain them here because they have a lot of implications for the data model. For this article, I gently ask you to trust that I did a good job defining the key type 😁. Attention: If you’re going to run this in a production environment, please give DynamoDB’s documentation a good read.
  • KeySchema.AttributeName: A reference to the attribute defined in “AttributeDefinitions.AttributeName”; it must match the name defined there.
  • ProvisionedThroughput: The “processing power” that our table will have. The higher the value given to “ReadCapacityUnits”, the more read capacity our table will have; the higher the value given to “WriteCapacityUnits”, the more write capacity it will have.

Copy the content of the JSON file above and paste it into src/dynamodb/goalTable.json. To create the goal table, run:

$ aws --endpoint-url=http://localhost:4566 --region=us-east-1 dynamodb create-table --cli-input-json file://src/dynamodb/goalTable.json

It creates the DynamoDB table locally in us-east-1 region with the definition provided in src/dynamodb/goalTable.json.

If you run into errors, please run aws configure, follow the steps, and set the region to us-east-1.

Configuring DynamoDB client

To configure the JavaScript DynamoDB client, we must start by installing the SDK. So, run:

$ yarn add @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

Then, create the src/dynamodb/client.mjs file with the content:
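
A sketch of the client, hardcoding the localstack endpoint for local development (a real setup would likely switch the endpoint via an environment variable):

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

// Point the client at the localstack container instead of the real AWS endpoint.
const client = new DynamoDBClient({
  region: 'us-east-1',
  endpoint: 'http://localstack:4566',
});

// The document client marshalls plain JS objects to and from DynamoDB's format.
export const docClient = DynamoDBDocumentClient.from(client);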

Observe that the endpoint is configured as http://localstack:4566. The API and DynamoDB are running inside different Docker containers, which means they won’t be able to communicate via localhost, because localhost inside the Docker container points to the container’s loopback, not the host’s.

Now, if you look at docker-compose-localstack.yaml, you’ll see that there’s a network named “graphql-free”. This means the localstack container is running inside a virtual bridge network, and the API container should run inside it as well. So, we should update the script that starts the API to:

$ sam local start-api --warm-containers EAGER --docker-network graphql-free

The --docker-network option sets the virtual docker network where the container will run.

Implementing DynamoDB integration

I want to help you avoid trouble with DynamoDB’s interfaces, so I’ll address something right away: DynamoDB expects input, and returns output, in a specific format, due to its schemaless nature. As I mentioned before, we only define the key attributes on table creation.

The types of non-key attributes are defined when creating/updating each item (you create fields by having them in your item). As a result, when creating or querying values, we must describe the field types along with their values.

If we want to create a goal, instead of sending an Item like this (values illustrative):
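
{
  "id": "1",
  "title": "Emergency reserve",
  "savedAmount": 1000
}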

what will actually be sent is the same item, with each value wrapped in its type descriptor:
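
{
  "id": { "S": "1" },
  "title": { "S": "Emergency reserve" },
  "savedAmount": { "N": "1000" }
}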

“S” means that the field is of type string and “N” means it’s of type number.

Thanks to AWS’ amazing team, there’s a “special” client that marshalls the input and unmarshalls the output from DynamoDB, meaning we don’t need to care about those conversions. Just make sure to use DynamoDBDocumentClient and the Commands from @aws-sdk/lib-dynamodb instead of DynamoDBClient directly.

Last but not least, we implement the communication with DynamoDB:
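
A sketch of such a client (the file path src/dynamodb/goalClient.mjs and the function names are my choices, not necessarily the repo’s):

import { GetCommand, PutCommand, ScanCommand } from '@aws-sdk/lib-dynamodb';
import { docClient } from './client.mjs';

const TABLE_NAME = 'goal';

export const goalClient = {
  // Fetch one goal by its partition key.
  async getGoal(id) {
    const result = await docClient.send(
      new GetCommand({ TableName: TABLE_NAME, Key: { id } })
    );
    return result.Item ?? null;
  },

  // Fetch every goal in the table.
  async getGoals() {
    const result = await docClient.send(
      new ScanCommand({ TableName: TABLE_NAME })
    );
    return result.Items ?? [];
  },

  // Persist a goal and return it.
  async addGoal(goal) {
    await docClient.send(
      new PutCommand({ TableName: TABLE_NAME, Item: goal })
    );
    return goal;
  },
};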

To finish the code, we just need to change the main file a little. The resolvers will use the goalClient to save data to, and fetch data from, DynamoDB. The final version of src/graphql.mjs will be:
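
A sketch of the final file, reusing the schema and the goalClient sketched above:

import { ApolloServer, gql } from 'apollo-server-lambda';
import { v4 as uuidv4 } from 'uuid';
import { goalClient } from './dynamodb/goalClient.mjs';

const typeDefs = gql`
  type Goal {
    id: ID!
    title: String!
    savedAmount: Float!
    targetAmount: Float
    description: String
    targetDate: String
  }

  type Query {
    goal(id: ID!): Goal
    goals: [Goal!]!
  }

  type Mutation {
    addGoal(
      id: ID
      title: String!
      savedAmount: Float
      targetAmount: Float
      description: String
      targetDate: String
    ): Goal!
  }
`;

const resolvers = {
  Query: {
    goal: (parent, { id }) => goalClient.getGoal(id),
    goals: () => goalClient.getGoals(),
  },
  Mutation: {
    addGoal: (parent, args) =>
      goalClient.addGoal({
        ...args,
        // Generate an id and default savedAmount to zero when absent,
        // so the stored item always respects the schema.
        id: args.id ?? uuidv4(),
        savedAmount: args.savedAmount ?? 0,
      }),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

export const handler = server.createHandler();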

In the addGoal mutation, we add a unique id in case no id is provided. For this to work, you need to install the uuid package:

$ yarn add uuid

Now, if you start the API again, you should be able to create and query the goals and they will be persisted.

Deploying

We’re almost ready to deploy the API to AWS. If you look at template.yaml, you’ll see that we defined only one resource: the GraphQL Lambda function. But we added a DynamoDB table to our architecture, and our template does not have any definition for it.

Let’s, then, create a new file: template-prod.yaml. We’re not using the same template file that we use to run the application locally because we want different configurations for the local and production environments (shorter timeouts in production, for example). The content of this file will be:
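
A sketch of that template (the resource names, runtime, timeout, and capacity units are my choices; tune them to your needs):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  GraphQLFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: src/graphql.handler
      Runtime: nodejs16.x
      Timeout: 10
      Events:
        GraphQLApi:
          Type: Api
          Properties:
            Path: /graphql
            Method: ANY
      Policies:
        # Inline policy granting the function access to the table below.
        - Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:Scan
              Resource: !GetAtt GoalTable.Arn

  GoalTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: goal
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1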

We added two important things:

  • GoalTable: defines our DynamoDB table.
  • Policies: defines an inline policy for the Lambda function to have access to the DynamoDB table. This policy allows the dynamodb:GetItem, dynamodb:PutItem and dynamodb:Scan operations on the table that we define below it.

Now we should have enough to deploy the API to AWS. The deployment will be made with CloudFormation, AWS’ Infrastructure as Code tool. The first thing we need is to create an S3 bucket where the artifacts of our API will be stored:

$ aws s3 mb s3://graphql-free-cf-template --region=us-east-1

You can use the AWS console to create the bucket if you prefer. This is a one-time step. You won’t need to create it again later.

Then, package the code using the package command from CloudFormation CLI:

$ aws cloudformation package --template-file template-prod.yaml --output-template-file cf-template.yaml --s3-bucket graphql-free-cf-template

This command packages the local files that CloudFormation references and copies them to the S3 bucket. It returns a copy of the template file. Our input template file is template-prod.yaml and the output template file is cf-template.yaml.

Then we run the CloudFormation’s deploy command:

$ aws cloudformation deploy --template-file cf-template.yaml --stack-name graphql-free-prod --capabilities CAPABILITY_IAM

This command uses the cf-template.yaml file to define the API resources in AWS. If the CloudFormation stack already existed, it’s updated; if not, it’s created. The --capabilities CAPABILITY_IAM flag states that CloudFormation is allowed to manage IAM resources (e.g., creating the Lambda function’s execution role).

Every time we want to deploy a new version of our API, we need to run both the package and deploy commands, in this order. I recommend creating a script in the package.json file that will run both commands with the proper arguments.
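
For example (the script name “deploy” is arbitrary):

"scripts": {
  "start:graphql": "sam local start-api --warm-containers EAGER --docker-network graphql-free",
  "deploy": "aws cloudformation package --template-file template-prod.yaml --output-template-file cf-template.yaml --s3-bucket graphql-free-cf-template && aws cloudformation deploy --template-file cf-template.yaml --stack-name graphql-free-prod --capabilities CAPABILITY_IAM"
}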

Architecture

You’ve possibly already managed an ECS API in AWS. This is a provisioned approach where you usually have a load balancer, one or many EC2 instances, and a SQL database running in RDS.

This is a quite common approach due to its proximity to the way we’re used to thinking about API architectures: an application running indefinitely, always ready to respond to new requests. All resources are provisioned upfront.

This approach is like running an application on our local machine, but inside a Docker container in the cloud instead of on a laptop or desktop. Our API in this model would look like this:

Architecture composed of one Elastic Load Balancer in front of an EC2 instance, and an RDS instance running a PostgreSQL database.

The Serverless approach looks like this:

Architecture composed of one API Gateway running in front of Lambda functions and DynamoDB as database

Very similar, right? Overall, they look very alike because the principle is the same. The big change is that the API Gateway, Lambda, and DynamoDB are managed by AWS!

There’s no need to worry whether our API Gateway instance is running out of memory, for example. The Lambda function instances are only provisioned when a request is received; they process the request and stop. No resource sits idle waiting for something to happen.

This poses some new challenges, like harder local testing, but advantages appear as well. In this model, we only pay for how much we use the resources, not for how long. That’s, basically, what allows us to run the API for free (up to a certain volume of requests).

Cost comparison

To compare the cost of each architecture, we’ll consider an API that has:

  • 1 million requests/month;
  • 24/7 uptime.

Provisioned (ELB + EC2 + RDS)

  • EC2 (1x t2.micro instance) = $9.27;
  • Application Load Balancer = $22.27;
  • RDS (1x t2.micro instance) = $14.06;
  • CloudWatch Logs = $0.00 ⚠️

Total = $45.60/month

Serverless

  • Lambda = $0.00
  • DynamoDB = $0.00
  • CloudWatch Logs = $0.00 ⚠️
  • API Gateway = $3.50

Total = $3.50 (including optional API Gateway)

⚠️️ CloudWatch Logs is usually free for light loads, but watch out if you start having heavier loads as this service can become very expensive.

There’s a huge difference between the costs: $45 for provisioned vs. $3.50 for serverless.

Is it really free, then?

I promised a free API, didn’t I? So, what about this total of $3.50?

The thing is that API Gateway is only paid for how much it’s used. For 3 thousand requests/month, you’ll pay only 1 cent. And, if your AWS account is less than 1 year old, you pay nothing up to 1 million requests/month.

Also, the Gateway is not mandatory, because we can make the requests directly to the Lambda function, although I don’t recommend it. The only circumstance in which you should drop the Gateway is when you need the API to run at literally $0 cost. If you can afford a couple of bucks a month to have the Gateway, go for it.

Note: 1x Application Load Balancer, 1x t2.micro EC2 instance, and 1x t2.micro RDS instance are free for 12 months. But once the 12 months are done, you need to pay for them, and it’ll cost you the full $45 (and possibly more).

Limitations

We’ve defined the costs for our API and how to really run it for free. But there must be some limitations on the free usage of the API.

In Brazil we have the saying

“Não existe almoço grátis”

which translates to “There’s no free lunch”. If there were no limitations, AWS would probably be bankrupt by now.

AWS Free Tier offers

To understand the limitations, we must understand AWS’s Free Tier offers. There are three types:

  • Free trials: Short-term free trial offers start from the date you activate a particular service.
  • 12 months free: Free for 12 months following your initial sign-up date to AWS
  • Always free: offers that do not expire and are available to all AWS customers.

I extracted these explanations from the AWS Free Tier page.

As I mentioned in the Cost comparison section, the load balancer, EC2 and RDS are 12 months free services. The API Gateway is also 12 months free.

The good thing is that Lambda and DynamoDB are always free, and you’ll only pay for them beyond a certain volume.

Volume

Now that we understand the AWS Free Tier, it is important to understand its limitations for each service. The always-free services have limitations on volume.

For Lambda:

  • 1 million free requests/month;
  • Up to 3.2 million seconds of compute time per month, which is a limit of 400,000 GB-seconds of compute. This is measured based on how much memory is used for how long in each Lambda invocation. If your Lambda is configured with 1024 MB of memory and runs for 1 second, it spends 1 GB-second per invocation. If it’s configured with 512 MB of memory and runs for 1 second, it spends 0.5 GB-seconds per invocation. 400,000 GB-seconds is enough to process 800,000 requests if the function is configured with 512 MB and runs for 1 second each time.

For DynamoDB:

  • 25 GB of storage;
  • 25 provisioned Write Capacity Units;
  • 25 provisioned Read Capacity Units.

DynamoDB tables are provisioned by Read and Write Capacity Units. Oversimplifying, we can assume that this is enough to process 25 read and 25 write operations per second.

The real processing capacity is based on a calculation that considers the size of the data read and written, and whether the reads use eventual or strong consistency.

I won’t dare to try to explain it all here, so if you’re interested in running the API with a volume of multiple requests per second, I advise reading DynamoDB’s documentation.

The worst thing that can happen is that your table will start throwing ProvisionedThroughputExceededException if you exceed the provisioned capacity. No extra charges will happen (as long as autoscaling is not enabled).

For CloudWatch:

  • 10 custom metrics and alarms;
  • 1 million API requests;
  • 5 GB of log data ingestion and 5 GB of log data archive. This is how much log data you write. If your log messages are 1 KB each, you can have up to 5 million log messages. Watch out, though, as this is one of the causes of high costs in CloudWatch Logs.
  • 3 Dashboards with up to 50 Metrics Each per month.

For API Gateway:

  • 1 million API calls/month (free for 12 months only).

Attention ⚠️

All of the volume limitations are account-wide. If you have more than one Lambda function, API Gateway, DynamoDB table, or application logging into CloudWatch, you must consider the summed usage of all of them. For example, if you have 3 Lambda functions (even if they’re not part of the same API) and each is invoked 1 million times a month, the usage to be considered is 3 x 1 million = 3 million requests/month.

Benchmark

We have the API running locally and in AWS. We’ve understood the architecture, the cost comparison between the provisioned and serverless approaches, and the limitations we have to respect to run it for free. Now it’s time to check how well it performs. We’ll use Grafana Labs’ k6 to run a very simple load test pointing to our API in AWS.

With 50 virtual users querying a goal by id for 5 minutes, the API achieved the following results:

  • 100% successful requests
  • Average response time = 196.59ms
  • Maximum response time = 1.48s
  • Response time of 95% of requests is less than or equal to 307.24ms

If you want to run the performance test, install Grafana Labs’ k6, download the performance-k6.js file from the GitHub repo, and run the following command:

$ k6 run -e GRAPHQL_ENDPOINT=<api-endpoint> -e GOAL_ID=<goal-id> performance-k6.js --vus 50 --duration 300s
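
For reference, the core of such a script might look roughly like this (the actual file in the repo may differ):

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Query a single goal by id; both values come from the -e flags above.
  const query = `{ goal(id: "${__ENV.GOAL_ID}") { id title savedAmount } }`;

  const res = http.post(
    __ENV.GRAPHQL_ENDPOINT,
    JSON.stringify({ query }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  check(res, { 'status is 200': (r) => r.status === 200 });
}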

Conclusion

This is everything we need to run a free GraphQL API in AWS. This article is meant to provide all the steps needed to build this API and to explain the advantages of using the serverless approach instead of a provisioned one.

If you are not very familiar with some of the concepts or tools used here, you can leave me a question in this post’s comments.

Thank you for taking the time to go over this solution, and I hope it helped you grow (even if just a little bit) as a Software Engineer.

I’d love to hear your feedback and improve based on it.

TL;DR

  • Serverless architectures in AWS are cost-efficient, making it possible to run a personal/small API at zero cost.
  • The API is free if:
    - It has up to 1 million requests/month and you don’t use API Gateway (I highly recommend using it, though).
    - You use API Gateway and have up to 1.4 thousand requests/month.
    - It has up to 1 million requests/month, you use API Gateway and your AWS account is less than 1 year old.
  • Even if the API is not free, it’s going to be very cost efficient and you’ll pay only for what you use: 3 thousand requests with API Gateway will cost 1 cent/month, for example.
  • GraphQL is a server-side runtime and query language. Its powerful schema and query language are good reasons to look it up if you haven’t yet.
  • The architecture is composed of an API Gateway, a Lambda function and a DynamoDB table.
  • You can test the API locally using AWS SAM and localstack.
  • I committed a working version of the API to this public GitHub repo. Feel free to clone or fork it!

Do you think you have what it takes to be one of us?

At WAES, we are always looking for the best developers and data engineers to help Dutch companies succeed. If you are interested in becoming a part of our team and moving to The Netherlands, look at our open positions here.

WAES publication

Our content creators constantly create new articles about software development, lifestyle, and WAES. So make sure to follow us on Medium to learn more.

Also, make sure to follow us on our social media:
LinkedIn · Instagram · Twitter · YouTube
