Autoscaling Wallarm Nodes in AWS, GCP, and Azure

Wallarm · Sep 25, 2019 · 3 min read

Newly updated Wallarm Node images now natively support autoscaling in AWS, GCP, and Azure. The updated images are already available in the cloud provider marketplaces and can use each provider's native autoscaling to adjust the number of nodes based on traffic, CPU load, and other parameters.

What is Autoscaling?

Many of our customers rely on autoscaling capabilities to horizontally scale their apps and APIs. Autoscaling mechanisms monitor your applications and automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.

Earlier releases could already scale Wallarm Nodes out automatically based on load through native Kubernetes support. Now you can also dynamically add nodes or remove underutilized ones using the native autoscaling mechanisms of AWS, GCP, and Azure.

Native autoscaling is supported in Wallarm Node 2.12.0+. Find the images in Amazon Web Services, Google Cloud Platform, and Microsoft Azure marketplaces.

Setting Up Autoscaling (AWS Example)

Let’s take a look at setting up autoscaling in AWS.

You can scale the number of instances based on several standard load parameters, including:

  • CPU utilization; and
  • the amount of inbound/outbound network traffic.

For example, in AWS you can set up the following policy for a group of Wallarm Node instances:

If Average CPU Utilization exceeds 60% for over 5 min then add 2 more nodes.
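As a rough sketch, a policy like this can be expressed in CloudFormation. The resource names and the reference to a `WallarmNodeGroup` Autoscaling Group below are illustrative assumptions, not values from the Wallarm documentation:

```yaml
# Hypothetical CloudFormation sketch: add 2 instances when the group's
# average CPU stays above 60% for 5 minutes.
Resources:
  ScaleOutPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WallarmNodeGroup   # assumed ASG, defined elsewhere
      AdjustmentType: ChangeInCapacity
      ScalingAdjustment: 2                          # "add 2 more nodes"
      Cooldown: "300"

  HighCpuAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      MetricName: CPUUtilization
      Namespace: AWS/EC2
      Statistic: Average
      Period: 300                                   # 5-minute window
      EvaluationPeriods: 1
      Threshold: 60
      ComparisonOperator: GreaterThanThreshold
      Dimensions:
        - Name: AutoScalingGroupName
          Value: !Ref WallarmNodeGroup
      AlarmActions:
        - !Ref ScaleOutPolicy                       # fire the scale-out policy
```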

These are the steps required to set up autoscaling in AWS.

1. Create an Autoscaling Group.

  • Create a launch template. A launch template defines the instance type to be used during the deployment of an AMI and sets up some of the general virtual machine parameters. You need to choose the AMI of Wallarm Node that supports autoscaling.
  • Create an Autoscaling Group. An Autoscaling Group is a group of instances that scales in and out according to the chosen scaling policy. You can configure scale-out behavior with the “Increase Group Size” policy group and scale-in behavior with “Decrease Group Size”. AWS supports several metrics for autoscaling, including CPU Utilization (in percent), Network In (in bytes), and Network Out (in bytes).
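The two steps above can be sketched in CloudFormation roughly as follows. The AMI ID, instance type, subnet, and group sizes are placeholders you would replace with your own values (in particular, the AMI ID must be the Wallarm Node AMI with autoscaling support from the AWS Marketplace):

```yaml
Resources:
  # Launch template: which AMI to deploy and the general VM parameters.
  WallarmLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder: Wallarm Node AMI (2.12.0+)
        InstanceType: m5.large           # placeholder instance type

  # Autoscaling Group: the pool of filter node instances that scales.
  WallarmNodeGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      DesiredCapacity: "2"
      VPCZoneIdentifier:
        - subnet-0123456789abcdef0       # placeholder subnet
      LaunchTemplate:
        LaunchTemplateId: !Ref WallarmLaunchTemplate
        Version: !GetAtt WallarmLaunchTemplate.LatestVersionNumber
```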

2. Set up a load balancer for the Autoscaling Group.

  • Create a load balancer. Once you have configured a filter node Autoscaling Group, you need to create and configure a load balancer that distributes incoming HTTP and HTTPS connections among several filter nodes from the Autoscaling Group.
  • Set up the Autoscaling Group to use the created balancer. Configure your Autoscaling Group to use the load balancer you created earlier. This allows the balancer to route traffic to the filter node instances launched in the group.
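A minimal sketch of the load balancer side, again with placeholder subnet and VPC IDs, might look like this:

```yaml
Resources:
  # Application Load Balancer that fronts the filter nodes.
  WallarmALB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
        - subnet-0123456789abcdef0       # placeholder subnets
        - subnet-0fedcba9876543210

  # Target group the filter node instances register into.
  WallarmTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 80
      Protocol: HTTP
      VpcId: vpc-0123456789abcdef0       # placeholder VPC

  # Listener that forwards incoming HTTP connections to the target group.
  HttpListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref WallarmALB
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref WallarmTargetGroup
```

To connect the two halves, the Autoscaling Group references the target group via its `TargetGroupARNs` property, so newly launched filter nodes register with the balancer automatically.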

Try It Now

Detailed tutorials on how to set up autoscaling in AWS, GCP, and Azure are available in the Wallarm Docs.
