Integrating AWS Auto Scaling with the new Nginx Plus R17 API

Nginx powers some of the busiest websites in the world, and is a web server that can also be used as a reverse proxy, application server, and WAF

Bruno Paiuca
4 min read · Dec 19, 2018


If you run your application workload on AWS, you are probably looking for benefits like a large pool of resources to scale your infrastructure and handle a high volume of requests. Auto Scaling Groups (ASGs) can help you manage elastic compute capacity based on metrics or scheduled actions. An ASG also helps with other operational challenges, such as replacing unhealthy instances and integrating with deployment solutions like AWS CodeDeploy.

When you run your application in an Auto Scaling Group, you will probably use an AWS ELB or ALB to load balance traffic across your fleet of EC2 instances. You can simply use the load balancer's DNS name as the endpoint, or as a CNAME for your application's main endpoint. This approach works fine for most use cases, but sometimes you need to go beyond it. In some scenarios you will need features that the AWS services can't provide, such as caching, advanced throttling, a key-value store, integrations via Lua scripts, Server Side Includes, or gRPC.

Traditional Approach

Some requirements or improvements can lead you to create a layer in front of your applications that handles the features mentioned above, using software like Nginx Community, Nginx Plus, or Varnish. You can use another ASG with an ELB/ALB, or Route 53, to route traffic to your Nginx fleet.

Approach using an Nginx+ fleet in front of our application layer — using proxy_pass to the ELB DNS endpoint

The approach above works very well and allows you to implement fine-grained settings for caching, throttling, resilience, performance, connection pooling, A/B testing, and more, covering features the AWS solution lacks. The problems you may face with this approach involve keepalive timeouts, redundant load balancing, and billing costs for the traffic that passes between the two ELBs, besides use cases such as UDP load balancing, sticky sessions, active/passive upstreams, and WebSocket implementations.

Another approach is to integrate Nginx+ with the AWS Auto Scaling API to manage the upstream nodes, dropping the ELB layer that routes traffic to the application EC2 instances.

Nginx+ and the nginx-asg-sync module integrated with the AWS Auto Scaling API to manage upstream nodes

In this approach, the nginx-asg-sync module talks to the Auto Scaling API and registers or deregisters EC2 instances in the upstream as the scaling group creates or terminates them. This way you can proxy_pass directly to the instances, reducing points of failure and the ELB traffic cost.

Step 1- AWS API Access

The Nginx EC2 instance needs to be launched with an attached IAM role that grants AmazonEC2ReadOnlyAccess. The module needs it to query the Auto Scaling API.
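If you prefer a narrower grant than the AmazonEC2ReadOnlyAccess managed policy, a custom policy along these lines should cover the module's read-only lookups. This is a sketch: the exact minimal set of actions may vary between module versions, so verify against the nginx-asg-sync documentation for your release.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
```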

Step 2 — Setup Nginx Repositories and Key access

Create your repo file in /etc/yum.repos.d/nginx-plus.repo:

[nginx-plus]
name=nginx-plus repo
baseurl=https://plus-pkgs.nginx.com/centos/7/$basearch/
sslclientcert=/etc/ssl/nginx/nginx-repo.crt
sslclientkey=/etc/ssl/nginx/nginx-repo.key
gpgcheck=1
enabled=1

Copy your repository certificate and private key (nginx-repo.crt and nginx-repo.key) to /etc/ssl/nginx.

Step 3 — Install Nginx and nginx-asg-sync

yum install -y nginx-plus
yum install -y https://github.com/nginxinc/nginx-asg-sync/releases/download/v0.2-1/nginx-asg-sync-0.2-1.el7.x86_64.rpm

Enable the services:

systemctl enable nginx-asg-sync
systemctl enable nginx

Step 4 — Configure your app to use the dynamic upstream

upstream APP_BACKEND {
    zone APP_BACKEND 32k;
    state /var/lib/nginx/state/APP.conf;
    least_conn;
}

match APP-HEALTHCHECK {
    status 200;
}

server {
    server_name _;
    access_log /var/log/nginx/$server_name.log main;

    location / {
        health_check interval=10 fails=6 passes=1 uri=/healthcheck match=APP-HEALTHCHECK;
        proxy_pass http://APP_BACKEND;
        proxy_redirect off;
    }
}
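Note that the upstream block has no static server entries: nginx-asg-sync writes them into the state file as instances come and go. After a sync, /var/lib/nginx/state/APP.conf might look like the sketch below (the addresses are hypothetical):

```nginx
server 10.0.1.12:80;
server 10.0.2.34:80;
```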

Step 5 — Enable the Nginx API

Edit the file /etc/nginx/conf.d/default.conf and configure the API:

Nginx Plus R17 onwards:

location /api/ {
    api write=on;
    allow 127.0.0.1;
    deny all;
}

Nginx Plus R16 and previous versions:

location /upstream_conf {
    upstream_conf;
    allow 127.0.0.1;
    deny all;
}

location /status {
    status;
}

Since version R13 a new API has been available, so there are some syntax differences. As of R17, released last week, the old API is gone and only the new API works.
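Either location block must live inside a server that listens on the port your aws.yaml points at (8080 in the examples below). A minimal sketch of /etc/nginx/conf.d/default.conf for R17, also serving the live activity dashboard, could look like this; the listen port is an assumption that must match your api_endpoint setting:

```nginx
server {
    listen 8080;

    location /api/ {
        api write=on;
        allow 127.0.0.1;
        deny all;
    }

    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}
```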

Step 6 — Configure the nginx-asg-sync module

Create the file /etc/nginx/aws.yaml:

For Nginx Plus R17 onwards:

region: us-east-1
api_endpoint: http://127.0.0.1:8080/api
sync_interval_in_seconds: 20
upstreams:
  - name: APP_BACKEND
    autoscaling_group: APP_ASG_NAME
    port: 80
    kind: http

For Nginx Plus R16 and previous:

region: us-east-1
upstream_conf_endpoint: http://127.0.0.1:8080/upstream_conf
status_endpoint: http://127.0.0.1:8080/status
sync_interval_in_seconds: 20
upstreams:
  - name: APP_BACKEND
    autoscaling_group: APP_ASG_NAME
    port: 80
    kind: http

Note: APP_ASG_NAME is the name of the Auto Scaling Group associated with the application.

Step 7 — Start the services

systemctl start nginx-asg-sync
systemctl start nginx

Test the Load Balancing and Scaling Process

Now you can check that it is working using the live activity dashboard on port 8080: /status.html in older versions, or /dashboard.html with the new API. Change the desired number of instances in the ASG up or down, and you will see the instances being registered or deregistered in your Nginx+ upstream.
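You can also inspect the upstream through the API itself. The sketch below saves a hypothetical response to a local file so the extraction step is reproducible; on a live instance you would pipe curl's output instead. The addresses and the API version segment in the URL are assumptions and may differ on your release.

```shell
# On the Nginx Plus instance you would query the upstream directly, e.g.:
#   curl -s http://127.0.0.1:8080/api/4/http/upstreams/APP_BACKEND
# Here we fake that response locally (hypothetical peers) to show the parsing:
cat > /tmp/app_backend.json <<'EOF'
{"zone":"APP_BACKEND","peers":[{"server":"10.0.1.12:80","state":"up"},{"server":"10.0.2.34:80","state":"up"}]}
EOF

# List the peer addresses currently registered in the upstream:
grep -o '"server":"[^"]*"' /tmp/app_backend.json
```

After a scale-out event, the peers list should grow to match the instances in service in the ASG; after a scale-in, the terminated instances disappear from it.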

Summary

With this implementation you can reduce the complexity of your environment and cut some of the ELB/ALB traffic costs.

References

https://www.nginx.com/blog/load-balancing-aws-auto-scaling-groups-nginx-plus/
