HTTPS and Caching support for Speedy
Web acceleration is an inherent part of any modern WS (Web Service) deployment. Though Speedy supports both HTTP and HTTPS out of the box, many deployments use plain-text HTTP behind a reverse proxy to keep network and total compute costs low. In this article we offer a step-by-step guide to setting up a reverse proxy and web accelerator alongside Speedy.
Prelude
Of all the software we tried during our internal benchmarking, we found Nginx and Varnish to be the champions in their respective areas. This isn’t an article debating which web server, reverse proxy, or caching software is best. We recommend using Nginx and Varnish along with Speedy, but you can replicate this setup with any software you prefer. Our choice of Nginx is based on its legendary scaling and reliability, while Varnish outshines the alternatives with its support for VCL programming and VMODs if need be. That said, the best software is the one you have the most experience with!
Deployment Architecture
The following picture depicts the deployment architecture: client requests hit Nginx (HTTPS termination), which proxies to Varnish (the cache), which in turn forwards to Speedy.
Let’s walk through how to set up each piece of the stack, in reverse order (right → left). Speedy uses 3023 as its default port; this can be overridden with the -port parameter.
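For instance, assuming the Speedy binary is simply called speedy on your installation (adjust the name to your build), you could pin it explicitly to the default port like so:
speedy -port 3023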
Whoami?
Throughout this article we assume that you are on a linux box and that you have logged in as root or as a user with sudo privileges. Unfortunately we have never attempted to set up speedy on any other OS (e.g. macOS or Windows). This is not a blatant oversight; it is just that we are comfortable with linux, and especially with Ubuntu Server. It gets the job done. whoami is a unix command which shows who you are, i.e. your username.
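If you are logged in as root, for example, it simply prints root:
# whoami
root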
Getting a free certificate with Let’s Encrypt
The first thing we need is a certificate to secure all incoming connections to speedy. If you are an enterprise or already have a certificate from a valid CA, skip this section. In case you want to save some money on certificates, follow this guide to grab one from Let’s Encrypt, which continues to offer SSL certificates for $0 (yup, that’s totally free). You can get a free certificate for your registered domain in under 10 minutes.
The prerequisite is that you have a domain (e.g. www.example.com) and access to your DNS console, aka domain console. Suppose we would like to host our web services at api.example.com on a VM. First, find the IP address of the VM on which the server will run (if you are unsure, run hostname -i on the VM). Then, on your DNS console, add an “A” record for api.example.com pointing to the IP address of the VM (preferably an elastic or floating IP). This ensures that any request pointed at api.example.com reaches the server. Here is a link to the Google Domains documentation on adding an A record.
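Before moving on, you can optionally confirm that the A record has propagated, for example with dig (or nslookup), and check that it returns the VM’s IP address:
dig +short api.example.com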
Now execute the following commands one by one to install certbot on the VM that will host the web server, i.e. nginx in front of speedy.
apt install snapd
snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot
Now execute the following command to grab a free certificate for the domain.
# certbot certonly --standalone
>>>> You will be asked several questions; answer them carefully <<<<
>>>> Your typical output will be something similar to this <<<<
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Please enter the domain name(s) you would like on your certificate (comma and/or space separated) (Enter 'c' to cancel): api.example.com
Requesting a certificate for api.example.com
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/api.example.com/fullchain.pem
Key is saved at: /etc/letsencrypt/live/api.example.com/privkey.pem
This certificate expires on 2022-07-23.
These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.
You are all set with your SSL certificate now!
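If you would like to verify that the automatic renewal certbot set up will actually work, you can optionally run a dry run:
certbot renew --dry-run
Keep in mind that the standalone authenticator needs to bind port 80, so once nginx is running you may need to briefly stop it for renewals, or switch to a webroot/nginx-based authenticator.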
Install Varnish
Depending on the platform you are on, verify the varnish version available in your distribution’s repository. As of this article, we recommend running varnish v6.5. Note that the commands below target a yum-based distribution (e.g. CentOS/RHEL 7, matching the packagecloud repository used); adapt them accordingly if you are on Ubuntu Server. It takes only a few minutes to set up, configure, and run varnish; if you have a large fleet of servers and intend to run Speedy on all of them, you might want to consider writing an Ansible or Chef playbook.
yum install -y epel-release
yum install pygpgme yum-utils
Edit the repo file to download varnish v6.5
vi /etc/yum.repos.d/varnishcache_varnish65.repo
>>>> Copy + Paste the following into the repo file <<<<
[varnishcache_varnish65]
name=varnishcache_varnish65
baseurl=https://packagecloud.io/varnishcache/varnish65/el/7/$basearch
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/varnish65/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300

[varnishcache_varnish65-source]
name=varnishcache_varnish65-source
baseurl=https://packagecloud.io/varnishcache/varnish65/el/7/SRPMS
repo_gpgcheck=1
gpgcheck=0
enabled=1
gpgkey=https://packagecloud.io/varnishcache/varnish65/gpgkey
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300
>>>> EOF <<<<
Install varnish with the following commands:
yum -q makecache -y --disablerepo='*' --enablerepo='varnishcache_varnish65'
yum install -y varnish
systemctl enable varnish
Configure varnish to forward all incoming requests to speedy. Edit the file default.vcl located at /etc/varnish using your favourite editor, and modify the following lines accordingly.
vcl 4.1;

# Host and port where Speedy is running. It is highly discouraged
# to run Varnish and Speedy on different nodes, as it will incur an
# additional network hop, degrading performance.
backend default {
.host = "127.0.0.1";
.port = "3023";
}
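By default, varnish honours whatever Cache-Control/Expires headers Speedy sends. If your Speedy responses carry no caching headers and you still want them cached, one purely illustrative addition to default.vcl is a short fallback TTL; tune or drop this to match your API’s semantics:
sub vcl_backend_response {
    # Illustrative only: give responses that arrived without a usable
    # TTL a short fallback lifetime of two minutes in the cache.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 120s;
    }
}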
Now for the last step: configuring varnish memory allocation and port mapping. Follow the steps below to do both; comments are inline with the configuration file.
vi /lib/systemd/system/varnish.service

# Edit the Varnish service and modify the following data
# a. Port number on which Varnish must listen for requests
# b. Memory to be consumed by Varnish for caching the data
#
# Look for the following line in the service file
ExecStart=/usr/sbin/varnishd \
-a :6040 \ # Port number is on this line
-a localhost:6041,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,1g # Memory to be allocated is on this line
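After editing the unit file, reload systemd and restart varnish so the new port and memory settings take effect, then confirm the service is healthy:
systemctl daemon-reload
systemctl restart varnish
systemctl status varnish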
Install Nginx
Follow the steps below to install and configure nginx on your VM.
apt update && apt upgrade
apt install nginx
unlink /etc/nginx/sites-enabled/default
We need to configure nginx as the HTTPS termination node and reverse proxy to varnish. To do so, edit/create reverse-proxy.conf under /etc/nginx/sites-available/ with your favourite editor and input the following into it.
server {
listen 443 ssl;
server_name <fqdn-of-your-server>; # e.g. api.example.com
ssl_certificate <path/to/fullchain.pem>;
ssl_certificate_key <path/to/privkey.pem>;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
access_log /var/log/nginx/rp-access.log;
error_log /var/log/nginx/rp-error.log;
location / {
proxy_pass http://127.0.0.1:6040;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
}
}
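Optionally, if you also want plain HTTP requests on port 80 redirected to HTTPS, a minimal additional server block along these lines (not part of the original configuration above) would do it:
server {
    listen 80;
    server_name <fqdn-of-your-server>;
    # Permanently redirect all plain-HTTP traffic to the HTTPS endpoint.
    return 301 https://$host$request_uri;
}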
Now go ahead and link the reverse-proxy configuration into sites-enabled.
ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf
service nginx configtest
service nginx restart
systemctl status nginx
You should see nginx working fine by now! If you run into any issues, carefully re-read the article and make sure you followed every step to the letter.
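As a final sanity check, you can hit the HTTPS endpoint with curl (substitute your own domain and an actual path served by Speedy) and confirm that a response comes back through nginx and varnish:
curl -I https://api.example.com/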
Need more!
In case you need more help with this, reach out to our support team via email at support@getspeedy.app; we are more than happy to help.