In the cloud space, the actual gold rush is headed toward the serverless mine. For some applications, this paradigm may bring benefits. A lot of applications, though, just don’t fit well in it. Maybe a low and predictable latency is a strict requirement, or maybe you have to deal with a legacy application built with ancient technologies. Even Docker may not be the best choice for some workloads, as we’ll see in a later example. In these cases, the service we are going to explore in this article can help you reduce costs and increase the resiliency of your application.
AutoScaler.cloud is a service that makes it easy to automatically scale your server fleet, using the load it actually sustains as input. It also provides an auto-healing function.
How auto scaling works
If you’ve ever tried to embed into application code the logic to automatically scale infrastructure, you may already have learned, the hard way, that the simple algorithm of adding new servers whenever a certain threshold is crossed doesn’t work. The chart on the side shows how servers are created and destroyed as soon as the load oscillates around the desired value. This flapping is undesirable and will cause your application to perform very badly.
AutoScaler uses a more sophisticated algorithm, optimized to minimize the creation and deletion of nodes.
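To see why the naive approach flaps, compare it with a version that adds a deadband and a cooldown. This is only an illustrative sketch, not AutoScaler’s actual algorithm; the `deadband` and `cooldown` parameters are made-up names for the general technique.

```python
import time

def naive_scaler(load, target, count):
    # Naive approach: react to every threshold crossing.
    # A load hovering near the target creates and destroys a server on every sample.
    if load > target:
        return count + 1
    if load < target:
        return max(1, count - 1)
    return count

class DampedScaler:
    """Scale only outside a deadband, and never more often than `cooldown` seconds."""

    def __init__(self, target, deadband=0.2, cooldown=300):
        self.target = target        # desired average load per node
        self.deadband = deadband    # tolerated relative deviation around the target
        self.cooldown = cooldown    # minimum seconds between scaling actions
        self.last_action = 0.0

    def decide(self, load, count, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_action < self.cooldown:
            return count  # still cooling down: do nothing
        if load > self.target * (1 + self.deadband):
            self.last_action = now
            return count + 1
        if load < self.target * (1 - self.deadband):
            self.last_action = now
            return max(1, count - 1)
        return count
```

With a deadband around the target and a cooldown between actions, a load hovering near the threshold no longer triggers a create/delete cycle on every sample.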
How auto healing works
The controller performs another important action besides automatic scaling: if a server fails the health checks that are regularly performed on it, a new server is added to the pool and the unhealthy one is deleted.
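The replace-on-failure behavior can be pictured as a small control loop. Everything here is hypothetical (`check_health`, `create_server`, and `delete_server` are stand-ins for provider API calls); the real controller runs server-side.

```python
def heal(pool, check_health, create_server, delete_server):
    """Replace every server in `pool` (a list of server ids) that fails its health check.

    The three callables are hypothetical stand-ins for cloud provider API calls.
    """
    for server in list(pool):          # iterate over a copy: we mutate `pool`
        if not check_health(server):
            replacement = create_server()  # add the new node first...
            pool.append(replacement)
            delete_server(server)          # ...then remove the unhealthy one
            pool.remove(server)
    return pool
```

Note the order: the replacement is added before the unhealthy server is deleted, so the pool’s capacity never drops during healing.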
Create your first autoscaler
After registering for the service here and logging in, click the Create an autoscaler button in the dashboard.
After selecting DigitalOcean from the list of providers, you will be redirected to their control panel, where you must grant AutoScaler permission to create and delete servers. The service will only delete the droplets it created itself. After being redirected back to the dashboard, click the Create an autoscaler button again and configure the pool as you wish.
The service will maintain the Minimum server number: if a node stops sending heartbeats, it will automatically be replaced by a new one.
If you enable auto scaling, you must specify the Maximum server number to tell the system not to create more servers than that amount; in this case you must also specify the Target load, which is the average system load, across all nodes in the pool, that you want to maintain.
Enabling auto scaling is optional. If it is not enabled, the service will simply keep the pool at the Minimum server number. Even without auto scaling, the service brings a huge advantage to your application in terms of resiliency, thanks to the automatic replacement of unhealthy servers.
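How the three settings interact can be summarized in one line of arithmetic: the pool size the controller aims for is the total load divided by the Target load, clamped between the Minimum and Maximum server number. A minimal sketch under that assumption (the function name and signature are illustrative, not part of the service’s API):

```python
import math

def desired_pool_size(total_load, target_load, minimum, maximum=None):
    """Pool size whose average per-node load approaches `target_load`,
    clamped to the configured Minimum/Maximum server number."""
    if maximum is None:
        # Auto scaling disabled: the pool is simply kept at the minimum.
        return minimum
    wanted = math.ceil(total_load / target_load)
    return min(max(wanted, minimum), maximum)
```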
tip: you can specify zero as the Minimum server number, but beware: all the servers will be deleted!
warning: by default, DO doesn’t let you run more than 5 droplets, as an anti-abuse measure. Make sure to increase this limit here if you set the Maximum server number above 5.
The next section lets you configure the droplets the service will start. As Image, you have to specify a previously created golden image. This image must be created using the DO web UI or API. Advanced users will automate golden image creation in a CI pipeline.
Example application: a pool of workers
This is one of the use cases that benefit the most from the service. During the day, the worker pool may sustain high load for just a few hours. With a traditional approach, you must provision enough servers to accommodate the peak load, which wastes resources (and money) because you won’t need them most of the time.
As an example: if, over 24 hours, you need 8 servers to accommodate a peak load lasting 12 hours in total (even non-consecutive), and 4 servers for the remaining 12 hours, AutoScaler will bring you a 25% cost saving by turning off the unneeded droplets.
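The arithmetic behind that figure, as a quick sanity check (billing is assumed to be proportional to server-hours):

```python
# Fixed fleet: 8 droplets running around the clock
fixed = 8 * 24                 # 192 server-hours per day

# Autoscaled fleet: 8 droplets for the 12 peak hours, 4 for the rest
scaled = 8 * 12 + 4 * 12       # 144 server-hours per day

saving = 1 - scaled / fixed
print(f"{saving:.0%}")         # 25%
```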
Example application: virtual machines based programming languages
If you use a language or framework that runs on a virtual machine based runtime, like Java (e.g. Akka) or Erlang (e.g. Elixir), deploying it on serverless platforms or container orchestrators can be cumbersome, to say the least. One of the problems is that these runtimes perform tasks similar to those of orchestrators like Kubernetes, and the overlapping functionality can cause problems that are hard to debug (imagine putting a car inside a bus). Putting these runtimes directly inside a golden image and using AutoScaler eliminates an unneeded abstraction layer, making your application more predictable and easier to debug, not to mention more resilient thanks to automatic server healing.
A follow-up article will explore advanced deployment techniques like blue-green deployment.
Support for other cloud providers will be added in the future. If you are a provider who wishes to give your users the ability to use the service, drop me a line (email@example.com).
The author is the creator of autoscaler.cloud.