The evolution of refactoring business logic into meaningful units progressed from monoliths to microservices, and from microservices to functions. A monolith is a single large application that keeps all business logic in one place and deploys all related artifacts to one server. When monoliths were first split into separate service units or components, drawbacks appeared: network slowness between distributed server farms, CPU bottlenecks on dependent components, and heavyweight payloads, since message protocols were based on XML and SOAP specifications. Over time these issues were resolved by advances in technology and infrastructure, but it became clear that a great deal of message passing was needed between the split-up monolith pieces. Microservices architecture was introduced to overcome this situation.
Microservices architecture has its own drawbacks: network communication between components is complex, and fulfilling a single piece of business logic can require many messages between services. Because of this, a microservices deployment cannot be trimmed down to the bare minimum of server resources actually needed for a given computation, and smooth scaling cannot be supported without doubling server resources when load fluctuates over time. To overcome these issues, function-based concepts became popular. In this approach, business logic is broken down into individual functions that can be hosted independently, without depending on server configuration or state. API gateways are commonly used to compose functions into consumable APIs.
What is Serverless?
Serverless is an architecture that evolved from microservices architecture with the essence of functional programming. In the serverless model, the developer does not need to worry about servers: configuring, maintaining, and updating them is none of the developer's business. This does not mean that no servers are involved in deploying the application, only that developers should not have to think about them. Solutions are composed of third-party 'Backend as a Service' (BaaS) components together with custom code written as pure functions, running in transient containers in environments like AWS Lambda.
DevOps is still needed, at a smaller scale and with reduced responsibility: although there are no servers to start and stop for serverless business logic, developers still need to automate deployment and monitoring at minimal cost.
This architecture consists of two main approaches:
- BaaS — Backend as a service (Example: Auth0, Firebase)
- FaaS — Function as a Service (Example: functional computation units hosted on AWS Lambda)
With BaaS, we consume third-party backend services as needed to fulfill business logic. Instead of maintaining different home-grown microservices for common concerns such as authentication handling, we rely on third-party vendors to provide that functionality.
With FaaS, developers still need to write some amount of business logic and the wire-up logic between functions. Unlike traditional code, this logic must be written as stateless pure functions, which are hosted in transient containers that stay alive only until the function invocation is fulfilled. A piece of code uploaded to AWS Lambda or a similar environment is executed in response to a given event, for example when new data has been processed by a BaaS component and a response with a payload arrives.
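A stateless pure function in the FaaS style can be sketched as a minimal AWS Lambda-like handler in Python. The event shape here (a simple dict with a `name` field) is a hypothetical example; real event payloads depend on the trigger.

```python
import json

def handler(event, context):
    # All input arrives through the event; no state is kept between calls,
    # so the same event always produces the same response (a pure function).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local invocation with a fake event and no context object.
    print(handler({"name": "serverless"}, None))
```

Because the handler holds no state, the platform is free to run it in a fresh transient container for any invocation and discard the container afterwards.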
Why Serverless is important
Traditionally, every time a server gets a request from a client, it executes the relevant functionality to produce the response, so the server must be up, running, and actively listening for requests even when no one is using the application. In the serverless architecture it is not necessary to keep a server active to process a single request: when the primary application receives a request, a trigger notifies the cloud vendor to grab the code and execute the relevant function in a transient container as needed (function on demand).
For the same reason, operational cost is reduced: no additional server administration, maintenance, or monitoring of deployed microservice components is required, and you are charged only for the computation itself. Performance capacity is also defined in more flexible terms that can vary with the load, rather than by a fixed hosting environment size or number of servers.
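The "charge only for the computation" model can be made concrete with a small cost estimate. FaaS vendors typically bill by memory allocated multiplied by execution time (GB-seconds); the unit price below is an assumed illustrative figure, not a quoted vendor rate.

```python
# Assumed illustrative price per GB-second, in USD (not a real vendor quote).
PRICE_PER_GB_SECOND = 0.0000166667

def monthly_cost(invocations, duration_ms, memory_mb):
    """Cost = invocations x duration (s) x memory (GB) x unit price."""
    gb_seconds = invocations * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

# One million invocations of a 200 ms function with 512 MB of memory:
cost = monthly_cost(1_000_000, 200, 512)
print(f"${cost:.2f}")  # -> $1.67
```

With zero invocations the cost is zero, which is the key contrast with keeping a server running around the clock.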
The following diagram illustrates the process of generating the response.
- The user sends a request to an address handled by a cloud provider; it is received by the gateway.
- Based on the message, the cloud service finds the relevant package (function) to produce the response.
- The selected package is loaded into a transient container (for example, a Docker container).
- The container executes the function with the given data and outputs the response to the API gateway.
- The response is sent back to the user.
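The steps above can be sketched as a toy simulation: a gateway looks up the function package registered for a route, "loads" it into a fresh container, executes it, and returns the response. The route and function names are illustrative only.

```python
# Registry mapping routes to function packages (step 2's lookup table).
REGISTRY = {
    "/hello": lambda event: {"status": 200, "body": f"Hello, {event['user']}"},
}

def gateway(route, event):
    func = REGISTRY.get(route)            # 2. find the relevant package
    if func is None:
        return {"status": 404, "body": "no function for route"}
    container = {"func": func}            # 3. load it into a transient container
    response = container["func"](event)   # 4. execute with the given data
    del container                         # container is discarded after the call
    return response                       # 5. response goes back to the user

print(gateway("/hello", {"user": "Alice"}))  # -> {'status': 200, 'body': 'Hello, Alice'}
```

The container here is just a dict standing in for a real Docker container, but the lifecycle is the point: it exists only for the duration of one invocation.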
The main advantage of this architecture is that you pay only for the time the function executes, usually measured in fractions of a second. When load is high, the cloud loads additional instances to handle it, and if one instance fails another starts automatically, without any additional configuration or any intervention from the consumer. The advantages include:
- Fast scalability, with function-level scale-up and scale-down capabilities.
- High availability
- Efficient use of resources
- Reduced operational costs; infrastructure is maintained by the vendor
- Focus on business, not on infrastructure.
- System security is outsourced; the vendor maintains security.
- Continuous delivery: deploying means uploading a new version of the application package, typically via an automation script.
- Microservices friendly: for example, developers can use any language, or multiple languages, and work concurrently.
- The cost model is startup friendly
- Significantly reduced 'time to market', since development and maintenance effort is lower
- Optimized for inconsistent user loads: new instances are added when load is high.
Serverless also has drawbacks:
- Higher latency may occur if the service is not used with a proper configuration.
- Constraints imposed by the vendor. For example, in AWS you cannot run a Lambda function for more than 5 minutes.
- Hidden inefficiencies: a function may be deployed with poor performance and go unnoticed because of limited operational monitoring.
- Vendor dependency: exposing our business to a third party means their downtimes, security issues, and performance issues become ours.
- Debugging difficulties when debugging an external service: serverless platforms are not open source, so you cannot run them in your local environment; only logs are visible to the developer.
- Atomic deployments are difficult to perform.
- Deployment, packaging, and versioning complexities, especially when supporting zero-downtime and rollback options.
- Uncertainties caused by depending on externally operated services.
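The execution time limit mentioned above can be handled defensively inside the function itself. AWS Lambda passes handlers a context object exposing `get_remaining_time_in_millis()`; the sketch below uses a fake context so it can run locally, and the per-item work is a placeholder.

```python
import time

SAFETY_MARGIN_MS = 1000  # stop early, leaving time to save partial results

def process_items(items, context):
    done = []
    for item in items:
        if context.get_remaining_time_in_millis() < SAFETY_MARGIN_MS:
            break  # budget nearly exhausted: return partial progress
        done.append(item * 2)  # placeholder for real per-item work
    return done

class FakeContext:
    """Stands in for the Lambda context object when running locally."""
    def __init__(self, budget_ms):
        self.deadline = time.monotonic() + budget_ms / 1000
    def get_remaining_time_in_millis(self):
        return max(0, (self.deadline - time.monotonic()) * 1000)

print(process_items([1, 2, 3], FakeContext(budget_ms=5000)))  # -> [2, 4, 6]
```

Checkpointing and resuming via a queue would be the usual follow-up for work that genuinely exceeds the limit, but that is outside this sketch.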