DevOps & Microservices. Part 4: Service Mesh and Serverless
This is Part 4 of the series; check out Part 3 first. In this post I quickly explain what a service mesh and serverless are.
It is now common for organizations to run thousands of microservices in containers, managed by Kubernetes in a fully automated and resilient environment. As complexity grows, even using the Spring Cloud capabilities becomes difficult, since each service has to deal with errors, failures, latency, health checks, and so on, and this logic is duplicated in every service. Kubernetes introduced health checks and automatic recovery, but services still need to implement circuit breaker patterns, service discovery, API management, encryption, SSL management, ACLs, etc.
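To make concrete the kind of boilerplate every service ends up duplicating, here is a minimal circuit breaker sketch in Python; the class name, thresholds, and error handling are illustrative, not taken from any specific library:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    errors and fails fast until `reset_timeout` seconds have passed."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open, failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit again
        return result
```

Multiply this by retries, timeouts, service discovery, and TLS handling, in every service and every language, and the appeal of pushing it into the platform becomes clear.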
Service meshes such as Istio were introduced to overcome this problem by automating the complexity of service-to-service communication, letting developers focus on pure development. A service mesh is the connective tissue between your services that adds capabilities like traffic control, service discovery, load balancing, resilience, observability, and security. It allows applications to offload these capabilities from application-level libraries so developers can focus on differentiating business logic.
OpenShift already supports Istio. There are alternatives to Istio, but Istio is gaining popularity thanks to Google's support. The basic idea behind a service mesh is that each container gets a companion (sidecar) proxy that handles the service-to-service communication.
Istio provides all the necessary components for distributed tracing, monitoring, and API management. It also incorporates several add-ons such as Prometheus, Grafana, and service graphs. For hands-on experience, see this. Some of the things Istio provides are ACLs, SSL management, dark launches, canary deployments, error injection, stress testing, circuit breakers, advanced routing, error management, etc.
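As one example, a canary deployment in Istio is expressed declaratively with a `VirtualService` that splits traffic by weight. The sketch below assumes a service named `reviews` with subsets `v1` and `v2` already defined in a `DestinationRule`; the names and percentages are illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90   # 90% of traffic stays on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10   # 10% canary traffic to the new version
```

Shifting more traffic to `v2` is just a matter of editing the weights; no application code changes are required.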
Istio has gained extra attention since Google's adoption this year.
As I mentioned in one of my previous posts, you shouldn't rely entirely on Istio to manage the complexity of microservices. Instead, design a balanced architecture where, in certain cases, you use Event Sourcing + CQRS with Kafka rather than service-to-service calls; we will discuss this later.
As applications grew and mobile apps gained popularity, reactive programming took hold and applications became more responsive. Amazon introduced AWS Lambda in 2014, allowing code to respond to events in real time and shifting the paradigm from client-server to cloud-based, event-based information systems. Serverless programming has gained enormous popularity in the last three years, and now, thanks to open source projects such as Apache OpenWhisk, which is supported by OpenShift, developers can create serverless applications very easily.
Serverless platforms provide APIs that allow users to run code functions (also called actions) and return the results of each function. Serverless platforms provide HTTPS endpoints to allow the developer to retrieve function results. These endpoints can be used as inputs for other functions, thereby providing a sequence (or chaining) of related functions. Packages provide event feeds and anyone can create a new package for others to use. Triggers associated with those feeds fire when an event occurs, and developers can map actions (or functions) to triggers using rules.
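In OpenWhisk, a Python action is just a module with a `main` function that receives a parameter dict and returns a JSON-serializable dict. A minimal sketch (the file name and greeting logic are illustrative):

```python
# hello.py -- a minimal OpenWhisk Python action.
# OpenWhisk invokes main() with the action's parameters as a dict
# and serializes the returned dict as the JSON result.

def main(params):
    name = params.get("name", "stranger")
    return {"greeting": "Hello, %s!" % name}
```

The result dict is what the platform exposes through the HTTPS endpoint, and it is also what gets passed along when the action is used inside a sequence.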
The advantage of FaaS is that complex functions can be grouped and managed together, and they are reactive by nature. Each function can be written in a different language; for example, to calculate some value, a Java function can call a Python AI code snippet. OpenWhisk takes care of invoking each step and passing its results to the next step in the pipeline. In addition, each action can be independently updated without touching any other step in the sequence. Triggers and rules can be added to perform batch tasks.
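OpenWhisk wires sequences together on the platform side; conceptually, each action's result dict becomes the next action's parameter dict. A small Python sketch of that pipeline behavior (the action names and the `run_sequence` helper are hypothetical, for illustration only):

```python
def split_words(params):
    # First action: turn a text parameter into a list of words.
    return {"words": params["text"].split()}

def count_words(params):
    # Second action: consume the previous result and count the words.
    return {"count": len(params["words"])}

def run_sequence(actions, params):
    # Mimics an OpenWhisk sequence: feed each action's result
    # dict into the next action as its parameters.
    for action in actions:
        params = action(params)
    return params
```

Updating `count_words` would not require touching `split_words`, which is the independence between steps described above.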
The main advantages of FaaS are reduced cost, high scalability, productivity, ease of development, and a natural fit with the DevOps culture. The disadvantages are that, since functions are not long-running processes, performance can suffer from cold starts while the underlying containers spin up. They are also harder to monitor and secure than regular microservices.
I hope you enjoyed the fourth part. Feel free to leave a comment or share this post. Follow me for future posts on this topic.