Microservices architecture, implementation and monitoring with Spring Cloud, Netflix OSS and Docker

What, Why, Concepts, and Architecture.

Madhu Pathy
May 30, 2018

Microservice architecture has gained huge popularity over the last few years, so if you are not using it already, you are likely to work on such a project soon.

The main objective of a microservices implementation is to split the application into a separate service for each core and API function, with each service deployed independently on the cloud. Netflix was one of the first companies to adopt microservices, and it has built very interesting building blocks for managing and implementing a microservices platform.


Some cool quotes:

“Make JAR, not WAR”, the philosophy of @Pivotal/@Spring’s Josh Long

“Making decisions in system design is all about trade-offs, and microservice architectures give us lots of trade-offs to make” - Sam Newman

“The hardest choices require the strongest wills.” - Thanos :)


Now that you know this blog is the real deal and I mean business, let’s jump to the why.

Monolithic architectures: challenges

  • As the application and its code base grow, the build becomes bloated and developer productivity drops.

Microservices: advantages

  • Decentralized, Independent, Do one thing well, Polyglot, Black box.

Microservices: Challenges

Just because something is all the rage in the industry doesn’t mean it has no challenges. Here’s a list of some potential pain areas we ran into.

  • Initially, developing distributed systems can be complex.

  • Testing a microservices-based application can be tricky compared to a monolithic approach.

  • Deploying microservices can be complex initially.

General microservices architecture

Introduction

Let’s get a simple introduction to the technologies involved:

  • Spring Config Server → babysitter for configurations of all APIs

Config Server — Configuration Service

The configuration service is a vital component of any microservices architecture. Following the twelve-factor app methodology, configuration for your microservice applications should be stored in the environment and not in the project. The configuration service is essential because it handles the configurations for all of the services through a simple point-to-point service call to retrieve them. It babysits the configurations, keeping all configuration in one place and common configuration separate.

When the configuration service starts up, it references the path to those configuration files and begins to serve them up to the microservices that request them. Each microservice can have its configuration file tailored to the specifics of the environment it is running in. In doing this, the configuration is both externalized and centralized in one place that can be version-controlled and revised without having to restart a service to change a configuration. As shown in the screenshot below, we have a config file for each API, named after the API; essentially, this is the application.yaml of each API. We can keep all common configuration, like Hystrix settings, in an application.yaml file that is shared by all APIs.
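For reference, the configuration service itself is pointed at the location of these files through its own application.yml. Here is a minimal sketch, assuming a Git-backed configuration repository (the repository URL is a placeholder; a native file-system backend works similarly):

server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          # placeholder repo holding product-api.yml, order-api.yml, etc.
          uri: https://github.com/{YOUR-ORG}/microservices-config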

In order to build the Config Server, we have to import the dependency spring-cloud-config-server in pom.xml and enable it in the application's main class using the @EnableConfigServer annotation.
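A minimal sketch of what that main class could look like (class and package names here are illustrative, not taken from the original project):

package com.microservices.configserver;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

// Turns this Spring Boot application into the central configuration service
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}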

For any API to act as a config client, we add the spring-cloud-starter-config dependency to its pom.xml; no special annotation is needed. On startup it will try to contact the config server at spring.cloud.config.uri (typically set in bootstrap.yml so it is read before the rest of the application configuration). Here is an example of the configuration an API needs to fetch its configuration from the config server:

spring:
  cloud:
    config:
      uri: http://{HOST-NAME}:{CONFIG-SERVER-PORT}

The pom file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.microservices</groupId>
  <artifactId>config-server-api</artifactId>
  <version>1.0.1</version>
  <packaging>jar</packaging>
  <name>config-server-api</name>
  <description>poc for spring cloud microservices</description>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
  </parent>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <java.version>1.8</java.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework.cloud</groupId>
      <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
  </dependencies>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-dependencies</artifactId>
        <version>Dalston.RC1</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>
  <repositories>
    <repository>
      <id>spring-milestones</id>
      <name>Spring Milestones</name>
      <url>https://repo.spring.io/milestone</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
    </repository>
  </repositories>
</project>

Eureka Server — Registration Service

In order to build the Registry Server, we have to import the dependency spring-cloud-starter-eureka-server in pom.xml and enable it in the application's main class using the @EnableEurekaServer annotation. We could also let Eureka pick a port at runtime like the other microservices, but it's good practice to have a known port for Eureka. Here is the configuration of the Eureka server in the application.yml file:

server:
  port: ${PORT:8761}
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    enableSelfPreservation: false
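For completeness, here is a minimal sketch of the registry server's main class (class and package names are illustrative):

package com.microservices.registryserver;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Starts an embedded Eureka registry that the other services register with
@SpringBootApplication
@EnableEurekaServer
public class RegistryServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(RegistryServerApplication.class, args);
    }
}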

As shown in the screenshot of the EUREKA DASHBOARD below, you can see that the service registry has discovered multiple instances/microservices/APIs. Clicking the link in the status column gives access to the metadata of each instance, and the same URL can be used to check the REST APIs. You can also see the status and whether any service is in self-preservation mode.


Eureka Client — Discovery

As long as Spring Cloud Netflix and Eureka Core are on the classpath, any Spring Boot application annotated with @EnableDiscoveryClient and carrying the spring-cloud-starter-eureka dependency in pom.xml will try to contact a Eureka server at http://localhost:8761/eureka/ (the default value of eureka.client.serviceUrl.defaultZone).

Here is an example of configuration for any API so that Registry Server can discover the API:

eureka:
  enabled: true
  client:
    serviceUrl:
      defaultZone: http://{HOST-NAME}:8761/eureka/
    registerWithEureka: true
    fetchRegistry: true
  instance:
    preferIpAddress: true
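Once a service is registered, other services can call it by its logical name instead of a hard-coded host and port. Here is a minimal sketch using a load-balanced RestTemplate (this client code is not from the original project; the product-api service name comes from the routes used later, and the /products endpoint is purely illustrative):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RestClientConfig {

    // Resolves logical service names (e.g. "product-api") against Eureka
    // and load-balances requests across the registered instances
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

Any component can then inject this RestTemplate and call, for example, restTemplate.getForObject("http://product-api/products", String.class), where product-api is the name the service registered with.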

Netflix Zuul — API Gateway

Zuul is our gatekeeper to the outside world; it does not allow any unauthorized external requests to pass through. Zuul also provides a well-known entry point to the microservices in the system landscape. Using dynamically allocated ports is convenient for avoiding port conflicts and minimizing administration, but it of course makes things harder for any given service consumer.

In order to build the Zuul gateway, the microservice that handles the UI needs to be enabled with the Zuul configuration. We have to import the dependency spring-cloud-starter-zuul in pom.xml and enable it in the application's main class using the @EnableZuulProxy annotation.
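A minimal sketch of that gateway main class (names are illustrative):

package com.microservices.gateway;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Exposes a single entry point and proxies requests to the backend APIs
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}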

Here is the configuration for the Zuul gateway in the application.yml file. We don't have to mention the APIs' ports or host names because the APIs are already registered; the gateway will pick up the routes from the Registry Server.

#-- ZUUL CONFIGURATION TO CONNECT TO BACKEND SERVICES --#
zuul:
  retryable: true
  routes:
    product-api:
      path: '/product-api/**'
      serviceId: product-api
    order-api:
      path: '/order-api/**'
      serviceId: order-api
    employee-api:
      path: '/employee-api/**'
      serviceId: employee-api

By default Zuul sets up a route to every service it can find in Eureka. With the following configuration we limit the routes to only the services explicitly declared above:

zuul:
  ignoredServices: "*"
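For example, with the routes above, a request to http://{GATEWAY-HOST}:{GATEWAY-PORT}/product-api/... is forwarded to an instance of product-api looked up from Eureka, while calls to services not listed under zuul.routes are not routed at all.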

Ribbon — Load Balancing

Zuul uses Ribbon to look up available services and routes the external request to an appropriate service instance.

ribbon:
  MaxAutoRetries: 1
  MaxAutoRetriesNextServer: 2
  OkToRetryOnAllOperations: true

By default Ribbon uses round robin to pick among the available service instances. If we want to manage the server list ourselves, we can add the following, for example for product-api:

ribbon:
  eureka:
    enabled: false
  listOfServers: localhost:8000,localhost:9092,localhost:9999
  serverListRefreshInterval: 15000
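With this configuration, Ribbon stops using Eureka for discovery and instead rotates requests across the fixed list localhost:8000, localhost:9092 and localhost:9999, refreshing the server list every 15 seconds.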

Monitoring Server

The monitoring server has multiple components: Hystrix, Turbine and the Hystrix Dashboard.

In order to build the MONITORING-SERVER-API we have to import the dependencies spring-cloud-starter-hystrix, spring-cloud-starter-hystrix-dashboard and spring-cloud-starter-turbine in pom.xml and enable them in the application's main class using the @EnableTurbine and @EnableHystrixDashboard annotations.
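A minimal sketch of the monitoring server's main class (names are illustrative):

package com.microservices.monitoring;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;
import org.springframework.cloud.netflix.turbine.EnableTurbine;

// Aggregates the hystrix.stream of each API (Turbine) and serves the dashboard UI
@SpringBootApplication
@EnableTurbine
@EnableHystrixDashboard
public class MonitoringServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(MonitoringServerApplication.class, args);
    }
}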

Hystrix

It implements the circuit breaker pattern. In a microservice architecture it is common to have multiple layers of service calls. With Hystrix we can wrap each API call and define its failover routing. One of the main benefits of Hystrix is the set of metrics it gathers about each HystrixCommand.

Hystrix Dashboard

The Hystrix Dashboard displays the health of each circuit breaker in an efficient manner.

Turbine

Looking at an individual instance's Hystrix data is not very useful in terms of the overall health of the system. Turbine is an application that aggregates all of the relevant /hystrix.stream endpoints into a combined /turbine.stream for use in the Hystrix Dashboard. Individual instances are located via Eureka.

Here is what the application.yaml configuration looks like for the MONITORING-SERVER-API:

server:
  port: 8070
turbine:
  appConfig: product-api,order-api,employee-api

How to enable a circuit breaker for each API call?

For any microservice to enable the Hystrix circuit breaker, we need to import the dependency spring-cloud-starter-hystrix, enable it with the @EnableHystrix annotation, and wrap each REST call with @HystrixCommand as shown below. We can point @HystrixCommand at a custom fallback to be used when the circuit breaks.
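Here is a minimal sketch of such a wrapped call (the service class, the URL it calls and the fallback are illustrative, not taken from the original project):

import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class OrderService {

    private final RestTemplate restTemplate = new RestTemplate();

    // If the call fails or times out, Hystrix opens the circuit and
    // routes callers to the fallback method below
    @HystrixCommand(fallbackMethod = "defaultOrders")
    public String getOrders() {
        return restTemplate.getForObject("http://order-api/orders", String.class);
    }

    // Fallback used while order-api is unavailable
    public String defaultOrders() {
        return "[]";
    }
}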

Here is what the application.yaml configuration looks like for a microservice to register with the monitoring server:

hystrix:
  command:
    default:
      circuitBreaker:
        enabled: false
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 210000
        timeout:
          enabled: false

How to Monitor health?

Once all APIs are configured, we can see the MONITORING-SERVER-API as one of the discovered services in the EUREKA DASHBOARD.

We could set the port to 0 in application.yaml for the MONITORING-SERVER-API and have it pick a port dynamically, but it's good practice to have fixed ports for the Registry Server and Monitoring Server. We set port 8070, so we can either click the MONITORING-SERVER-API link in the EUREKA DASHBOARD and append /hystrix, or directly open the link:

http://{host-name}:8070/hystrix

and the dashboard below will show up.

Hystrix Dashboard

As shown in the screenshot above, we need to enter the cluster details; this also comes in handy for monitoring queues and topics. We can directly use the link below:

http://{HOST-NAME}:8070/hystrix/monitor?stream=http://{HOST-NAME}:8070/turbine.stream

This gives access to the stream of all REST APIs in use in the HYSTRIX DASHBOARD, with complete health details for each. Below is a sample of how to read the details from the dashboard.


How to update configurations for cloud deployment?

It's good to have a known port for Eureka, so that we can easily check the status of the various APIs. There are multiple ways to handle this for a cloud deployment. We can hard-code the IP of the instance where Eureka is deployed and separate it by profile name.

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
---
eureka:
  client:
    serviceUrl:
      defaultZone: http://172.0.0.1:8761/eureka/

The bit of configuration under the --- delimiter is for when the application is run under the cloud Spring profile. It's easy to set a profile using the SPRING_PROFILES_ACTIVE environment variable; we can configure Cloud Foundry environment variables in manifest.yml, on Cloud Foundry Lattice, or in the Dockerfile. If the hostname can't be determined by Java, then the IP address is sent to Eureka. The only explicit way of setting the hostname is by using eureka.instance.hostname. We can also set the hostname at runtime using an environment variable, for example eureka.instance.hostname=${HOST_NAME}.

eureka:
  client:
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

How to add more APIs?

Step 1: Link to configuration service

Step 2: Link to registry service

Step 3: Link to api gateway

Step 4: Estimate bandwidth and number of containers required for load balancing and update ribbon config

Step 5: Link to hystrix service for circuit breaker.


Docker

Docker is a container management toolkit allowing users to publish container images and consume those published by others. A Docker image is a recipe for running a containerized process.

Let's dockerize the APIs we built above. In each API we add a Dockerfile, as shown in the example below for config-server-api.

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Here is a sample Dockerfile we can use for all our APIs

FROM java:8
MAINTAINER yourName
# The project JAR file is added to the container as app.jar and then executed in the ENTRYPOINT
ADD /target/config-server-api.jar app.jar
# RUN command to "touch" the jar file so that it has a file modification time
RUN bash -c 'touch /app.jar'
# To reduce Tomcat startup time we add a system property pointing to "/dev/urandom" as a source of entropy
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
# The port on which our application is listening
EXPOSE 8888

The Docker commands can now build an image for the API from the above Dockerfile.

Docker doesn't run directly on Mac or Windows systems. Instead, to run Docker containers, you need to start up a Linux virtual machine using VirtualBox, then run the Docker containers inside this virtual machine. Fortunately, the vast majority of this is managed by Docker Toolbox.

Here are some basic commands to build a Docker image and push it to Docker Hub, the repository from which it can be pulled on any cloud server for deployment.

# build the jar in the target folder with the Maven build script
mvn clean install
# build the docker image locally
docker build -t {dockerHubRepo}:{imageName} .
# push the docker image to your hub
docker push {dockerHubRepo}:{imageName}
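Once pushed, the image can be pulled and run on any Docker host. As a quick local sanity check, something like docker run -p 8888:8888 {dockerHubRepo}:{imageName} should bring up the container on port 8888 (the port mapping here assumes the config-server-api image from the Dockerfile above).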

Once the Docker images are built, we can use docker-compose or Docker Swarm to orchestrate and deploy the various API containers. We could also use Kubernetes. For simplicity, we will use docker-compose.

With docker-compose, we define a multi-container application in a single file, then spin up the application with a single command that does everything needed to get it running. Here is a sample docker-compose file that we use for these containers:

version: '3'
services:
  config-server-api:
    image: {dockerHubRepo}:{imageName}
    ports:
      - 8888:8888
  registry-server-api:
    image: {dockerHubRepo}:{imageName}
    ports:
      - 8761:8761
    links:
      - config-server-api
  monitoring-server-api:
    image: {dockerHubRepo}:{imageName}
    ports:
      - 8070:8070
    links:
      - config-server-api
      - registry-server-api
  product-api:
    image: {dockerHubRepo}:{imageName}
    links:
      - config-server-api
      - registry-server-api
  order-api:
    image: {dockerHubRepo}:{imageName}
    links:
      - config-server-api
      - registry-server-api
  employee-api:
    image: {dockerHubRepo}:{imageName}
    links:
      - config-server-api
      - registry-server-api
  frontend-api:
    image: {dockerHubRepo}:{imageName}
    ports:
      - 8080:8080
    links:
      - config-server-api
      - registry-server-api

For more information about the Compose file, see the Compose file reference.

Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

Run docker-compose using the command docker-compose up -d, and Compose starts and runs your entire app.


Thank you for reading! I hope you enjoyed it.
