Developing Spring Cloud microservices with a central environment while avoiding service collisions

Dan Erez
5 min read · Feb 4, 2019


How can developers work simultaneously against the same central Spring Cloud environment without interfering with each other?

Developing software based on a microservices architecture is extremely easy when you use Spring Boot and Spring Cloud. Just throw in a few lines of code and you can have a microservice up and running. But how do you develop a real-world application in such an environment? In theory each microservice is isolated and can be developed alone, but in practice this is often not the case. Developing and testing your service in the context of the application that uses it requires more than just your own microservice to be up and running. So, how can one conveniently develop in an environment with many microservices?

Well, if you only need two or three services, you can run them all locally, so setting up such an environment is not a big deal. But if your application is composed of dozens of services (a common case for large apps), then starting such an environment, keeping it up to date, and so on can turn into a real headache. Going to the other extreme and running only one microservice locally with the rest on some central server(s) is also a problem. If, for example, I’m developing the ‘MyService’ microservice, and the central environment also has a ‘MyService’ instance running (all developers use this environment, so every service runs there), which ‘MyService’ will be used, given that the common discovery service holds references to both? When numerous developers work against such an environment, the problem only gets worse.

We have found an elegant way to enjoy both worlds: each developer runs locally only the service or two that he or she is currently working on, while all other services run on a central environment, and we manage to avoid collisions and confusion between the instances of that service!

How does this magic happen? Well, the root of our initial problem is that the service a developer is working on and its matching instance on the central environment (and on other developers’ machines) are registered under the same name in the discovery service (we use Eureka, by the way). What if each instance registered itself under a different name and was still usable by any other service that needs it? It is possible, but it’s a bit tricky. These are the steps needed:

1. For each service (or in a shared infrastructure jar that all services use) we define a @Configuration bean (RemoteEurekaConfig) to tweak the registration to Eureka. In this class we return an EurekaInstanceConfigBean that overrides the superclass behavior by adding the host name to the registered service. This way ‘MyService’ will be registered as ‘MyHostName.MyService’, so my fellow developers and I each get a unique name for this service, allowing us to work on it simultaneously (without colliding with the ‘MyService’ instance on the central environment, which others can keep using while they develop other services). We decide whether we want this behavior by setting an application property to true (dev.discovery in my example, bound to the devDiscovery field). You can also use the current profile (dev/prod) or any other flag to decide whether your service should change the default discovery registration. In addition, you can set newAppName to whatever unique value you desire (developer name, version you work on, etc.) as long as it is unique among developers and meaningful enough for you.

Listing 1 — RemoteEurekaConfig

@Value("${dev.discovery:false}")
private Boolean devDiscovery;

// Assumed to be bound elsewhere in this class: the service name and port
@Value("${spring.application.name}")
private String appName;

@Value("${server.port}")
private int port;

@Bean
@Profile("development")
public EurekaInstanceConfigBean eurekaInstanceConfigBean(final InetUtils inetUtils) {
    // Prefix the registered name with the local host name, e.g. 'MyHostName.MyService'
    String newAppName = getHostname() + "." + appName;
    EurekaInstanceConfigBean config = new EurekaInstanceConfigBean(inetUtils) {
        @Override
        public void setEnvironment(Environment environment) {
            super.setEnvironment(environment);
            // Only change the registration when dev discovery is explicitly enabled
            if (Boolean.TRUE.equals(devDiscovery)) {
                setAppname(newAppName);
                setVirtualHostName(newAppName);
                setSecureVirtualHostName(newAppName);
            }
        }
    };
    config.setNonSecurePort(port);
    // getHostname() and getHostAddress() are small helpers in this class that
    // resolve the local machine's host name and IP address
    config.setIpAddress(getHostAddress());
    config.getMetadataMap().put("instanceId", config.getHostname() + ":" + config.getAppname() + ":" + port);
    return config;
}
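For completeness, here is roughly what the relevant properties could look like on a developer’s machine. The dev.discovery key matches the @Value in Listing 1; spring.application.name and server.port are the standard Spring Boot keys the appName and port fields above are assumed to be bound to, and the values shown are purely illustrative:

# application.properties on a developer machine (illustrative values)
spring.application.name=MyService
server.port=8081
# Enable the host-prefixed Eureka registration from Listing 1 (development only)
dev.discovery=true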

2. Now, the fact that our service is registered on the remote Eureka with a unique name is not enough, since our Gateway will still route UI (or other) requests to instances of ‘MyService’ and not ‘MyHostName.MyService’, because those are the routes defined for the Gateway. Yes, we could modify these routes in the application.properties file whenever we run a service locally, but that is error prone and tedious. We can do better: we can handle this dynamically by defining a new bean, DynamicRouting, which at initialization time goes over all registered services and updates the local routes for the services that run locally. How does it know which services run locally? Easy: such a service will have our unique prefix, of course :). We iterate over all registered services since we might have more than one service running locally. Of course, with this solution locally running services should be started before the Gateway. If more dynamic behavior is required, we can apply this logic every X seconds to always be up to date (although I find that overkill in most cases).

Listing 2 — DynamicRouting

@Autowired
private ZuulProperties zuulProperties;

@Autowired
private DiscoveryClient discoveryClient;

@PostConstruct
public void init() {
    // Get all services registered in Eureka
    List<String> allServices = discoveryClient.getServices();
    // getHostname() is the same local-host-name helper used in Listing 1
    String prefix = getHostname() + ".";
    for (String service : allServices) {
        // If a service starts with my designated prefix, replace the original route to it
        if (service.startsWith(prefix)) {
            String originalService = service.substring(service.indexOf(".") + 1);
            for (ZuulProperties.ZuulRoute route : zuulProperties.getRoutes().values()) {
                // Eureka service names are case-insensitive; routes defined with a URL
                // instead of a serviceId are skipped by the null-safe comparison
                if (originalService.equalsIgnoreCase(route.getServiceId())) {
                    // Change the original route to 'my' host-prefixed service id
                    route.setServiceId(service);
                }
            }
        }
    }
}
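If you do want the periodic refresh mentioned above, a minimal sketch (assuming scheduling is enabled with @EnableScheduling; the 30-second interval and method name are illustrative) could simply re-run the same logic:

// Re-apply the prefix-based route replacement from Listing 2 every 30 seconds
@Scheduled(fixedDelay = 30000)
public void refreshRoutes() {
    init();
}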

3. We are almost done! One last thing: if you call services over REST directly from other services rather than through the Gateway, you’ll have to take care of that too. If you use Spring’s RestTemplate, for example, you’ll have to wrap it and apply the same logic as above (a hedged sketch follows the list below), meaning:

a. Decide whether this is a service call or a call to an actual URL (and do nothing for the latter). For example, a service call will look like http://MyService/sth/1.

b. Check whether the service call targets a locally running service by querying Eureka as we did above and checking the prefixes.

c. If so, alter the host part of the URL to the locally running service name, as we did before. For example, http://myHostName.MyService/sth/1, and the local service will be called!
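Here is a minimal sketch of that idea as a ClientHttpRequestInterceptor registered on the RestTemplate. It is not our exact code: the class name, constructor arguments, and Eureka lookup are illustrative, and it assumes the interceptor runs before the load-balancer interceptor resolves the service name.

import java.io.IOException;
import java.net.URI;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.http.client.support.HttpRequestWrapper;
import org.springframework.web.util.UriComponentsBuilder;

// Illustrative interceptor: rewrites http://MyService/... to http://myHostName.MyService/...
// whenever a host-prefixed instance is registered in Eureka (steps a-c above)
public class LocalServiceRewriteInterceptor implements ClientHttpRequestInterceptor {

    private final DiscoveryClient discoveryClient;
    private final String prefix; // e.g. "myHostName."

    public LocalServiceRewriteInterceptor(DiscoveryClient discoveryClient, String hostname) {
        this.discoveryClient = discoveryClient;
        this.prefix = hostname + ".";
    }

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                                        ClientHttpRequestExecution execution) throws IOException {
        // Step a: for a service call the host part is the service name;
        // for a real URL the prefixed name will simply not be registered
        String host = request.getURI().getHost();
        String localName = prefix + host;
        // Step b: is a locally running, host-prefixed instance registered in Eureka?
        boolean hasLocalInstance = discoveryClient.getServices().stream()
                .anyMatch(s -> s.equalsIgnoreCase(localName));
        if (!hasLocalInstance) {
            return execution.execute(request, body);
        }
        // Step c: swap the host part so the locally running service is called
        URI rewritten = UriComponentsBuilder.fromUri(request.getURI()).host(localName).build(true).toUri();
        HttpRequest wrapped = new HttpRequestWrapper(request) {
            @Override
            public URI getURI() {
                return rewritten;
            }
        };
        return execution.execute(wrapped, body);
    }
}

Registering it is a one-liner, for example restTemplate.getInterceptors().add(0, new LocalServiceRewriteInterceptor(discoveryClient, getHostname())); index 0 is simply one way to make sure it runs before any load-balancer interceptor.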

That’s it! You can now have a system with hundreds of microservices happily running on one central environment, while developers work with only one service running locally, saving resources and time and always staying in sync automatically.

Two things to remember:

- The relevant beans shown here should be annotated with @Profile("development") and should not be active outside development, to avoid confusion.

- Since we do full-stack development at my workplace, we always have the Gateway running locally. If you only need the backend, you do not have to run the Gateway locally; just use Swagger or Postman (or similar) to call the service APIs. If you do want to use the application’s UI without running the Gateway locally, that can also be done by adding some info to the application’s URL (the local services prefix and the generic names of the locally running services). Then the UI can easily replace the addresses of REST calls with the local service names (as we did on the server) and they will be routed to the developer’s machine.
