Moving applications to the cloud is a desirable choice for companies looking for elasticity, resilience, and speed while optimizing costs.
Let’s say you have considered your goals and realized that the cloud might help your business. You decide to build future systems for the cloud, but you also want to start migrating your existing traditional applications. It could be tempting to move them the way they are now, an approach called “lift and shift”. However, you would not take advantage of the cloud environment in that way. A better approach would be going cloud native.
Spring is a popular Java framework to build any kind of application thanks to a vast catalog of libraries and tools. This article is for software engineers and architects working with Spring and interested in exploring how to build cloud native applications with it.
I’ll start by defining what cloud native means and what the main properties of this approach are. Then, I’ll guide you through several steps of the migration process, highlighting what you should change in your Spring application, which tools you can use, and the reasoning behind certain choices. In particular, I’ll cover how to make an application self-contained by using an embedded web server and JAR packaging, how to use externalized configuration, and how to improve its observability in terms of logs, metrics, and health checks.
There is more to a cloud native app, but this article focuses on a few selected aspects. At Systematic, I took part in migrating our traditional applications towards a cloud native path, so I’ll share some insights about the challenges we had and how we solved them.
Defining cloud native
Cloud native is one of those buzzwords that populate the application development field nowadays. Cloud native applications are distributed systems that leverage the cloud characteristics, but what does that mean?
Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
That is the definition provided by the Cloud Native Computing Foundation (CNCF), from which I identify three main groups of information I like to call The Three P’s of Cloud Native:
- Place. Cloud native applications run in dynamic environments such as public, private, and hybrid clouds.
- Properties. Cloud native applications are scalable, loosely coupled, resilient, manageable, and observable.
- Practices. Cloud native development is combined with robust automation to make high-impact changes frequently and predictably with minimal toil.
The cloud is the place where cloud native applications don’t only run, but thrive, taking advantage of the characteristics of the environment like elasticity and on-demand access to computing resources.
Cloud native development is usually combined with practices that enable speed, predictability, and stability. Automation, continuous delivery processes, and a DevOps mindset contribute to the overall success of “going cloud native”.
In this article, I’ll focus on the application development part, aimed at building systems with specific properties.
- Scalability. Cloud native apps dynamically scale upon increasing or decreasing workloads.
- Loose coupling. Cloud native apps are made up of parts that have as little knowledge of each other as possible.
- Resilience. Cloud native apps keep providing a level of service in the face of adversities.
- Observability. Cloud native apps provide relevant outputs from which it’s possible to infer the system’s internal state.
- Manageability. Cloud native apps can be controlled and adapted from the outside easily and efficiently.
With these properties in mind, I’ll guide you through three key aspects of migrating a traditional Spring application to cloud native.
Embedded web servers and JARs full of wonders
Traditionally, in the Java ecosystem, web applications are packaged as WAR artifacts and deployed on servers like Tomcat, WildFly, or Jetty. This approach has a few drawbacks. For example, to optimize costs (application servers are not cheap to maintain), multiple applications would be deployed on the same server, creating coupling among them and preventing them from evolving independently. The very fact that a server is required in the deployment environment results in a hard dependency for the app, limiting its portability.
Instead, cloud native applications are self-contained and don’t constrain where they are deployed, except for the runtime environment. The first main difference is that they are packaged as JAR artifacts rather than WAR: standalone, regular JARs that only need a JVM to run. Josh Long, a Spring developer advocate, always says: “Make JAR, not WAR”. A cloud native app would still need a web server, though. The solution is to embed the web server inside the application itself to make it fully self-contained.
From WARs to standalone JARs
If you have a traditional Spring application packaged as a WAR and deployed on an external web server, it’s time to change. Spring Boot provides you with all you need to do so. First, it supports both WAR and JAR packaging, making the migration smoother. Second, it comes bundled with support for an embedded web server. By default, it configures a Tomcat instance automatically, but you can easily swap it for Undertow (a natural choice if you come from WildFly, which uses it internally), Jetty, or Netty.
For a Servlet-based server like Tomcat, add a dependency on Spring Web MVC in your build.gradle file (or pom.xml if you use Maven).
For a reactive server like Netty, add a dependency on Spring WebFlux.
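For example, with Gradle the two starter dependencies look like this (pick the one matching your stack; with Maven, the coordinates are the same):

```gradle
dependencies {
    // Servlet stack: Spring Web MVC with embedded Tomcat
    implementation 'org.springframework.boot:spring-boot-starter-web'

    // Or the reactive stack: Spring WebFlux with embedded Netty
    // implementation 'org.springframework.boot:spring-boot-starter-webflux'
}
```

Note that no version is specified: the Spring Boot plugin resolves a compatible version for each starter.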
One of the convenient features of Spring Boot is dependency management: using starter dependencies like the previous ones makes it easier to manage versions for all related libraries.
In our project at Systematic, we used to have Spring applications packaged as WARs and deployed on an external Tomcat. By making the changes I’ve just described, we gained several benefits. For example, we took back control over the web server, which was previously managed by another team. Now, we can configure the embedded server individually and independently for each application. That is a great achievement!
Customizing the embedded web server
In a traditional Spring application, you would configure a server like Tomcat in files such as context.xml. On the other hand, Spring Boot provides you with a vast and convenient collection of properties you can use to configure the embedded web server (in your application.properties files). For more advanced customizations, you can define a WebServerFactoryCustomizer bean. One of the most common scenarios for using such a bean is enabling HTTPS and redirecting all the HTTP traffic to the secure connection.
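As a sketch of that HTTP-to-HTTPS redirect scenario, a customizer bean for the embedded Tomcat could look like this (ports are illustrative, and it assumes HTTPS itself is already configured via the server.ssl.* properties):

```java
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.descriptor.web.SecurityCollection;
import org.apache.tomcat.util.descriptor.web.SecurityConstraint;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HttpsRedirectConfig {

    @Bean
    public WebServerFactoryCustomizer<TomcatServletWebServerFactory> httpsRedirect() {
        return factory -> {
            // Mark all requests as confidential so Tomcat redirects them to HTTPS
            factory.addContextCustomizers(context -> {
                SecurityConstraint constraint = new SecurityConstraint();
                constraint.setUserConstraint("CONFIDENTIAL");
                SecurityCollection collection = new SecurityCollection();
                collection.addPattern("/*");
                constraint.addCollection(collection);
                context.addConstraint(constraint);
            });
            // Extra plain-HTTP connector that redirects to the HTTPS port
            Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
            connector.setScheme("http");
            connector.setPort(8080);
            connector.setRedirectPort(8443);
            factory.addAdditionalTomcatConnectors(connector);
        };
    }
}
```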
For example, you can configure the server port and the Tomcat thread pool in application.yml in this way:
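A minimal example might look as follows (the values are illustrative; the server.tomcat.threads.* properties are available since Spring Boot 2.3, while earlier versions use server.tomcat.max-threads):

```yaml
server:
  port: 9001
  tomcat:
    threads:
      max: 50        # maximum number of request worker threads
      min-spare: 5   # minimum number of idle threads kept alive
```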
After moving to an embedded web server, we no longer had to manage and support an extra component, nor coordinate with the team responsible for the server or negotiate configuration changes compatible with the other applications deployed on it. Our Spring Boot applications are now self-contained, making them portable across any environment with a JVM and ensuring more reliable and reproducible tests across environments.
Furthermore, handling Spring Boot applications as JARs makes it straightforward to containerize them with Docker. That’s what we did. Spring Boot 2.3 makes it even easier by providing built-in functionality to package applications as Docker images.
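For example, the Spring Boot 2.3 build plugins can produce an OCI image via Cloud Native Buildpacks without you writing a Dockerfile (the commands assume the standard Gradle or Maven wrapper scripts in your project):

```shell
# Gradle: build a container image for the application
./gradlew bootBuildImage

# Maven equivalent
./mvnw spring-boot:build-image
```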
Externalized configuration
The Twelve-Factor Methodology, a good starting point for building cloud native applications, defines configuration as everything that is likely to change between deployments. For example:
- resource handles to databases, messaging systems, and cache stores;
- URLs to backing services like REST APIs;
- credentials to access data stores and services;
- feature flags.
Some traditional applications keep the configuration in the same codebase as the application, but that’s not ideal. Since the configuration represents what varies between deployments, chances are you would end up storing tons of different values for each environment and forcing yourself to make a new build from the codebase every time a configuration value changes. Furthermore, credentials should never be stored in plain text. If you made your codebase publicly available now, would you risk exposing any sensitive data? If the answer is yes, you should redesign how you handle configuration.
Configuration in the environment
Spring lets you use properties to configure your application. They are key/value pairs that can originate from different sources. You can leverage the Spring Environment abstraction to access, from a single interface, all the properties of the environment where the application is deployed, including JVM properties and environment variables. On top of that, Spring Boot supports even more property sources, each with a different priority, and defines mechanisms to overwrite configuration per deployment.
What you want to do is define sensible default values for development in your application.properties (like I showed in the previous section). Then you can leverage Spring Boot to change the configuration depending on the environment. For example, if you wanted to change the value of the server.port property, you could use the JVM system property -Dserver.port, which takes precedence over property files.
The Twelve-Factor Methodology recommends storing configuration as environment variables, and Spring Boot supports them too. Continuing the same example, you could define a SERVER_PORT environment variable, and Spring Boot would recognize it and map it to the server.port property.
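Spring Boot’s relaxed binding derives the property key from the environment variable name. The rule can be roughly sketched as follows (an illustrative approximation, not Spring’s actual implementation):

```java
public class RelaxedBindingSketch {

    // Roughly: lowercase the name and turn underscores into dots,
    // so SERVER_PORT becomes server.port
    static String toPropertyKey(String envVar) {
        return envVar.toLowerCase().replace('_', '.');
    }

    public static void main(String[] args) {
        System.out.println(toPropertyKey("SERVER_PORT")); // prints "server.port"
    }
}
```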
In the end, Spring Boot will use this precedence list to infer the value of the server.port property:
1. JVM system properties (-Dserver.port)
2. Environment variables (SERVER_PORT)
3. Property files (application.properties)
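For example, assuming the application is packaged as a hypothetical app.jar, you could override the default port at deployment time in either way:

```shell
# JVM system property: highest precedence
java -Dserver.port=9001 -jar app.jar

# Environment variable: overrides property files, loses to JVM properties
SERVER_PORT=9001 java -jar app.jar
```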
In our project, we have extensively used environment variables to configure applications, but that’s not enough. The methodology doesn’t consider some aspects of configuration. For example, if you want to define a config value in the environment, you first need to store it somewhere, possibly tracked in a version control system for traceability and auditing. A separate codebase sounds fine, but what about credentials? Spring properties, by default, do not support encryption. Finally, how do you support changing configuration at runtime without restarting the application?
Configuration in a centralized server
Spring Cloud Config addresses all the concerns described previously through centralizing configuration. Multiple applications would fetch their configuration from a central server that can use different strategies to store config properties. For example, you can use a Git repository to store properties for your many applications in a structured way and use HashiCorp Vault next to it to store secrets. Spring Cloud Config would gather properties from the different sources and serve them to the applications.
It also supports changing configuration at runtime, sending a RefreshScopeRefreshedEvent that your application beans can listen for and use to trigger a refresh, loading the new configuration without restarting the whole application.
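For instance, a bean could react to the event to rebuild derived state when the configuration changes. A minimal sketch (the event class ships with Spring Cloud Context; the listener body is illustrative):

```java
import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

@Component
public class ConfigRefreshListener {

    // Called after the refresh scope has reloaded the configuration
    @EventListener
    public void onRefresh(RefreshScopeRefreshedEvent event) {
        // e.g. re-read properties, rebuild caches, reconnect clients
    }
}
```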
Configuration in Kubernetes
If you deploy your applications on Kubernetes, you can take advantage of its built-in functionality to manage configuration (with ConfigMaps), credentials (with Secrets), and automatic application refresh on configuration change. You can even use Spring Cloud Kubernetes to better integrate the two systems and catch the refresh events sent by Kubernetes, just as you would with the configuration server.
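For example, a ConfigMap could carry the same property override shown earlier (the names here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config     # hypothetical name
data:
  SERVER_PORT: "9001"    # injected as an environment variable into the pod
```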
Observability
Observability is about inferring the application state from its output, and it’s not something new to the cloud. Aspects like logging, monitoring, and alerting are essential tenets of any application infrastructure, but we should rethink them for cloud native apps. Since the goal is to deploy them in the cloud, things change compared to deploying them to a few virtual machines managed on-prem with great care.
Treat apps like space probes. That’s the very eloquent recommendation given by Kevin Hoffman in his book “Beyond the Twelve-Factor App”. What kind of telemetry would you need to control and monitor your applications remotely?
Traditional applications are usually configured to log to files, involving settings for log storage, rotation, file names, and sizes. Our Spring applications were configured precisely like that, using Log4J to define handlers that would log to files according to specific constraints. Cloud native apps, on the other hand, are not concerned with how logs are stored, processed, or aggregated. Logs should be handled as streams of events and redirected to the standard output. The responsibility of collecting and storing the logs moves from the applications to the platform where they run, with an external tool taking care of log aggregation. In our project, we use Fluentd as part of the EFK stack (Elasticsearch, Fluentd, Kibana).
Metrics are another important aspect of observability since they provide information about different facets of running applications. Cloud native apps are designed to provide any data relevant for verifying their internal state and behavior. In a traditional scenario, you would deploy monitoring libraries next to the application on the same server. That’s what we were doing: in our Tomcat server, we used to deploy our applications plus tools like Jolokia. As I stressed earlier, cloud native apps should be self-contained, so you don’t want to depend on those extra deployments.
We solved that problem with Spring Boot Actuator, a powerful add-on to Spring Boot that provides convenient features to make applications production-ready. Adding Prometheus support to our applications involved adding a dependency on micrometer-registry-prometheus and enabling the /actuator/prometheus endpoint over HTTP with the management.endpoint.prometheus.enabled property, and that was it. The default configuration already adds lots of value to your project. You can still customize it, for example, by adding new metrics and defining security constraints for the endpoint (strongly recommended). Now, all our applications expose metrics in the Prometheus format, scraped by a Prometheus server and visualized in Grafana, which we use for dashboards and alerts.
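A minimal configuration to enable the endpoint might look like this (remember that Actuator endpoints other than health also need to be explicitly exposed over the web, and should be secured):

```yaml
management:
  endpoint:
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include: prometheus   # expose only what you need
```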
One more aspect related to observability is checking the health of an application. Spring Boot Actuator is once again what we used to handle that. This library provides many powerful features I recommend you check out. One of them is a health check mechanism. You can expose a /actuator/health endpoint over HTTP by setting the management.endpoint.health.enabled property. As with Prometheus, Spring Boot Actuator lets you customize the health check by adding your own health indicators. Our applications need a few backing services to fully provide their functionality. Some of them, like Redis for session storage or MariaDB for data storage, are automatically covered by indicators included with Spring Boot. Others, like external RESTful services, can be explicitly covered by implementing the HealthIndicator interface.
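For example, a custom indicator for a hypothetical external REST service could implement Spring Boot Actuator’s HealthIndicator interface. A sketch (the class name and the check itself are illustrative):

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Hypothetical indicator for an external backing service
@Component
public class ExternalServiceHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingExternalService();
        return reachable
                ? Health.up().build()
                : Health.down().withDetail("externalService", "unreachable").build();
    }

    private boolean pingExternalService() {
        // Placeholder: perform a lightweight call against the backing service
        return true;
    }
}
```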
Overall, you want to check two different states of your application: whether it’s alive, and whether it’s ready to accept connections (for example, if it can’t connect to the database, it might not be able to process any request). Using the Kubernetes vocabulary, we call these the liveness probe and the readiness probe. Before Spring Boot 2.3, you could use the health endpoint as a readiness probe and implement a custom endpoint for the liveness probe. Now, Spring Boot Actuator provides both out of the box and exposes them through the /actuator/health/liveness and /actuator/health/readiness endpoints.
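In Spring Boot 2.3, the probes are enabled automatically when the app runs on Kubernetes, and a single property forces them on elsewhere; Kubernetes can then point its probes at the dedicated endpoints (the paths shown are the Actuator defaults):

```yaml
# application.yml
management:
  endpoint:
    health:
      probes:
        enabled: true
# Kubernetes then probes:
#   /actuator/health/liveness
#   /actuator/health/readiness
```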
Conclusion
Cloud native applications are designed and built to leverage the characteristics of the cloud. They are scalable, loosely coupled, resilient, observable, and manageable.
Migrating a traditional Spring application to cloud native is a very satisfying journey. In this article, I covered three aspects of the migration and shared some hints and insights from my personal experience at Systematic and beyond.
- Cloud native applications are self-contained. With Spring Boot, you can use an embedded web server and package the app as a JAR.
- Configuration for cloud native applications is externalized and never stored together with the app codebase. Spring Boot lets you specify any configuration through environment variables. Spring Cloud Config is a library for setting up a centralized configuration server, which has the additional advantages of configuration traceability and secrets encryption. Kubernetes provides ConfigMaps and Secrets to address the same concerns.
- Observability is key for cloud native applications: treat apps like space probes. Logs should be handled as streams of events redirected to the standard output; it’s the platform’s responsibility to collect and store them. Spring Boot Actuator provides features to expose Prometheus metrics and health-check endpoints, including the liveness and readiness probes used by Kubernetes.
Have you had any experience with cloud native development? Are you considering moving to the cloud? I’d like to hear from you about your experience and challenges, leave a comment under this article, or reach out to me on Twitter or LinkedIn. If you like my article, you can find more on my blog, where I write about cloud native development, Spring, and application security.
Thomas Vitale is a Senior Software Engineer at Systematic, where he has worked on security and data privacy features. Currently, he’s working on modernizing their platforms and applications for the cloud-native world. Thomas is the author of Cloud Native Spring in Action, has a Master’s Degree in Computer Engineering, and is a Pivotal Certified Spring Professional and RedHat Certified Enterprise Application Developer. When he’s not building software or writing about it, Thomas plays the piano and enjoys traveling.