A Microservices Primer
Q&A with Tony Pujals
By Jon Bailey
When speaking with Tony Pujals, Director of Cloud Engineering at Appcelerator, his enthusiasm about microservices — along with APIs, distributed systems and scalable cloud architecture — is clear. At Appcelerator, Tony plays a key role in improving the process of building, deploying, orchestrating and monitoring containerized microservices. He is a member of the Docker Captains team, a dedicated user of the Node platform and the co-founder of the third-biggest meetup for Node developers globally. We sat down with Tony to talk about the emergence of microservices, how developers should adopt this approach and what he’s most excited about for the future of this architecture.
Can you talk us through how microservices came to be and its rise in popularity?
As a term of art, it is roughly two years old, but I would argue the idea behind microservices goes back a lot longer, at least to the original efforts around distributed object-oriented programming in the 1990s: being able to communicate with objects over a network. The evolution of service-oriented architecture (SOA) over the next decade called for self-contained application components that could communicate over a network and expose their functionality to consumers.
I’m simplifying things a bit, but the problem with the distributed component frameworks was the need for special runtimes and an approach to networking that predated the rise in popularity of HTTP-based communication, and the problem with SOA as it evolved was that it was onerously complex and heavyweight, especially compared to the emerging trend toward REST-based APIs that exploited the simplicity of HTTP semantics. In any case, I’d say the “component” concept of being able to deploy and expose some type of functionality over a network as opposed to releasing a single, monolithic bundle has been something that developers have desired for a long time.
When you think about it, the cost of deployment of an integrated, tested monolithic bundle is high; and when it takes a lot of effort to get software out the door, it leads to less frequent, epic releases, instead of small updates to individual services that can be managed independently. Classic three-tier architecture represents the kind of monolithic approach that has been prevalent for a long time, but the explosion of web consumers besides web browsers, such as smartphones, tablets, and the Internet of Things, led to a corresponding explosion in API development. And then an “application” simply becomes the user experience with a particular device or system that consumes a set of APIs, and this naturally leads to the desire to evolve, deploy, scale, and update APIs individually to satisfy user expectations — not as part of less-frequent, epic releases.
So microservices has come to mean just this: an approach to deploying lighter, narrowly-focused services with less effort and ceremony, putting less strain on individual development teams. Netflix is largely given credit for pioneering this effort as they adapted their development, testing, and operations model and migrated from deployment of a single application bundle to over 600 microservices.
For example, if you’re responsible for user profiles, you should be able to own that service without having to worry about integrating and deploying it bundled with the rest of the large application on the same release cadence. You can maintain and update it separately. For an individual developer or small team, this means a lot less friction, a lot less cognitive overhead, which translates to higher productivity and quality.
Container technology, particularly Docker, plays a huge role in running these lightweight, isolated processes with the same benefits as virtualization without the overhead of provisioning virtual machines for each service. I would say that at least for the past year or so, the idea of microservices presumes the use of containers. Pragmatically speaking, you really need to be exploiting containers for an effective microservice strategy.
How do APIs factor into this?
Simplistically, microservices are independent software components that perform a narrow range of functions and communicate through a message queue or contractually agreed interfaces — APIs.
Microservices present an opportunity to take advantage of virtualization and the desire to build more granular services, especially with mobile and the IoT coming into play. Suddenly, it’s not about shipping web applications. You need APIs that fulfill different experiences for different environments. What we refer to as an “app” could live on a smartphone, a tablet or a connected device like your fridge. All of these use APIs on the backend. Microservice architecture provides an approach for effectively building, deploying, and independently evolving API implementations at their own unique cadence as necessary for these different apps.
For getting microservices out the door, it can help to adopt a platform like Node for building lightweight, high-performance APIs. There isn’t a lot of ceremony involved in creating a service with Node, and it executes in a very fast runtime well suited to the kind of I/O-oriented activity that characterizes the majority of APIs. Node certainly isn’t the only solution, but for many teams it is a big win.
How divided is the developer community on microservices?
There was more pushback on microservices in the early days. Some complained about having a new term for what they perceived as essentially SOA. Microservice architecture is a form of SOA, but without all the overhead and baggage that came to be associated with industry SOA. Perhaps the biggest complaint, however, has to do with the concern that microservices trade one set of problems for another. There are legitimate concerns about managing and monitoring operational complexity: how do you effectively manage and orchestrate so many services as you move from a relatively simple monolithic model to a huge cluster of independent, distributed services?
I suppose this is somewhat similar to the early days when people were still trying to figure out how to have an effective cloud strategy — you know it’s big and it’s happening, so it’s on your radar, but the picture isn’t perfectly clear and the recipes are still being written.
How can we start to solve the challenges associated with microservices and increase adoption?
Start removing the impediments for developers. When the technical and organizational processes involved in creating and deploying any kind of application are burdensome, it isn’t worth the effort to create a microservice for each and every API or worker you might consider.
Consider moving to a platform that involves less ceremony and effort to create and start new services. I’m partial to Node and Go, but these aren’t the only solutions. However, they do exemplify what I mean by lightweight in terms of ceremony, with very minimal project boilerplate and only a few lines of code to expose a network service. They don’t require an application server and a ton of deployment artifacts. A reduction in friction and overhead stimulates and encourages microservice development the same way that cheap branching completely changes the way programmers leverage branches as part of their development workflow.
These platforms also provide good performance with a tiny footprint. This is a perfect match for lightweight containers instead of heavyweight virtual machines. So again, remove impediments by leveraging Docker to be able to test services in environments that you can spin up in the amount of time it takes to start a process on Linux.
Just as Git’s cheap branches allow developers to quickly and easily isolate their code while developing a feature, Docker allows developers to quickly and easily isolate their runtime dependencies in environments they can launch with a single command. Thanks to Docker, it is easy to share these environments with other developers and testers and push them through to production. Developers can launch containers that are exactly the same as what will run in production, and neither developers nor devops need to install anything on their machines other than Docker itself. Docker eliminates the friction that developers, testers, and devops experience when setting up an environment to run a specific version of something.
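A sketch of what that looks like in practice: a short Dockerfile packages a service together with its exact runtime, so every developer, tester, and production host launches the identical environment. The file names and base image here are assumptions for the example, not a prescribed setup.

```dockerfile
# Hypothetical Dockerfile for a small Node microservice.
# The base image pins the exact runtime every environment will share.
FROM node:6

WORKDIR /app

# Install dependencies first so Docker's layer cache skips this step
# when only application code changes.
COPY package.json .
RUN npm install --production

COPY server.js .

EXPOSE 3000
CMD ["node", "server.js"]
```

With this in place, `docker build -t profiles .` followed by `docker run -p 3000:3000 profiles` gives anyone on the team a clean, reproducible environment in roughly the time it takes to start the process itself.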
You can definitely get started today removing impediments to transitioning to microservices to exploit their benefits. Docker has been production-ready for a long time now, so teams can at least start building a few API services and begin incorporating Docker in their workflow and environment. Even Netflix didn’t get to six hundred services overnight. With respect to service orchestration, the industry has made great strides over the past year. I personally am extremely excited by the latest orchestration features in Docker 1.12.
What changes can developers expect to see when switching from monolith to microservice? How can they plan for these changes?
If you go from managing a single deployment pipeline to managing many, and you’ve got a small team that will be responsible for deploying and monitoring all the services anyway, then the team’s workload has increased, not decreased. The way to mitigate this is through a commitment to automation. Automation is really a mandate even for larger teams with more devops personnel; it’s the only sane way to ensure repeatable, reliable deployments.
It will also be vital to plan to have a unified, centralized solution for logging, monitoring, and alerting. It isn’t pragmatic to have separate systems in place that don’t provide a cohesive picture to let you visualize how things are going in production.
Traditional testing pipelines probably won’t catch all the issues they could for monolithic deployments. It becomes more important to implement limited rollouts (such as canary updates) with the ability to roll back immediately if there are any issues. As confidence in a release grows during these rollouts, the number of users with access to the new service version can be automatically increased. Docker 1.12 supports this strategy with application-specific health checks that can inform rollout decisions.
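As a sketch of how such a health check is wired in: Docker 1.12 introduced the HEALTHCHECK instruction, which the engine runs periodically to mark a container healthy or unhealthy. The endpoint, port, and timing values below are assumptions for the example.

```dockerfile
FROM node:6
COPY server.js .
EXPOSE 3000

# Docker 1.12+: the engine runs this command on an interval and flags the
# container unhealthy after three consecutive failures; orchestration can
# use that status when deciding whether a rollout is going well.
HEALTHCHECK --interval=10s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

CMD ["node", "server.js"]
```

Combined with Swarm’s rolling updates — flags such as `--update-parallelism` and `--update-delay` on `docker service update` control how many tasks are replaced at a time and how long to wait between batches — this gives a staged rollout in which an unhealthy new version surfaces quickly rather than replacing the whole fleet at once.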
Where do you see microservices going in the future? How will use cases continue to evolve?
A lot of what we’re able to do right now is thanks to changes in the technology available. The emergence of Docker really contributed to making a microservice strategy feasible and pragmatic. Docker rewrote the book for everyone by democratizing container technology for developers.
In the near future, I expect to see a lot of growth in microservice development thanks to Docker updates. Docker 1.12, unveiled at this year’s DockerCon, is a real game changer: it addresses challenging orchestration issues at the core engine layer.
In the project I lead — AMP, or Axway/Appcelerator’s Microservice Platform — we are exploiting Docker to address the challenges we noted earlier, giving teams a seamless, unified environment for deploying, scaling, measuring, and monitoring microservices built with their own preferred programming stacks.
In addition to being able to deploy microservice-based APIs, we see the use cases evolving for our customers with the ability to deploy on-demand, event-based, and scheduled workers. We want our customers to be able to focus on functionality without having to think about infrastructure. We definitely see the intersection of “serverless computing” — the ability to run code without provisioning or managing servers — with microservices and microtasks as vital to supporting the use cases we can think of today, as well as those we haven’t yet conceived of but that become possible once the technology is available to be exploited.