The End of Microservices

spoons · Published in LightstepHQ · Jul 26, 2016 · 4 min read

A post from the future, where building reliable and scalable production systems has become as easy as, well, writing any other software. Read on to see what the future looks like…

Back in 2016, people wrote a lot about “microservices,” sort of like how they wrote a lot about the “information superhighway” back in 1996. Just as the phrase “information superhighway” faded away and people got back to building the internet, the “micro” part of microservices was also dropped as services became the standard way of building scalable software systems. Despite the names we’ve used (and left behind), both terms marked a shift in how people thought about and used technology. Using service-based architectures meant that developers focused on the connections between services, and this enabled them to build better software and to build it faster.

The rise and fall of the information superhighway (source)

Since 2016, developers have become more productive by focusing on one service at a time. What’s a “service”? Roughly, it’s the smallest useful piece of software that can be defined simply and deployed independently. Think of a notification service, a login service, or a persistent key-value storage service. A well-built service does just one thing, and it does it well. Developers now move faster because they don’t worry about virtual machines or other low-level infrastructure: services raise the level of abstraction. (Yet another buzzword: this was called serverless computing for a while.) And because the connections between services are explicit, developers are also freed from thinking about the application as a whole and can instead concentrate on their own features and on the services they depend on.

Back in the day, many organizations thought that moving to a microservice architecture just meant “splitting up one binary into 10 smaller ones.” What they found when they did was that they had the same old problems, just repeated 10 times over. Over time, they realized that building a robust application wasn’t just a matter of splitting up their monolith into smaller pieces but a matter of understanding the connections between these pieces. This was when they started asking the right questions: What services does my service depend on? What happens when a dependency doesn’t respond? What other services make RPCs to my service? How many RPCs do I expect? Answering these questions required a new set of tools and a new mindset.
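To make one of those questions concrete (what happens when a dependency doesn’t respond?), here is a minimal sketch of a timeout-and-fallback guard around an RPC. The service and function names are invented for this example, and a real client would enforce the deadline in the transport rather than checking it after the fact.

```python
import time

class DependencyTimeout(Exception):
    pass

def call_with_fallback(rpc, fallback, timeout_s=0.5):
    """Run `rpc`; if it raises or exceeds its time budget, use `fallback`.

    Note: this simplistic guard checks elapsed time only after the call
    returns; real RPC clients cancel the request when the deadline passes.
    """
    start = time.monotonic()
    try:
        result = rpc()
        if time.monotonic() - start > timeout_s:
            raise DependencyTimeout("dependency exceeded its budget")
        return result
    except Exception:
        return fallback()

# Hypothetical downstream service that is currently unreachable.
def fetch_profile():
    raise ConnectionError("profile-service unreachable")

# The caller degrades gracefully instead of failing outright.
print(call_with_fallback(fetch_profile, lambda: {"name": "guest"}))
# → {'name': 'guest'}
```

The point is not the specific fallback but the mindset: every cross-service call is a place where the answer to “what happens when this doesn’t respond?” must be written down in code.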

Tools, tools, tools

Building service-based architectures wouldn’t have been possible without a new toolkit to reconcile the independent yet interconnected nature of services. One set of tools describes services programmatically, defining API boundaries and the relationships between services. They effectively define contracts that govern the interactions of different services. These tools also help document and test services, and generate a lot of the boilerplate that comes with building distributed applications.
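As a sketch of what such a contract might look like, the snippet below uses Python’s `typing.Protocol` as a stand-in for a real interface-definition language (tools like Protocol Buffers or Thrift generate clients, servers, and documentation from definitions of this kind). The notification service and its types are invented for illustration.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Notification:
    user_id: str
    message: str

class NotificationService(Protocol):
    """The contract: any implementation must offer this API boundary."""

    def send(self, note: Notification) -> bool:
        """Deliver one notification; returns True on success."""
        ...

# A test double that satisfies the same contract, so callers can be
# tested without a live deployment of the real service.
class FakeNotificationService:
    def __init__(self) -> None:
        self.sent: list[Notification] = []

    def send(self, note: Notification) -> bool:
        self.sent.append(note)
        return True

svc: NotificationService = FakeNotificationService()
svc.send(Notification(user_id="u1", message="hello"))
```

Because the contract is explicit, the fake and the real implementation are interchangeable from the caller’s point of view, which is exactly what makes independent development and testing of services possible.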

Another important set of tools helps deploy and coordinate services: schedulers to map high-level services to the underlying resources they’d consume (and scaling them appropriately), as well as service discovery and load balancers to make sure requests get where they need to go.
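A toy sketch of those two pieces, assuming an in-memory registry for service discovery and round-robin selection for load balancing (the service name and instance addresses are invented):

```python
import itertools

class Registry:
    """Toy service discovery: maps a service name to its instances."""

    def __init__(self):
        self._instances = {}

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)

    def lookup(self, service):
        return list(self._instances.get(service, []))

class RoundRobinBalancer:
    """Toy load balancer: rotates requests across known instances."""

    def __init__(self, registry, service):
        self._cycle = itertools.cycle(registry.lookup(service))

    def pick(self):
        return next(self._cycle)

registry = Registry()
registry.register("login", "10.0.0.1:8080")
registry.register("login", "10.0.0.2:8080")

balancer = RoundRobinBalancer(registry, "login")
print(balancer.pick())  # → 10.0.0.1:8080
print(balancer.pick())  # → 10.0.0.2:8080
```

Real systems add health checks, weighting, and dynamic membership on top of this, but the division of labor is the same: discovery answers “who can serve this?” and balancing answers “who serves it next?”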

Finally, once an application is deployed, a third set of tools helps developers understand how service-based applications behave and helps them isolate where (and why) problems occur. Back in the early days of microservices, developers lost a lot of the visibility they were accustomed to having with monolithic applications. Suddenly it was no longer possible to just grep through a log file and find the root cause: now the answer was split up across log files on 100s of nodes and interleaved with 1000s of other requests. Only with the advent of multi-process tracing, aggregate critical path analysis, and smart fault injection could the behavior of a distributed application really be understood.
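The core idea behind multi-process tracing can be sketched in a few lines: every request carries a trace context across service boundaries, and each service records a span against it, so the request can later be reassembled from spans emitted by many different processes. The in-memory collector and the two “services” below are invented stand-ins.

```python
import time
import uuid

SPANS = []  # stand-in for a real tracing backend

def start_span(ctx, operation):
    span = {"trace_id": ctx["trace_id"], "op": operation,
            "start": time.monotonic()}
    SPANS.append(span)
    return span

def finish_span(span):
    span["duration_s"] = time.monotonic() - span["start"]

def storage_get(ctx, key):        # downstream "service"
    span = start_span(ctx, "storage.get")
    finish_span(span)
    return f"value-for-{key}"

def handle_login(ctx, user):      # upstream "service" calling it
    span = start_span(ctx, "login.handle")
    result = storage_get(ctx, user)  # the context rides along with the call
    finish_span(span)
    return result

ctx = {"trace_id": uuid.uuid4().hex}  # minted once, at the request's edge
handle_login(ctx, "u1")

# Both spans share one trace_id, so they can be grouped per request even
# when they come from different processes on different machines.
assert {s["trace_id"] for s in SPANS} == {ctx["trace_id"]}
```

Grouping by `trace_id` is what restores the single-request view that grepping one monolith’s log file used to provide.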

Many of these tools existed in 2016, but the ecosystem had yet to take hold. There were few standards, so new tools required significant investment and existing tools didn’t work well together.

A new approach

Services are now an everyday, every-developer way of thinking, in part because of this toolkit. But the revolution really happened when developers started thinking about services first and foremost while building their software. Just as test-driven development meant that developers started thinking about testing before writing their first lines of code, service-driven development meant that service dependencies, performance instrumentation, and RPC contexts became day-one concerns (and not just issues to be papered over later).

Overall, services (“micro” or otherwise) have been a good thing. (We don’t need to say “microservices” anymore since, in retrospect, it was never the size of the services that mattered: it was the connections and the relationships between them.) Services have re-centered the conversation of software development around features and enabled developers to work more quickly and more independently to do what really matters: delivering value to users.

Back in the present now… There’s still a lot of exciting work to be done in building the services ecosystem, and here at LightStep, we are excited to be part of this revolution and to help improve visibility into production systems through tracing! Want to chat more about services, tracing, or visibility in general? Reach us at hello@lightstep.com, @lightstephq, or in the comments below.

Originally published at lightstep.com.
