Code is Fashion

Eric Florenzano
9 min read · Aug 18, 2015


Current Programming Trends

Once you write code for a few years, it becomes clear that even though writing code is technical, it is also fashion. Ideas and technologies become popular and then fade away for a variety of reasons ranging from legitimate to irrational (or even aesthetic, as in fashion!). Most of the time the good ideas from each passing trend survive, and the bad ones fall by the wayside. Best practices change. Sometimes we forget the lessons of the past. But this post isn't about forgetting lessons, and it's not meant to dwell on the past. I just want to write down what I observe happening right now, so that in a few years we can look back and see which trends were short-lived and which were genuine advances. Here's my take:

Types are so on trend right now

People are changing their attitudes toward types and type systems. Maybe it’s that we haven’t built good enough dynamic type systems yet, but people now associate static typing with performance and dynamic typing with a lack thereof. Dynamic typing advocates may say that a sufficiently advanced interpreter, with the benefit of its wealth of runtime information, can out-optimize compile-time static type optimizations, but that’s just not something I’ve seen happening in practice. It’s not only performance that’s driving this trend, though. In fact, performance may be only a secondary benefit.

The real reason people are switching to stronger type systems is that they help us in our struggle against human error. For example, with some minor tweaks to the type systems most of us are familiar with, and with a bit of added ceremony, we can eliminate null pointer exceptions entirely. The stronger the type system, the more kinds of errors can be eliminated just by looking at the code (or using tools to look at it), rather than by actually executing it or running a test suite. This gives developers more confidence that the code they write will behave the way they expect it to behave.
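As a rough illustration, here's what this looks like in TypeScript with strictNullChecks enabled (findEmail is a made-up function, not from any real library):

```typescript
// A made-up lookup that may or may not find a value.
function findEmail(userId: string): string | null {
  return userId === "42" ? "user42@example.com" : null;
}

// With strictNullChecks on, the compiler rejects using the result directly:
//   findEmail("7").split("@");   // error: Object is possibly 'null'.

// The type system forces the null case to be handled before the value is used.
const email = findEmail("7");
if (email !== null) {
  console.log(email.split("@")[1]);
} else {
  console.log("no email on file");
}
```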

Stronger type systems let us write better developer tools. They also make refactoring safer, and in a world where we inherit more code than we write, that’s a huge boon.

Examples of this trend include Swift, TypeScript, and Python's addition of type annotations. I think Scala also played a large role in starting this trend.

Declarative is in too

A lot of the code we write today addresses the same basic pattern: some action takes place, and now we need to change a bunch of stuff elsewhere to reflect that change. Maybe it's an ops-related thing, where someone wants more capacity, so they go to an admin UI and tell the system to spin up and configure 3 more machines. Maybe it's an end user who has changed their username and wants that to be reflected everywhere in the UI.

Whatever the case may be, when you first approach a problem like this, the most straightforward way to solve it is to write imperative code to directly perform the requested action. Functions like AddMachines(3) or UpdateUsernameLabels(rootElement) seem reasonable and would be responsible for directly adding 3 machines or updating username labels everywhere beneath a given DOM element.

But what happens when your code hits an error in the middle of its execution? How do you know what to do? In the case of adding machines, do you make that function retry, subtracting however many machines were configured successfully before the failure? Sure, code can be written that accounts for all kinds of exceptions and escalates the most confounding errors to humans, but thinking of every possible edge case gets tiresome quickly, and it's the cause of many production bugs, especially as a service becomes more popular and any given edge case becomes more likely to be hit.

This is why there's a trend toward a more declarative strategy: your code declares its desired state, and something else makes the necessary changes to whatever it controls so that the system ends up in that state. Functions now look more like SetTotalMachines(10) or SetUsername("ericflo"), and they'd likely update a database table somewhere to persist this desired state. Then a mutation process can examine the current state, notice that, for example, only 7 machine instances are running, decide that 3 more need to be spun up to make 10, and then do that work.
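To make the shape of that concrete, here's a minimal sketch in TypeScript; every name is hypothetical, and the "machines" are just strings standing in for real instances:

```typescript
// Hypothetical declarative API: callers state the desired machine count,
// and a separate process decides how to get there.
interface DesiredState {
  totalMachines: number;
}

let desired: DesiredState = { totalMachines: 7 };
let running: string[] = ["m-1", "m-2", "m-3", "m-4", "m-5", "m-6", "m-7"];

// Callers never add or remove machines directly; they only declare intent.
function setTotalMachines(n: number): void {
  desired = { totalMachines: n };
}

// The mutation process compares current state to desired state and acts.
function reconcile(): void {
  const diff = desired.totalMachines - running.length;
  for (let i = 0; i < diff; i++) {
    running.push(`m-${running.length + 1}`); // stand-in for "spin up a machine"
  }
  if (diff < 0) {
    running = running.slice(0, desired.totalMachines); // stand-in for teardown
  }
}

setTotalMachines(10);
reconcile(); // sees 7 running, needs 10, spins up 3 more
console.log(running.length); // 10
```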

It’s about splitting the what from the how. That is, systems written in this declarative style are essentially saying “tell me what you want me to do, but I’ll decide how to do it.”

Examples of this declarative trend are Kubernetes, CloudFormation, and certainly React.js.

Not just Declarative, Reactive

The flipside of the declarative trend is the requisite reactive trend. Since declarative systems are essentially systems that declare their desired state, we’re going to need companion systems that can react to these state changes with grace. It turns out that the best way to do this, according to current wisdom, is to compose a system from parts that communicate by sending and receiving messages.

This concept is interesting in that it applies to the frontend, where React.js lets you build up a component tree that consumes DOM event messages and provides declarative-reactive API hooks to handle those events. However, the concept applies equally well to backend systems, where switching to a message-driven reactive architecture helps promote loosely coupled services. This has real practical benefits, like allowing different teams to use different programming languages: as long as each team's service can serialize and deserialize the same messages, their services can communicate with each other freely. It also allows you to upgrade or change parts of the architecture transparently.

On top of loose coupling, if you have a message-driven architecture, you can choose a durable store like Kafka or AWS’s Kinesis as your bus between systems, and get a bunch of other benefits: reliability, scalability, and the ability to replay historical streams of messages. Replaying messages between systems turns out to be useful in a variety of scenarios, like load testing, data migrations, auditing change history, and even debugging.
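Here's a toy sketch of that idea: an append-only log of typed messages standing in for a durable bus. Nothing here is a real Kafka or Kinesis client API; it just shows the loose coupling and the replay property:

```typescript
// Toy append-only log standing in for a durable message bus.
// The message shape and function names are illustrative only.
interface UsernameChanged {
  type: "username_changed";
  userId: string;
  username: string;
}

type Message = UsernameChanged; // in practice, a union of many message types

const messageLog: Message[] = [];

function publish(msg: Message): void {
  messageLog.push(msg); // a real bus would persist this and fan it out
}

// Any service in any language can consume these messages as long as it can
// deserialize them; here the "consumer" is just a function.
function handle(msg: Message): void {
  if (msg.type === "username_changed") {
    console.log(`update UI/caches/search for ${msg.userId} -> ${msg.username}`);
  }
}

publish({ type: "username_changed", userId: "42", username: "ericflo" });
messageLog.forEach(handle); // normal consumption
messageLog.forEach(handle); // replay: reprocess history for migrations,
                            // audits, load tests, or debugging
```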

Recent examples of this trend are the rise of Kafka, Kinesis and Rails’s Active Job. Go’s channels and the Reactive Manifesto are also indicative.

Microservices

People have been burned badly by having a Rails, Django, or J2EE project get so big, monolithic, and spaghetti-coupled that it becomes a big ball of technical debt that can never be unwound. Once you get into this situation, it's hard to get out of it in an incremental way. No fun.

Now that message-driven architectures make it easier to decouple parts of the system, what if, instead of having one monolithic service, we decompose our larger system into many highly focused pieces? Each piece should do one thing, do it well, and shouldn't rely on implementation specifics of other systems. This calls back to the very appealing UNIX philosophy, and it sits quite well with developers.

This really is the logical conclusion of the message-driven backend architecture, where as long as we’re breaking things up, we may as well break them up into the smallest, most focused pieces possible. Often these microservices can be reused in many projects if they’re properly engineered, like lego pieces that fit together to create wildly different toys.

Of course, microservices are not without their growing pains, as coordinating and managing many smaller systems can be tricky. Although we're quickly wising up, getting all of the pieces working together properly at capacity today is still more of an art than a science, especially when it comes to concepts like backpressure and distributed request tracing.

Too many microservices are open sourced every day to list them all, but one example would be thumbor: a service for resizing images on demand.

Containerization

Deploying code sucks. Not only do you have to worry about your programming language's dependency management, but also the server's installed library versions and operating system version, and potentially keeping all of them in sync across many machines. What if we could take everything from the operating system to the system libraries to the code dependencies and wrap it into one big package that doesn't run in its own VM, but runs more like a binary executable would?

This is containerization. And it’d be nice if we had a popular tool for creating, interacting with, and sharing those containers. That’s Docker! (Rkt, a different developer interface for containers, is also getting some early attention, but right now it’s still in its infancy.) Containers allow us to wrap a nice shiny box around all that frustrating deployment stuff and pretend it doesn’t exist.

Another benefit of containers is that they become a standard building block that can be composed to create larger systems. It doesn't matter if your app is built with Java or Python or Ruby or COBOL: if your containerized app can speak over a socket, it can talk to other containers without a care.

Just as with microservices, there are growing pains in switching to containers. The tools are still maturing, and we're still learning what to do and what not to do with containers. On OS X, for example, container filesystem performance is so slow that it's effectively unusable, so until that's cleared up, many will opt to use Docker for deployment alone and not in development.

Docker and Rkt are examples of the rise of containerization.

Data Center Operating Systems

For the past several years we've been building more and more sophisticated deployment tools, but containers solve many of the same problems at a different level in the stack. When I say deployment tools, I mean things like Puppet, Chef, or Ansible. They're used to deploy and update specific libraries, applications, and configuration, and they can operate on large clusters of servers.

The problem is that once those applications are deployed, these deployment tools don't do much to manage them as they're running. With containers, we no longer need to deploy libraries, and deploying applications takes only trivial commands, so those tools' features are less important. The real issues now become things like: what happens when an entire server crashes, or an app grows to use too much memory? How do we network this mesh of containers together in a sane way? How do we handle logging and monitoring for all these containers? This, to me, explains the rise of datacenter operating systems.

A datacenter operating system lets us stop treating servers as individual things that we need to care about, and start treating the pool of servers as one big resource that can be deployed into. My favorite analogy is that these systems force us to stop treating servers as pets to be cared for, and to start treating them as cattle to be herded.

Examples of this trend are Mesos and Kubernetes.

Dark Horse: AWS Lambda

AWS Lambda (and a crop of similar services that have sprung up over the last few years) lets you upload your code, and the service invokes callbacks in that code based on events sent to it, scaling the number of workers running your code dynamically and transparently based on load. This is like the ultimate extension of several of these other trends! It's a message-driven, reactive architecture, where you deploy to a massive pool of compute resources (AWS's entire Lambda infrastructure) and the system manages the rest for you. It's also the ultimate microservice: a single function. And Amazon's system handles logging, monitoring, and everything else you'd need from a data center.
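To make that concrete, a Lambda deployment can be as small as a single exported handler; here's a hedged sketch for the Node.js runtime in TypeScript (the event shape is invented, since real payloads depend on whatever triggers the function):

```typescript
// The entire "service" is one exported function; the platform decides when to
// invoke it and how many copies to run. The event type here is made up.
interface UsernameChangedEvent {
  userId: string;
  username: string;
}

export const handler = async (event: UsernameChangedEvent): Promise<string> => {
  console.log(`propagating username change for ${event.userId}`);
  return `updated ${event.userId} -> ${event.username}`;
};
```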

I'm not sure whether this trend requires developers to cede just a bit too much control, what problems will arise from this idea, or whether it's in fact the way of the future; time will tell. Maybe we've just come full circle back to CGI scripts, and we'll end up doing this same dance all over again. For now, though, this is definitely a trend that's worth watching.

Examples of this trend are AWS Lambda and IronWorker.

This has all been a snapshot of my sense of these trends today. Some, like containerization, seem almost intrinsically good (if not yet mature). Others, like AWS Lambda, are newer and less obviously good. What's most interesting to me is how well these trends weave together! When you're building a declarative interface, stronger types make that interface more robust. When you want a declarative system, you usually need a reactive counterpart. When you can wrap your app in a container, you can build more services. When you build more services, you need better cluster application management. And when your cluster management gets good enough, your services get micro enough, and you declare your architecture as a set of message handlers, you get worker architectures like AWS Lambda.

Let’s check back in a year or two and see how these turned out.

