Linux is the new JVM

Shawn Hartsock
May 13, 2015 · 5 min read


Why did anyone ever think the Java Virtual Machine (JVM) was a good idea? What could those decisions tell us about the ones we are about to make in the cloud computing revolution currently under way? Let’s try and cheat at the tests of life and take some crib notes from history.

An entire society functions on the accretion of lower layers, and a technology stack works the same way. A city needs its sewer, water, trash, and transportation services to function properly or it starts to fall apart. If the sewer stops working city-wide, the problem of the bus being consistently late takes a back seat. When everything beneath you is working right, you pretty much ignore it. Back in the '90s, the appeal of the JVM was the ill-fated dream that the application developer could write code without directly dealing with the layers below, layers like the OS and the hardware.

Back when I was writing web applications in the early 2000s, I might develop on Linux, but my operations team (we just called them system administrators back then) deployed on Windows. In that world Java was a lovely thing for me. I could happily hack away on code in Linux-land and hand my operations team a tiny, contained representation of my application called a Web Archive (WAR), which could be as heavy or as light as we wanted. One WAR mapped to one application. Depending on my application server I had varying degrees of isolation from other applications.

Locally, in my one-laptop development and test environment, I might run a light copy of a database (yeah, I know… yuck) and a copy of our application server. The application server would have the shared libraries we agreed on company-wide. I might use my own libraries. I might overlay my own version of a library for use just inside my own WAR.
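To make that concrete, a WAR was just a zip archive with a conventional layout, roughly like this (a sketch, not any real app of mine; commons-lang-2.3.jar stands in for whatever library I wanted to pin):

    myapp.war
    ├── index.jsp                      (pages and static content at the root)
    └── WEB-INF/
        ├── web.xml                    (the deployment descriptor)
        ├── classes/                   (my compiled application code)
        └── lib/
            └── commons-lang-2.3.jar   (my own overlaid copy of a shared library)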

With those details settled, the shared artifact between my team and my operations team was this web archive thing. And that looks a lot like a container as containers are used today. So why did we do this, and what did we get out of it?

For me, the application developer, I got to use a known set of APIs. I knew that if my admins had JVM 1.2 in production, then as long as I wrote for 1.2 my API calls from dev to prod would match up. I didn't have to care if they used big-endian MIPS hardware running OS/2 Warp (this particular combination likely never actually happened). I could go ahead and use little-endian hardware running Linux. We would be relatively fine.

For them, the system administrators doing operations, they didn't have to be bothered every time I wanted to slightly customize an application's dependencies. They didn't have to care if I played with a new programming language, as long as it didn't require me to monkey with anything below my application archive.

Today, we see containers playing the same social role for developers and operations. As long as your container works with the stack below you, operations gets to ignore your ticky-tacky little choices. This lets you live in the wonderful world of nobody cares. When nobody cares what you do, you have freedom. Freedom you can use to innovate.

If we squint hard enough, some of the same forces that drove the evolution of the Java ecosystem also drive the evolution of the container ecosystem. There's historical precedent here. Application developers still want to pretend hardware doesn't exist. Operations people still want to deal with more important things than the fashionable Ruby shoes the damn developer wants to try on this week.

From a top-down perspective…

I, the developer, get to ignore certain elements of the container system (hopefully, though there are no standards yet like there were for WAR files). I get to ignore operating systems. I get to mostly ignore differences in precise library versions. I get to cherry-pick my own personal libs. I might even get to pick my whole programming language. And I provide a tidy package to hand to operations, with all the same damn problems that Java archives had, because it was never that damn simple.
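If you want a picture of that tidy package, here's a minimal sketch of today's hand-off, assuming Docker and the off-the-shelf Tomcat image (the tag and the WAR name are just placeholders):

    # A hypothetical developer-owned recipe: pick a runtime, drop the app in, hand it over.
    FROM tomcat:8
    COPY myapp.war /usr/local/tomcat/webapps/
    # The base image already knows how to start itself, so there is nothing
    # else for me, the developer, to say about the layers below.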

From a bottom-up perspective…

I get to control my hardware choices. I get to control which OS I deploy and when… here loosely using the term OS to mean Linux distribution. I also get to choose stable, reliable core components to provide to my container inhabitants. For the most part I don't have to deal with pesky developers arguing with me about why they need that new library or this Rust thing… whatever that is.
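One way to draw that line is for operations to publish a blessed base image that every developer's container builds from; a rough sketch, with a made-up image name and package list:

    # A hypothetical operations-owned base image: the distribution, core libraries,
    # and company-wide policy baked in once, then shared by every application container.
    FROM debian:8
    RUN apt-get update && apt-get install -y --no-install-recommends \
            ca-certificates curl \
        && rm -rf /var/lib/apt/lists/*
    # Developers build FROM mycompany/base:1.0 (again, a made-up name) and
    # never have to argue with me about what sits underneath it.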

It’s much the same story. The lines have moved. The APIs stay the same not because someone engineered this massive thing called the Java Virtual Machine (JVM) but because Linux has become our new monoculture. The net effect is the same. Linux libraries and the POSIX APIs become a de facto standard that remains roughly the same across most development stacks.

By analogy, Linux is the JVM and the JDK… that is, the runtime and the common APIs. Container managers and all their ilk fall into the application server niche of this ecosystem… that is, the JBoss and WebSphere niches of the bad old days. Finally, those container thingies are like WAR files describing an application.

It’s not a perfect analogy, so our crib notes aren’t matched to the test key. However, we now have some nice notes to try and use during the test of life. Some surprises are bound to pop up, but if we can see how the big picture worked when the application server companies duked it out, we can make some educated guesses about where to focus our attention and thinking when we really don’t know the right answer.

It also means that if you’re an application developer you should really push to make things so you don’t have to care about what’s beneath you. You shouldn’t have to care which container engine your operations team uses, because you shouldn’t have to care about things you can’t control. An application developer should care very, very deeply about their application. You’ve got a bus to catch; you have no time for plumbing.

If you’re an operations person, you shouldn’t have to care what the application developer wants beyond a certain point. Ruby 1.8.7 versus 1.9.3, when it’s above the container manager layer, should just not matter to you. If you have to care, it keeps you from caring about what you really, really should care about. That’s things like hardware, operating systems, and rolling upgrades, I imagine. You’ve got some pipes to fix; you’ve got no time for arguments about bus schedules.

If we do this right, container-land could be a much, much better world to live in than JVM-land ever was. But how do we make it happen? What do we need?

  • Can this be done?
  • Can we do this? Should we do this?
  • Is it worth it to us all?

I think it really is doable, by us, right now. And I think it’s going to be worth it in a historical sense. If we work hard on the right bits now, we’ll all have more time to sacrifice to innovation. And that’s worth it, because one time in ten you get a real humdinger for your time.


Shawn Hartsock

software engineer at a hybrid cloud computing company