Steve Chalmers
Aug 23, 2017 · 2 min read

Looking back at the last 40 years in the computer industry, I think the real issue is the total surface that applications are written to and integrated with. Calling this “serverless” is good marketing, but it’s a distraction from the real technical issues.

The game for decades was lock-in: whether you wrote for an IBM mainframe, a DEC VAX running VMS, or one of Microsoft’s plethora of libraries or execute-code-in-your-Word-document environments, the day you started writing code you became part of their ecosystem, and your code helped sell their particular computer and OS.

The counter to that lock-in was Unix (which then fragmented into five incompatible proprietary Unixes, though at least porting between them was easier), and POSIX APIs, and finally the collection of parts we call Linux. Add the disaggregation of hardware from software, which really means Intel won. But the topic here is the dependencies created by how software is written (calling libraries and intrinsics, for example, rather than purely using language features) and how it is integrated. Start with the scripts and config files for doing a build, but there are also more basic things: telling the application where to find its storage (files, objects, raw volumes) in the semantics of where it is actually running, and how it opens a socket or other network connection to a particular peer, sister site, or the outside world.
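To make the “telling the application where to find its storage” point concrete, here is a minimal sketch in Python. Everything in it is my own illustration, not any particular platform’s API: the `APP_DATA_DIR` variable name and the helper functions are hypothetical. The idea is simply that code which defers its storage binding to the environment carries fewer dependencies on where it actually runs, which is exactly the kind of surface a serverless platform has to standardize.

```python
import os
import tempfile

# The legacy style of integration: the app only runs where this
# exact path exists, so the deployment environment leaks into the code.
LEGACY_DATA_DIR = "/var/opt/myapp/data"

def data_dir():
    # The portable style: let the environment say where storage lives.
    # APP_DATA_DIR is a hypothetical variable name for this sketch;
    # fall back to a writable temp location purely for illustration.
    return os.environ.get("APP_DATA_DIR", tempfile.gettempdir())

def open_record(name, mode="r"):
    # Application code never names a concrete mount point or volume;
    # the binding to real storage happens at deployment time instead.
    return open(os.path.join(data_dir(), name), mode)
```

Whoever deploys the app sets `APP_DATA_DIR` to a local disk, a mounted volume, or wherever the platform puts storage, and the same application code runs unchanged in each place.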

Java wasn’t about writing code, it was about abstracting the surface the application was written to so that the app was completely portable. And maybe to limit the amount of damage a malicious app could do. History will judge here…flames on the nuances of Java to /dev/null, please.

Likewise, I remember strategy discussions in the late 1990s over how we wanted to host multiple applications on a server, which came to a conclusion closer to containers — after which VMware simply got to market and won. It’s an execution and critical-mass game, not a technology-theory game.

So in theory, I think serverless will mature and become ubiquitous when the surface to which the function code is written becomes simple, disentangled from the 40 years of legacy spaghetti today’s apps run over, with a stable interface that changes infrequently (think Android release frequency, or less). That’s theory. In practice someone will probably do what VMware did, and just drive something more practical and less elegant to critical mass.

But the application (and application steps/functions) will be the center of attention, not the infrastructure, and that is as it should be.


Written by Steve Chalmers: student of complex systems; prematurely retired from a career in tech focused on the boundaries between server, storage, and network in the data center.
