The awkward journey towards less backend

Wolfram Hempel
6 min read · Aug 15, 2016


No matter whether you’re building websites, single page applications or mobile apps, you’ve probably come across a strange phenomenon called a “backend developer”. Maybe you’re even one yourself.

In the olden days, being a backend developer meant that you wrote a long list of functions that mapped to HTTP requests. A POST to /item/14 was turned into createItem( id ) — and we liked it that way.

All that was left to check was that `item` had all the necessary data in the right format, that the user who wanted to create `item` was allowed to do so, that a database connection was established and that we had just the right bit of MySQL to insert it into the database… oh, and of course to make sure to call mySqlEscapeString and send the right success or error code back… Yikes!
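
To make the tedium concrete, here is roughly what one of those hand-written endpoints looked like. This is a minimal sketch using Express and the mysql driver; the route, table and column names are invented for the example.

```ts
import * as express from 'express';
import * as mysql from 'mysql';

const app = express();
app.use(express.json());

const db = mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });

// One endpoint, all by hand: validate, authorize, escape, insert, pick a status code.
app.post('/item/:id', (req, res) => {
  const item = req.body;
  if (!item || typeof item.name !== 'string') {
    return res.status(400).send('invalid item');            // validation
  }
  if (!req.headers.authorization) {
    return res.status(403).send('not allowed');              // authorization, crudely
  }
  db.query(
    'INSERT INTO items (id, name) VALUES (?, ?)',            // placeholders handle escaping
    [req.params.id, item.name],
    err => (err ? res.status(500).send('db error') : res.status(201).send('created'))
  );
});

app.listen(3000);
```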

We could sense that something was wrong. Even those of us who didn't grew a little weary after writing slight variations of the same getter or setter for the 200th time…

Very early on, efforts were made to change that. We started to make backends more abstract, less abstract, more intuitive, more implicit, more explicit or even to hide them away altogether. This post is a quick tour through the various efforts, strategies and solutions along this journey.

But be warned: I'm part of a team that (of course) believes it has the solution to all of this. We'll mention it at the very end, so read with care and don't trust anything I say.

The early days: Adding structure
It quickly became apparent that backends needed clear structure. Common tasks like user authentication or data validation had to be reusable, and the parts that structured data (Model), processed data (Controller) and displayed data (View) were best kept separate.
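
As a deliberately tiny sketch of that separation (all names invented for the example): the model structures and validates the data, the view only renders it, and the controller mediates between the two.

```ts
// Model: structures the data and knows whether it is valid.
class Item {
  constructor(public id: number, public name: string) {}
  isValid(): boolean {
    return this.name.trim().length > 0;
  }
}

// View: turns the data into something displayable, nothing else.
const itemView = (item: Item): string => `<li>${item.name}</li>`;

// Controller: processes the incoming request and wires model and view together.
function createItemController(id: number, name: string): string {
  const item = new Item(id, name);
  if (!item.isValid()) {
    throw new Error('invalid item');
  }
  // ...persisting the model would happen here...
  return itemView(item);
}
```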

Frameworks like Zend for PHP or Spring MVC for Java made this possible at scale and are still amongst the most widely used tools today. They provide a template that allows large teams to collaborate on projects — but they also require a large team to do so.

Convention over Configuration
Eventually, a Ruby framework called “Ruby on Rails” introduced a revolutionary idea: What if we agree on a general way of doing things and only write extra code for the bits that are different? This proved to be an enormous time-saver and soon spread beyond the Ruby community. Frameworks like Grails, Play or CakePHP adopted the pattern for other languages, and backend development was soaring.
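
The idea translates to any language. As a rough illustration (not Rails, and with invented names), imagine a helper that derives a resource's standard routes from nothing but its name, so you only write code for the routes that deviate from the convention:

```ts
import * as express from 'express';

type Overrides = Partial<Record<'create' | 'read', express.RequestHandler>>;

// Convention: every resource gets the same routes, derived from its name.
function resource(app: express.Express, name: string, overrides: Overrides = {}) {
  app.post(`/${name}`, overrides.create ?? ((req, res) => res.status(201).send(`created a ${name}`)));
  app.get(`/${name}/:id`, overrides.read ?? ((req, res) => res.send(`${name} ${req.params.id}`)));
}

const app = express();
app.use(express.json());

resource(app, 'items');                      // pure convention, zero extra code
resource(app, 'orders', {                    // configuration only where we differ
  create: (req, res) => res.status(402).send('orders need a payment first'),
});

app.listen(3000);
```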

Auto-magical frontend wiring
Our backends became better structured, easier to write and quicker to develop — but there still was a massive trench separating them from the frontends that used their data.
A lot of technologies set out to change that. Microsoft's ASP.NET came with built-in controls that were already wired up to their respective server endpoints, and Sencha's ExtJS as well as certain Backbone Models just worked when used with a standards-conforming REST API.
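
To illustrate the Backbone half of that claim: a model only needs to be told where its resource lives, and the conventional REST verbs follow from there. The /items endpoint below is made up for the example.

```ts
import * as Backbone from 'backbone';

// urlRoot is the only wiring: Backbone maps the usual CRUD calls onto REST verbs.
const Item = Backbone.Model.extend({ urlRoot: '/items' });

const item = new Item({ id: 14 });
item.fetch();                            // GET    /items/14
item.save({ name: 'renamed' });          // PUT    /items/14
item.destroy();                          // DELETE /items/14

new Item({ name: 'brand new' }).save();  // POST   /items
```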

In theory this sounded brilliant; after all, a lot of effort is spent on managing communication and state between UI controls and server endpoints. In practice though, the idea never really caught on. UIs turned out to be too unique, workflows too specific for plug-and-play solutions.

Cutting out the middleman
Eventually, someone must have asked: “If all your backend does is relay information from your frontend to your database, why have a backend at all?” CouchDB and Elasticsearch addressed this by offering direct HTTP access to servers and clients alike. Built-in validation functions and cluster replication made this a viable choice for simpler use cases.
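
As a sketch of what “no backend in between” means in practice: the browser can write a document straight into CouchDB over HTTP. Host, database and document id are placeholders, and a real setup would add a validate_doc_update design document and proper authentication.

```ts
// A plain HTTP PUT, sent directly from the client to the database.
const doc = { type: 'item', name: 'stored without a backend' };

fetch('http://localhost:5984/items/item-14', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(doc),
})
  .then(res => res.json())
  .then(result => console.log('stored as revision', result.rev));
```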

Adding Realtime
Decades of HTTP got us accustomed to the notion that changes will only arrive once we ask for them. Frameworks like Meteor stepped up to change this by pushing realtime updates to clients as they happened. This allowed for the creation of richer UIs and more collaborative apps and made Meteor an enormous success.
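
A minimal sketch of that model, assuming a standard Meteor project (the collection and publication names are invented): the same collection is declared on client and server, the server publishes a cursor, and subscribed clients receive changes as they happen.

```ts
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Declared in shared code: a collection backed by MongoDB on the server
// and mirrored in the client's in-memory minimongo.
export const Items = new Mongo.Collection('items');

if (Meteor.isServer) {
  // The server decides which documents a client may see.
  Meteor.publish('items', function () {
    return Items.find();
  });
}

if (Meteor.isClient) {
  // Subscribing keeps the local copy in sync; updates are pushed, not polled.
  Meteor.subscribe('items');
}
```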

Data as Objects
“Hold on! Isn’t data just that — a bit of data?” someone at Parse must have asked. “Why are we focusing so much on how it’s stored, processed and transmitted instead of just using it?”

Consequently, Parse created a platform in which data was modeled as objects that could be arranged in collections. Parse came with a myriad of clients for different programming languages that allowed developers to create, read, update and delete these objects without having to worry about how they were transmitted or stored. Whilst keeping track of updates was still left to the user, Parse was clearly on to something. Soon Facebook took over and, sadly, discontinued the service earlier this year.
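
With the Parse JavaScript SDK that looked roughly like this (the Task class and its fields are invented, and the keys passed to Parse.initialize would of course be real ones):

```ts
import Parse from 'parse';

Parse.initialize('YOUR_APP_ID', 'YOUR_JS_KEY');

// Data is just an object in a collection; no schema, endpoint or SQL to write.
const Task = Parse.Object.extend('Task');

const task = new Task();
task.set('title', 'write less backend');
task.set('done', false);
task.save().then(saved => console.log('stored with id', saved.id));

// Reading it back works the same way, from any of the client SDKs.
const query = new Parse.Query(Task);
query.equalTo('done', false);
query.find().then(openTasks => console.log(openTasks.length, 'open tasks'));
```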

Universal data-sync
Clearly, Parse got something right with their simple objects that were shared across backend and frontend processes alike. And equally Meteor got something right by pushing updates to the client as they happened.

If you merge both concepts, you arrive at something called “data-sync”: data objects that endpoints can interact with and that instantly synchronize their state in realtime.

This is the domain of deepstream.io (you have been warned). It’s a fast and scalable server that can plug into almost any database or cache.
Clients can connect to it using lightweight libraries that manage security and connectivity and provide a simple API for creating and interacting with data-sync objects called “records”, as well as for sending and receiving events and issuing remote procedure calls.
Strong authentication and a granular permission model make sure that only the right user can access and manipulate the right data in the right way.
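
In the JavaScript client that looks roughly like the sketch below; the connection URL, credentials and names are placeholders, and the API shown is the 2016-era client.

```ts
const deepstream = require('deepstream.io-client-js');

// Connect and authenticate.
const client = deepstream('localhost:6020');
client.login({ username: 'alice', password: 'secret' });

// A record is a small JSON document that stays in sync across every connected client.
const profile = client.record.getRecord('profile/alice');

profile.set('status', 'online');                 // writes propagate in realtime...
profile.subscribe('status', (value: string) => { // ...and subscribers are notified
  console.log('status is now', value);           //    whenever anyone changes the data
});

// Events and remote procedure calls go through the same client.
client.event.emit('ping', { at: Date.now() });
client.rpc.make('add-two', { a: 3, b: 5 }, (err: any, result: number) => {
  console.log('3 + 5 =', result);
});
```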

What makes this so powerful is that deepstream almost entirely takes the backend out of the equation. It’s a standalone server that you install and leave running, just like you would with a database.
This means that development time that would usually be spent writing backend code, saving and loading data, syncing changes and maintaining consistency with the server can now be used to build the aspects of your app your users can actually see.

But what about custom backend logic?
Data-sync gets you a long way without writing a single line of backend code and is in itself enough for many applications. But sometimes you want to add your own backend logic as well.

For this, deepstream supports a concept called “providers”. Providers are just normal deepstream clients that live on the backend. They can manipulate records, but also respond to requests and send and receive events.
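
A provider might look something like this; it is nothing but another client, started on the server side. The “add-two” procedure and the credentials are made up for the example.

```ts
const deepstream = require('deepstream.io-client-js');

// Just another deepstream client, except it runs on the backend.
const client = deepstream('localhost:6020');
client.login({ username: 'math-provider', password: 'secret' });

// Answer remote procedure calls made by frontend clients.
client.rpc.provide('add-two', (data: { a: number; b: number }, response: any) => {
  response.send(data.a + data.b);
});

// It can also react to events or manipulate records like any other client.
client.event.subscribe('ping', (payload: any) => {
  console.log('someone pinged at', payload.at);
});
```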

To make things more efficient, deepstream supports a concept called “listening”. Providers can “listen” to what clients are interested in and only provide updates for records and events that users are actually subscribed to.
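
Sketched below, assuming the 2016-era listen callback signature of (match, isSubscribed); the weather records and the hard-coded temperature are stand-ins for real work such as calling an external API.

```ts
const deepstream = require('deepstream.io-client-js');

const client = deepstream('localhost:6020');
client.login({ username: 'weather-provider', password: 'secret' });

// Only do the work while at least one client is actually subscribed.
client.record.listen('weather/.*', (match: string, isSubscribed: boolean) => {
  if (isSubscribed) {
    // e.g. match === 'weather/berlin': start providing data for it.
    const record = client.record.getRecord(match);
    record.set('temperature', 21);   // stand-in for a real lookup
  } else {
    // Nobody cares anymore: stop updating and release the record.
    client.record.getRecord(match).discard();
  }
});
```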

So has the awkward journey towards less backend come to an end?
Hopefully not; how boring would that be? I can't wait to see what the next generation of less backend will look like. Maybe it will be a decentralized data and logic store somewhere between IPFS and Ethereum; maybe it will be a distributed, serverless cloud hybrid. Until then, head over to deepstream.io and give it a try (don't worry, it's free and open source).
