The 8 Fallacies of Distributed Computing

I was doing some reading recently and stumbled across an interesting paper that I think has a lot of merit, especially these days. I’m not sure whether the principles and fallacies it expounds are well known to today’s younger devs, designers, and architects, but I’ll admit I’d never seen it. That’s not to say I don’t recognize every one of these fallacies, either in their exact wording or some other form, but I thought a few words about it were in order.

You can read the paper here. [pdf]

Back in 1994, a Sun fellow named Peter Deutsch came up with 7 assumptions that architects and developers can sometimes make, which can cause a whole lot of trouble in the long run. In 1997, James Gosling, creator of Java, added an 8th. Collectively these have become known as “The 8 fallacies of distributed computing”.

The network is reliable
I think this is pretty much a no-brainer no matter what time period you’re talking about. On the hardware side of things, I’d like to think that nowadays we have a bit of a better grasp on how to run cabling, house router and switch equipment, and overall just a better sense of where and how to place our hardware. But let’s face it: cables get tripped on, power goes out, and router and switch rooms get too hot and cause your equipment to overheat. Not to mention that these days, no matter what application or software you are providing, you will almost always be relying in one way or another on someone else’s hardware.

Software-wise, you’d need every layer of the network’s messaging stack to be bulletproof, and we all know that’s not really possible. I mean hey…it’s software after all.

Latency is zero
I think we can all agree that no architect or system designer in their right mind would ever assume that latency is zero…right? As the paper points out, it’s pretty interesting that over the years bandwidth has improved at a much faster rate than latency. Latency, by the way, for those not exactly sure, is the amount of time it takes data to travel from one point in the network to another. In today’s world of instant gratification, latency can be one of the single most important factors in your design: if your application isn’t snappy and doesn’t deliver lightning quick, you’ve just lost a customer.
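One habit that follows from this: measure latency instead of assuming it away. A tiny Python sketch (the `slow_lookup` function is a stand-in I invented for a remote call with a ~50 ms round trip):

```python
import time

def timed_call(operation):
    """Run a (network) operation and report its latency in milliseconds."""
    start = time.perf_counter()
    result = operation()
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

def slow_lookup():
    time.sleep(0.05)  # stand-in for a ~50 ms network round trip
    return {"user": "alice"}

result, latency_ms = timed_call(slow_lookup)
print(f"lookup took {latency_ms:.1f} ms")
```

A design that assumes latency is zero happily fires hundreds of these lookups in a loop; once you see each one costs 50 ms, batching them into a single request becomes the obvious move.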

Bandwidth is infinite
Even though bandwidth seems to be getting better and more plentiful by the year, we still must be wary of the amount of data that we shove onto our networks. I liken this to the ‘highway problem’ you see in most major cities and urban areas around the country and the world. No matter how wide a road you build to alleviate traffic, there will always be more cars to fill it up. In fact, some would contend that wider roads, and in our case more network bandwidth, actually invite more traffic. Think about how the media we consume on the internet has changed over the decades. It started with simple text, then text and pictures, then text, pictures, and video, then…you get the point. We are now a society that consumes high-quality video, music, and games on demand, with no tolerance for buffering. Could we become so media crazy that bandwidth becomes a limiting factor again? Maybe not, but it’s certainly not infinite.
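Treating bandwidth as finite mostly means being deliberate about payload size. A quick Python sketch of the cheapest win, compressing a repetitive JSON payload before it hits the wire (the payload here is fabricated for illustration):

```python
import gzip
import json

# A chunky, repetitive JSON payload, like one you might ship repeatedly.
payload = json.dumps(
    [{"id": i, "status": "active", "note": "x" * 50} for i in range(200)]
).encode("utf-8")

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"{len(payload)} bytes -> {len(compressed)} bytes ({ratio:.0%} of original)")
```

Compression costs CPU on both ends, of course, which is exactly the kind of trade-off the transport-cost fallacy below is about.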

The network is secure
Ok, I don’t think much needs to be said here. In fact if your security engineer or architect actually assumes this at any time out loud, they should probably be fired…and then committed.

Topology doesn’t change
I would almost categorize this one under #1. The fact is, with today’s wireless culture, network topologies are changing by the hour. On top of that, any organization which has implemented and deployed your carefully thought out design will eventually do something to change it. Not because they are evil or malicious, but because things come up: servers break and get replaced by different servers. What this amounts to is that you should expect the topology to change and plan accordingly.

There is one administrator
I’m not sure these days who would assume this, and of course it depends big time on the organization you are architecting for. If you’re designing something for a small office, you would probably already know or be told who the admin is and whether they have a team. In either case, it’s up to you to make sure you know who will be administering your network and systems and educate them accordingly. Don’t ever just assume you know who the admin is, or even what their responsibilities will be. Make sure you take charge of this and make it work the best way you want it to.

Transport cost is zero
As the author of the paper states, this can be taken to mean a few different things. On one hand, you can count cost as in ‘computing resources cost’. Taken this way, we hark back to #2 and #3 to some extent: every action taken on the network incurs some cost in compute and network resources. Marshalling your objects into bytes and parsing them back out on the other side isn’t free.
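That serialization cost is easy to see for yourself. A small Python sketch comparing two codecs from the standard library (the record and round-trip counts are arbitrary, just enough to make the cost visible):

```python
import json
import pickle
import time

record = {"id": 12345, "name": "widget", "tags": ["a", "b", "c"], "price": 9.99}

def cost_of(serialize, deserialize, rounds=10_000):
    """Rough wall-clock cost of pushing one record through a codec repeatedly."""
    start = time.perf_counter()
    for _ in range(rounds):
        deserialize(serialize(record))
    return time.perf_counter() - start

json_cost = cost_of(json.dumps, json.loads)
pickle_cost = cost_of(pickle.dumps, pickle.loads)
print(f"json: {json_cost:.3f}s, pickle: {pickle_cost:.3f}s for 10k round trips")
```

Which codec wins varies by workload; the point is only that the number is never zero, and it multiplies by every message you send.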

If you’re talking about cost as in cash $$, then you should always be mindful of the budget you have to work with. Unless you get a blank check 😃

The network is homogeneous
In today’s world, this thinking would be foolish. As the author points out about his home network, you’d be hard pressed today to find a completely homogeneous network anywhere, in either the business or private sector. In any case, you should use open technologies and resources and shy away from proprietary ones. I think that a lot of companies are still dealing with the fallout from some of these poor decisions back when proprietary tech was more prevalent. I know my own company is. I would also add that open technologies tend to stick around. Think JSON and XML. If or when a better technology comes along, there’s usually an easier transition, because the developers who created that new tech are well aware that everyone is using what they are trying to replace!

So there you have it, the 8 fallacies of distributed computing with my own 2 cents. Definitely worth checking out that paper and reading up a little more in depth on these if you ever plan on taking on more architectural duties or even becoming a full-blown architect.

Thanks for reading.

Originally published at