A gentle introduction to distributed computing
Distributed computing means getting work done by multiple computers playing nice with one another. This is great because work that would otherwise require one *really* powerful computer can now be done with several commodity computers.
The benefits of distributing computational work over several computers are numerous.
For example, suppose I am training a machine learning model that will take 3 years to train. I can either go with one GIANT computer or ten smaller computers.
If I choose the one giant computer, I may get 2.5 years into the training only to find that the hardware has failed. Since warranties last only 12 months here in Australia, I would be left without a machine 😨.
If I went with ten smaller computers and one fails after 2.5 years, I have lost only 10% of my computing power. So the reliability of such systems is a major plus.
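The reliability argument above can be made concrete with a little probability. The sketch below uses a made-up per-machine failure rate (the 30% figure is my own assumption for illustration, not a real benchmark) to compare losing everything on one big machine versus losing only a fraction of a ten-machine cluster.

```python
# Toy reliability comparison (assumed numbers, for illustration only).

p_fail = 0.3   # assumed chance any single machine dies during the job
n = 10         # number of smaller machines in the cluster

# One giant machine: if it fails, the entire job is lost.
p_job_lost_single = p_fail

# Ten smaller machines: each failure costs only 1/n of the capacity,
# so on average we expect to lose p_fail of the cluster (30% here).
expected_capacity_lost = p_fail

# Probability that at least one of the ten machines survives
# (i.e. we retain *some* computing power).
p_some_capacity_left = 1 - p_fail ** n

print(round(p_some_capacity_left, 6))
```

With these toy numbers, total loss on the cluster requires all ten machines to fail at once, which is far less likely than a single machine failing on its own.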
Another benefit is cost ($$$): several commodity machines are often cheaper than one machine of equivalent power.
However, all these benefits come at a cost:
1. Increased complexity
2. Increased admin (basically 1.)
3. Harder to write and debug software (basically 1., plus concurrency)
4. A bunch of other issues that involve 1. (basically 1.)
Much like life, computing involves trade-offs. Distributed computing offers us trade-offs that are ideal for certain classes of problems. Also, as distributed computing has come out of its infancy, its level of abstraction has become much more sophisticated. This has opened up more cans of worms than it has consumed (and recycled) previously open cans. Thus I hope to write more on this topic in the future.