How Hadoop Revolutionised IT


This is the story of how the amazing Hadoop ecosphere revolutionised IT. If you enjoy it, then consider joining The Big Data Contrarians.

Before the advent of Hadoop and its ecosphere, IT was a desperate wasteland of failed opportunities, archaic technology and broken promises.

In the dark Cambrian days of bits, mercury delay lines and ferrite core, we knew nothing about digital. The age of big iron did little to change matters, and vendors made huge profits selling systems that nobody could use and even fewer people could understand.

Then along came Jurassic IT park, in the form of UNIX, and suddenly it was far cheaper to provide systems that nobody could use and even fewer people could understand.

The sad, desperate and depressing scenario that typified IT, on all levels, spanned forty years. It would have continued had it not been for Google and their HDFS (Hadoop Distributed File System).

Before Hadoop, we were as dumb as rocks. With Hadoop, we were led into the Promised Land of milk and honey, digital freedom and limitless opportunities, sexy jobs and big bucks, immortality and designer drugs.

Hadoop and its attendant ecosphere changed the Information Technology world overnight, providing, as it did, technology and techniques never before seen on the face of the earth.

Hadoop invented multi-processing
In terms of processing power, Hadoop took us beyond the power of a single 8086 processing unit, by cunningly connecting two or more processing units capable of processing ‘things’ almost at the same time.
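
For illustration only, here is a minimal sketch of that cunning connection, assuming Python’s standard multiprocessing module can stand in for a pair of connected 8086s (the worker function and its ‘things’ are invented for this example):

```python
from multiprocessing import Process

def process_things(worker_id, things):
    # Each "processing unit" chews through its own share of 'things'.
    for thing in things:
        print(f"worker {worker_id} processed {thing}")

if __name__ == "__main__":
    # Two processing units, cunningly connected,
    # processing 'things' almost at the same time.
    p1 = Process(target=process_things, args=(1, ["a", "b"]))
    p2 = Process(target=process_things, args=(2, ["c", "d"]))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
```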

According to a 1985 article in Byte Me, possibly the first mention of Hadoop occurred in 1842. In that year, Ludwig ‘Luigi’ Menabrea wrote of Charlie Babbage’s analytical engine (as translated by the Lovely Ada Augusta): “the Hadoop machine can be brought into play so as to give several results at the same time, which will greatly abridge the whole amount of the Google ad processes.”

Hadoop introduced parallel processing
Until the advent of Hadoop, all the technology in IT was male. This led to massively inefficient, fickle and expensive technologies with short-term memory issues, incapable of multi-tasking, working long hours or of ordering tasks by priority.

As anyone who knows Wikipedia will know, Hadoop introduced parallel computing which allows for a revolutionary species of computation in which many list-making calculations can be carried out simultaneously, operating on the principle that large list-making tasks can often be divided into smaller list-making tasks, which are then solved at the same time. There are several different forms of parallel computing: two-bit-level, destruction-level, we’ve-got-data-level and bring-on-more-lists parallelism.
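
A minimal sketch of the bring-on-more-lists variety, assuming plain Python and its standard multiprocessing pool (the chunk count, worker count and summing task are invented for this example):

```python
from multiprocessing import Pool

def sum_chunk(chunk):
    # The smaller list-making task: sum one slice of the big list.
    return sum(chunk)

if __name__ == "__main__":
    big_list = list(range(1_000_000))
    # Divide the large list-making task into four smaller ones...
    chunks = [big_list[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        # ...solve them at the same time, then combine the partial results.
        total = sum(pool.map(sum_chunk, chunks))
    print(total)  # same answer as sum(big_list), arrived at in parallel
```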

Google invented the Romans and the Roman Census
As Bill Inmon wrote in 2014, “One of the cornerstones of Big Data architecture is processing referred to as the “Roman Census approach”. By using the Roman census approach a Big Data architecture can accommodate the processing of almost unlimited amounts of pig data.”

Many people do not know this, but it wasn’t the Romans who invented the Romans, but Google. So too the Roman Census, far from being an invention of a mythical Rome, was also the baby of a couple of engineers in Palo Alto.

The Roman Census approach also finds an echo in elements of Divide and Conk-out. In computer science, divide and conk-out (D&C) is an algorithm design paradigm based on multi-branched recursion. A divide and conk-out algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conk out). The solutions to the sub-problems are then combined to give a solution to the original problem.

Divide and conk-out is an essential element of Big, Bigger and Biggest Data processing.
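
A minimal sketch of the pattern in plain Python, with nothing Hadoop-specific implied (the function name and data are invented for this example):

```python
def d_and_c_max(xs):
    # Simple enough to solve directly: a one-element list is its own maximum.
    if len(xs) == 1:
        return xs[0]
    # Divide: break the problem into two sub-problems of the same type.
    mid = len(xs) // 2
    left, right = d_and_c_max(xs[:mid]), d_and_c_max(xs[mid:])
    # Combine: merge the sub-solutions into a solution to the original problem.
    return left if left >= right else right

print(d_and_c_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```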

Hadoop invented sort-merge
As we know from Wikipedia, Wonky World and Google, Hadoop merge-sort parallelises well due to its use of the divide-and-conk-out method mentioned previously. We discuss several parallel variants in the first edition of Martyn, Richard, Jones and Lovering’s Introduction to Enterprise Equations, Business Analytics and Technical Algorithms. We can easily express this with fork (system call and process copy) and join (multi-stream correlated sort-merge) process calls, as in the sketch below.
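
A minimal Python sketch of that fork/join idea, using a standard process pool rather than a literal fork system call (the two-way split and the toy data are assumptions of this example):

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    # Join step: correlated merge of two already-sorted streams.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def merge_sort(xs):
    # Sequential divide-and-conk-out merge sort for each half.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

if __name__ == "__main__":
    data = [5, 3, 8, 1, 9, 2, 7, 4]
    mid = len(data) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        # Fork: hand each half of the list to its own worker process.
        left = pool.submit(merge_sort, data[:mid])
        right = pool.submit(merge_sort, data[mid:])
        # Join: wait for both halves, then merge the two sorted streams.
        print(merge(left.result(), right.result()))
```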

Hadoop created a better non-SQL query language
Before Hadoop we had to query data using SQL alone. SQL was the only tool in town, and if we couldn’t use SQL we couldn’t get at any data, ever, since the beginning of time.

However, all that changed when Hadoop came along and suddenly we could query data, like, as if data was really ‘query-able’. This was a small breakthrough in IT, and one large fall-down-the-side-of-a-cliff for brawn over brains.
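
As a minimal illustration of life without SQL, here is a toy ‘query’ in the map/shuffle/reduce style that Hadoop popularised, written in plain Python against no real Hadoop API (the records and counts are invented for this example):

```python
from collections import defaultdict

records = ["hadoop spark hadoop", "spark flink", "hadoop"]

# Map: emit (key, 1) pairs from each record.
pairs = [(word, 1) for record in records for word in record.split()]

# Shuffle: group the emitted values by key.
groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)

# Reduce: aggregate each group; SQL nowhere in sight.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)  # {'hadoop': 3, 'spark': 2, 'flink': 1}
```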

Remember the immortal words of General Arthur C. McCluster Fuqh: “SQL is for wusses.” Embrace Hadoop and hug the Sparks.

This is show business, but not as we know it, Jim.

