Cloud Computing History

Paweł Zajączkowski
PGS Software
Jul 24, 2019

It’s safe to say that Cloud technology has gained overwhelming popularity in recent years. There are several mature and well-established Cloud providers, alongside numerous tech companies desperately trying to keep up in the race. Countless businesses, from small garage-grown start-ups to gargantuan multinational corporations, are either already in the Cloud, migrating to it, or preparing to move.

Meanwhile, the word Cloud itself has become so hot that everyone in the industry is trying to carve out their piece of the marketing cake and somehow work it into descriptions of their IT services. To better understand where the Cloud is going, it’s important to be aware of its origins and evolution. History likes to repeat itself, and that common saying is very true of the IT industry.

In this article, we will explore the evolution of the Cloud, from its beginnings to the present day and beyond. We will look at the many factors and concepts that have played a crucial role in disrupting the industry, including time-sharing mainframes, the birth of the Internet, virtual machines, containers, Software as a Service, Platform as a Service, Infrastructure as a Service, the race of the giants, and more.

Shared Time Mainframe

We can say that the Cloud is just someone else’s computer that we use remotely. Following this idea, our journey into the origins of Cloud computing starts in the 1950s, with the first concepts of time-sharing.

As computers were extremely expensive back then, it was unfeasible to buy one for each individual user in an organisation. Instead, several people would connect to a single shared machine through so-called dumb terminals and use it at the same time, so that precious processor cycles were not wasted. The idea was first described by John Backus during the 1954 summer session at MIT, and later developed by Bob Bemer in his 1957 article in Automatic Control Magazine and by W. F. Bauer in a scientific paper in 1958.

However, the first actual implementation, named CTSS (short for Compatible Time-Sharing System), was started in 1959 by John McCarthy at MIT on modified IBM 704 and 709 computers, and was demonstrated in 1961. In the same year, Donald Bitzer demonstrated the PLATO II system.

Yet the first commercially successful product was the Dartmouth Time Sharing System, released in 1964. At the same time, the idea of treating computing power as a commodity, similar to electricity and water, led to the appearance of computer bureaus, where clients could buy as much processing power as they needed to perform their calculations. This model functioned up until the 1980s, when the advent of cheap personal computers rendered it obsolete.

The Global Network

A second important factor in the emergence of modern Cloud computing was connectivity. It is fundamental to the technology: users should be able to access Cloud services from any place in the world.

The first mainframes and their users were mostly located in the same building or site. While local networks existed by the end of the 1950s, it was in 1960 that J.C.R. Licklider proposed a global network to connect existing computing centres. In 1962, Licklider was hired by ARPA (the agency later renamed DARPA) as the director of a new office, with a mission to connect the United States Department of Defence mainframes at the Pentagon, the Cheyenne Mountain complex and Strategic Air Command.

ARPANET, a much larger project based on packet switching, was started in 1966 and became operational in 1969, starting with 4 nodes and expanding to over 200 by 1981. It was a core part of the network that evolved into the Internet during the early 1990s. As the Internet became widespread and accessible, the number of web applications skyrocketed, creating a demand for servers and data centres to host them, which in turn led back to the idea of selling computing power as a commodity. History has come full circle.

Virtual Machines

The third important factor in the history of the Cloud is virtualisation. Users should be able to have virtual computers that are independent of the underlying hardware, that are easy to move between on-premises infrastructure and the Cloud (as well as between different Clouds), that have a configurable amount of processing power and memory, and that can be started or shut down at any point.

Unsurprisingly, this concept is far from new. Full virtualisation, where the virtual machine simulates enough hardware to allow an unmodified guest operating system to run in isolation, was first introduced experimentally in 1966 with the IBM CP-40 and CP-67 operating systems.

Hardware-assisted virtualisation, where the machine provides architectural features that enable efficient virtualisation, was easier to bring to market; it was first introduced in the IBM System/370 in 1972, which ran under the revolutionary VM/370 operating system. The modern x86 virtualisation features were added to Intel processors in 2005 (VT-x) and to AMD processors in 2006 (AMD-V).
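As a small, hypothetical aside, on a modern Linux machine you can check whether these extensions are available by looking at the CPU flags the kernel exposes; the short Python sketch below reads /proc/cpuinfo, where Intel VT-x appears as the vmx flag and AMD-V as svm.

```python
# Check for hardware virtualisation extensions on Linux by inspecting
# the CPU flags exposed in /proc/cpuinfo (vmx = Intel VT-x, svm = AMD-V).
flags = set()
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

print("Intel VT-x available:", "vmx" in flags)
print("AMD-V available:", "svm" in flags)
```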

An important extension was operating-system-level virtualisation, which led to the emergence of lightweight containers, most notably Docker, which appeared in 2013 and empowered the microservice approach to software development. Many of these later technologies are still used today, but their origins lie much further back.
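To give a rough feel for how lightweight containers are compared to full virtual machines, here is a minimal, hypothetical sketch that starts an isolated Alpine Linux container, runs a single command and discards it again, using Python and the Docker SDK (docker-py); it assumes a local Docker daemon is running and that the image can be pulled.

```python
# Minimal sketch: running a throwaway command in an isolated container.
# Assumes a running Docker daemon and the Docker SDK for Python
# (pip install docker). Starting a container takes a fraction of a second,
# compared to booting a full virtual machine.
import docker

client = docker.from_env()

# Run one command in an Alpine Linux container, then remove the container
# automatically once it exits; the command's output is returned as bytes.
output = client.containers.run(
    "alpine:3.19",
    ["echo", "hello from an isolated container"],
    remove=True,
)
print(output.decode().strip())
```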

The ‘Almost’ Cloud

While we know a lot about the history of Cloud computing, it’s difficult to pinpoint when, and by whom, the term “Cloud” was first coined. The term has been used since the middle of the 1990s to denote something on the Internet, or “outside”, when drawing network diagrams. The spread of the Internet led to myriads of web applications that let users perform certain tasks, such as creating and manipulating documents, with everything accessible from a web browser at the click of a button.

To differentiate these from desktop applications, which had to be distributed and installed on the user’s computer, the term Software as a Service came into common use. Salesforce was one key player in this approach in the late 1990s, and many more followed. Meanwhile, the Internet boom had two important implications. First, the number of start-ups, developers and applications was growing quickly, and there was a need to simplify and support the process of hosting new web applications. Developers wanted to write code rather than worry about servers and deployment. The idea of Platform as a Service was born, with the first being Zimki, launched in 2006. Google followed in 2008 with its App Engine, the first service of what would later become today’s Google Cloud Platform.

The second implication was that some Internet-based companies became very large and had a tremendous amount of computing power at their disposal. They needed this power to handle traffic peaks, such as a Black Friday sale for eCommerce, but much of it sat idle during less demanding periods. The idea of renting this spare capacity to third parties in a well-controlled and flexible manner emerged as a way to solve the issue, and it led to Infrastructure as a Service.

The True Cloud

Amazon Web Services was the first player in the Infrastructure as a Service landscape, and arguably where Cloud computing as we understand it today properly began. The offering started in 2004 with SQS queues, which were nice, but the real revolution came in 2006 with the release of Elastic Compute Cloud. Also known as EC2, it lets users rent virtual machines, billed per second of usage, and use them instead of buying classic servers for a traditional on-premises data centre. Microsoft launched a similar service, Azure Virtual Machines, in 2010, and Google followed with Google Compute Engine in 2012.
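To show how different this rental model is from ordering physical hardware, here is a minimal, hypothetical sketch of acquiring and releasing a virtual machine programmatically with Python and the boto3 AWS SDK; the AMI ID is a placeholder, the instance type and region are arbitrary choices, and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch: renting a virtual machine from EC2 and releasing it again.
# Assumes configured AWS credentials and boto3 (pip install boto3).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Request a single small virtual machine from a placeholder machine image.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder, not a real AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")

# When the capacity is no longer needed, terminate the instance;
# billing stops once it shuts down.
ec2.terminate_instances(InstanceIds=[instance_id])
```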

Other companies soon realised the potential of Cloud computing and joined the race but, currently, Amazon, Microsoft and Google are far ahead. Only three others, Oracle, IBM and Alibaba, remain in the field when it comes to market presence or the range of services on offer. Virtual machines are the core of modern Cloud platforms, but there are numerous other offerings that aid the development of systems in the Cloud: networks, security mechanisms, functions, application and container engines, various forms of storage (blobs, disks, network file systems, relational and non-relational databases), queues, API gateways and Big Data services. And that is without even mentioning the solutions built on top of these, such as Machine Learning.

Going Beyond

Many of today’s Cloud offerings are the result of years of experience in a given area and were successfully used as internal tools to power the world’s largest and most sophisticated projects before being released for everyone to take advantage of.

An interesting example of this is Google Spanner, the first and (currently) only globally distributed relational database with guaranteed strong consistency, relying on GPS and atomic clocks under the hood. It is used to power the entire Google advertising system, a workload for which classic relational databases were simply not enough. Advanced automatic security risk audits of Cloud deployments are another good example.

The general trend is to automate as many aspects of software development and maintenance as possible, reducing the amount of code and environment configuration required, along with the opportunities for mistakes. Technology keeps circling between centralisation and decentralisation paradigms wherever this brings benefits, but more and more of it happens behind the scenes.

This allows software developers to focus on core business value, which can now be delivered to stakeholders faster. New technologies evolve rapidly, and the Cloud is often a gateway to using them without spending millions on research and development. After all, someone has already done all that work and wants to share the end results, so why miss out on the opportunity?

Business Perspective

Cloud computing is a broad term and, although it has gained widespread popularity in recent years, its roots date back six decades. Some trends come and go, while others resurface, as with many concepts in computer science and the wider IT world. Cloud providers and their offerings evolve rapidly, and new services providing more efficient ways of creating and hosting software emerge daily. Businesses that are able to keep up with these changes stand to gain far more than those who sit idly by.

Originally published at https://www.pgs-soft.com.

Paweł Zajączkowski
PGS Software

Java Developer at PGS Software, Blogger at https://howtotrainyourjava.com/, Speaker, Aikidoist, Gamer, Lego Fan, Dreamer.