Beyond Moore’s Law

The Intel 80486 DX2 microprocessor (circa 1992) contained approximately 1.2 million transistors

The following essay is based on extracts taken from my book, The STREAM TONE: The Future of Personal Computing?

Over recent years, personal computing devices have become a lot more powerful and a lot smaller. Some have even become battery powered and wirelessly connected. These advancements have, collectively, allowed us to leave our homes and offices and compute on the move, from almost anywhere and at almost any time. These are bold steps forward, to be sure, but the capabilities that can be built directly into our individual personal computing devices, particularly the data processing capabilities built into highly portable devices, will always be finite, and the users of such devices will always push those capabilities to the very limit, no matter how much those capabilities grow over time. What happens when we reach such a limit and there is no more? We will obviously want more. We will probably need more. But there will be no more. We will have hit a technological brick wall. In a world that has grown accustomed to continuous technological growth, the consequences of hitting such a wall will be that progress is slowed, or even halted, in a whole range of different fields that have become highly dependent on the ever-increasing capabilities of our personal computing devices. (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)

What could possibly cause mankind to hit such a technological brick wall? Perhaps an event like the ending of Moore’s Law:

In 1965 Gordon E. Moore, co-founder of Intel Corporation, had a paper published in the 19th April 1965 edition of “Electronics”, titled “Cramming More Components onto Integrated Circuits”. The paper described the trend in the number of components, or transistors, per integrated circuit for minimum cost that had occurred between 1959 and 1964. Moore extrapolated this trend to 1975, predicting that as many as 65,000 transistors per integrated circuit for minimum cost would then be possible. The trend that Moore observed became an industry standard known as Moore’s law, which stated that the number of transistors on integrated circuits doubles approximately every two years. In actuality, Moore’s law is not really a law in the classic sense, but an observation or conjecture. So far, since its inception, Moore’s law has roughly held true, but as the size of integrated circuit components gets ever smaller it may not be able to hold true for much longer. As the components get smaller they become harder and more expensive to fabricate, and more difficult to operate with the level of reliability required by digital logic. (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)
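As a quick arithmetic check on that 65,000 figure, here is a minimal sketch (in Python; the starting count of roughly 64 components in 1965 and the doubling periods are approximations for illustration, not figures taken from the book) of how a fixed doubling period compounds over a decade:

```python
# Rough illustration of Moore's-law-style doubling (all figures are approximate).
def projected_count(start_count, start_year, target_year, doubling_period_years):
    """Compound a component count forward assuming a fixed doubling period."""
    elapsed = target_year - start_year
    return start_count * 2 ** (elapsed / doubling_period_years)

# Moore's original 1965 extrapolation assumed roughly annual doubling from ~64 components:
print(round(projected_count(64, 1965, 1975, 1)))   # ~65,536, i.e. the ~65,000 figure
# The popular "every two years" form of the law grows more slowly:
print(round(projected_count(64, 1965, 1975, 2)))   # ~2,048
```

The roughly annual doubling is what yields the ~65,000 prediction for 1975; the two-year period quoted above is the later and now more familiar form of the law.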

Given that integrated circuits power many modern digital electronic devices, the capabilities of such devices are all directly linked to Moore’s law. So, as the number of transistors on an integrated circuit increases, so too do the energy efficiency, image sensor resolution, memory capacity, and processing speed of those devices. Over the years, these ever-increasing capabilities, expressed in terms of faster processing speeds, greater energy efficiency, higher-resolution image sensors, and more memory, have been used very effectively to market such devices to both new and repeat customers. In fact, marketing based on such increased capabilities has been a notable driving force for technological change within society, one that has not only spurred significant progress but also the premature and wasteful replacement of many modern technologies that were still useful, by creating the perception that such technologies had become obsolete due to the availability of newer and better technologies. Consumer demand for ever-greater capabilities is now unrelenting, and it is very likely that, one day in the relatively near future, it will no longer be possible to cost-effectively add any more components onto an integrated circuit. At that point, the capabilities of many digital electronic devices, and the technology fields that they underpin, will start to plateau [i.e., we will have hit the technological brick wall that I mentioned earlier]. In fact, this has already started to happen to some degree, with the result that multiple integrated circuits are now being used in parallel as a way to cost-effectively increase energy efficiency, memory capacity, and processing speeds. The use of multiple integrated circuits for image sensing has not yet entered the mainstream, although it probably will not be very long before it does.[1] (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)
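Using multiple integrated circuits in parallel only pays off if the workload can actually be divided between them. As a hedged illustration (plain Python with its standard multiprocessing module; the workload and chunk count are made up for the example), this is the general shape of spreading one data-processing job across several processors and then aggregating the partial results:

```python
# Illustrative only: aggregate processing capacity by splitting work across cores.
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for some per-item data processing (e.g., transforming records)."""
    return sum(value * value for value in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Divide the data into four interleaved chunks and process them in parallel.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_chunk, chunks)
    print(sum(partial_results))  # Same result as a single-processor run.
```

The answer is the same as it would be on a single processor; the parallelism only changes how quickly, and at what cost, it is obtained.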

Today, we have a predominantly download-oriented approach to personal computing (i.e., what we do with our personal computing devices), in which we download large quantities of data, such as ebooks, movies, music, operating systems, pictures, software applications, and web pages, that will then be processed (presented/played/run) on our local personal computing devices (e.g., desktops, laptops, smart-phones, tablets). This has been practical for two reasons: first, the ever-increasing communications bandwidths supported by the Internet, and the last mile in particular, have managed to keep pace with the phenomenal growth in downloaded data that has occurred over the last few years; and second, thanks to Moore’s Law, our personal computing devices have become highly capable data processors that are able to efficiently and effectively process all that downloaded data. It has also been wholly necessary, because in the past our telecommunications infrastructures were just not sufficient, in terms of affordability, availability, bandwidth, latency, and reliability, to allow us to architect a large-scale personal computing approach that was significantly different to this. However, the telecommunications technologies that underpin our increasingly digitised world are rapidly nearing the point where a far more efficient and effective approach to personal computing can finally be adopted: a streaming-oriented approach.

Next-generation telecommunications, starting with true Fifth-Generation Mobile Communications (5G), are expected to be highly affordable, highly reliable, and ubiquitously available, with high bandwidth and low latency: unprecedented capabilities that will allow personal computing to be designed and operated very differently in the future. Next-generation telecommunications will allow all of our required personal computing functionality to be streamed from remotely-located, cloud computing-based data centres, using real-time communications protocols, over the Internet.
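To make "low latency" concrete, the following back-of-envelope sketch (Python; every figure is an illustrative assumption rather than a measurement or a figure from the book) adds up the stages that a streamed, remotely rendered user interface has to fit into a small perceptual budget:

```python
# Back-of-envelope latency budget for a streamed (remotely rendered) user interface.
# All figures below are illustrative assumptions, not measurements or book figures.
budget_ms = {
    "input capture and encode on the device": 5,
    "network round trip to the data centre": 20,   # assumes a reasonably nearby data centre
    "remote processing and rendering": 20,
    "video/audio encode at the server": 10,
    "decode and display on the device": 15,
}

total = sum(budget_ms.values())
print(f"Estimated input-to-display latency: {total} ms")
# A commonly quoted rule of thumb is that interaction starts to feel sluggish
# somewhere beyond roughly 100 ms, so this hypothetical budget leaves some headroom.
```

The individual numbers will vary enormously with distance to the data centre, codec choices, and radio conditions; the point is simply that every stage of the round trip is paid for out of a roughly fixed budget, which is why low-latency, high-reliability links matter so much to a streaming-oriented approach.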

My book introduced a new technology ecosystem, known as the Stream Tone, which is designed to enable just such an approach. The Stream Tone…

…uses a Stream Tone Access Device (STAD) to access Stream Tone Transfer Protocol (STTP)-based content and services that are provided by a remotely-located Stream Tone Service Infrastructure (STSI) and delivered using a Stream Tone Telecommunications Infrastructure (STTI). (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)

The STAD is a type of thin client, a STSI uses cloud computing, a STTI uses next-generation communications, and the STTP is a new real-time communications protocol.

The STAD contains an integrated circuit, or microprocessor, that is just sufficient to support its audio-visual presentation responsibilities and no more. All the hard work, the intensive data processing that is needed to create such content and services, is provided by a STSI. The cloud computing-based data centres used by a STSI are not as constrained by electrical power or physical space requirements as a modern portable personal computing device. For example, a modern smart-phone must be small enough to fit into a trouser pocket or handbag and be able to run for several days on its internal lithium-ion battery. In contrast, a typical data centre can be millions of times larger than a typical portable personal computing device and powered by a direct connection to the national electrical-power grid. Consequently, the types of integrated circuit used within a data centre can provide much more data processing than the integrated circuits used within portable personal computing devices. The data processing capabilities of integrated circuits used within the data centre can also be efficiently and effectively aggregated. When a STAD is used to access a STTP-based service, the only thing that is likely to be of concern to a typical user is whether or not that service is actually able to provide the functionality and performance expected of that service. The technical capabilities, in terms of energy efficiency, memory capacity, processing speed, or any other relevant attribute, of the data processing hardware that was used to provide that service are no longer likely to be of any concern. So, whether single or multiple integrated circuits were used, or how many components were on those integrated circuits, becomes wholly irrelevant. (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)
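The division of labour described above, in which a minimally capable device merely presents results that a remote infrastructure has computed, can be sketched in a few lines. The toy example below (Python, plain TCP sockets, with made-up host and port values) is emphatically not STTP or any part of the Stream Tone; it simply illustrates the split in which the device-side role only displays what comes back, while the server-side role does the heavy processing:

```python
# Hypothetical illustration only: NOT the STTP protocol, just a toy client/server
# split showing the division of labour described in the text. The server side
# stands in for the heavy processing of a STSI; the client side, like a STAD,
# only sends a request and presents whatever comes back.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # assumed values for this local demo

def heavy_processing(request: bytes) -> bytes:
    """Stand-in for intensive remote data processing."""
    n = int(request.decode())
    return str(sum(i * i for i in range(n))).encode()

def server() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            conn.sendall(heavy_processing(conn.recv(1024)))

def thin_client(n: int) -> None:
    """Send a request, then simply present the result; no local heavy lifting."""
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(str(n).encode())
        print("Result presented on the device:", conn.recv(1024).decode())

if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)  # give the demo server a moment to start listening
    thin_client(1_000_000)
```

Whether the server side happens to be backed by one enormous processor or thousands of aggregated ones is invisible to the client, which is exactly the point being made above.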

My book also introduced the concept of Comprehensive Remote Personal Computing (CRPC), which…

…is a Web-oriented approach to personal computing in which local personal computing functionality, previously provided by technologies such as operating systems and software applications, is either migrated in toto onto the Web or replaced with equivalent Web-based services that are then remotely accessed over the Internet. CRPC will even replace existing Web-based services that are currently accessed using a local Web browser, with remotely-sourced Web browsing services. CRPC is comprehensive because it will replace not just some but all of such functionality, and also because it will be a remote personal computing solution that is suitable for use by everyone, everywhere, including the billions of people who have not yet been able to fully embrace the Internet, personal computing, and the Web. (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)

I then suggested:

In a world based on Comprehensive Remote Personal Computing, enabled by technologies such as the Stream Tone, Moore’s law could come to an end tomorrow and no one would really notice. Personal computing as we currently know it would continue pretty much unchanged. Everything that could be done before would still be possible after. In short, the end of Moore’s law would not cause mankind to enter some sort of technological Dark Age, because a new and better technology architecture would have already been adopted. One that was far less sensitive to the number of transistors on an integrated circuit. Of course, Moore’s law is unlikely to end for a good few more years, or maybe even decades, but when it does, it will, undoubtedly, be a significant moment. One that will mark the end of one era and the start of another. The point at which the computing industry is finally forced to look to totally different types of architecture, chemistry, and physics for the construction of its hardware. However, whilst Moore’s law may eventually come to an end, the expectation that the quantity of data processing per unit of cost will double roughly every two years, which is the real-world consequence of Moore’s law, will most probably not. Society has become accustomed to such advances, and will expect them to continue unabated in the future. So, Moore’s law will probably be replaced with another, a corollary that effectively describes the natural consequence of the end of Moore’s law and the shift to cloud computing: that the quantity of data processing per unit of cost provided by remotely-sourced digital services will double approximately every two years. No one will really care how this is achieved, and any technology that is capable of achieving it will be deemed acceptable. All that will be important is that the new law is able to hold true for a substantial period of time, and because this new law will not be based on being able to perpetually double the number of transistors on an integrated circuit for minimum cost approximately every two years, that should not be a significant problem; in fact, it should be pretty darn easy. (Source: The STREAM TONE: The Future of Personal Computing? © Copyright T. Gilling. All rights reserved.)

[1] As of 2017, dual cameras (i.e., two integrated circuits for image sensing) are now being included on many smart-phones.

© Copyright T. Gilling. All rights reserved.