Silicon-based virtual worlds: nanosciences revolutionise information technology

The history of technology shows: Nuclear power for energy supply is a hopelessly outdated technology / Episode 5/12

R Schleicher-Tappeser
8 min read · Oct 27, 2022
Intel 80486DX2 (1992). Matt Britt at the English-language Wikipedia, CC BY-SA 3.0, via Wikimedia Commons

The first four episodes of this twelve-part series dealt with the history of atomic energy and the gradual refinement, since the Second World War, of new methods for exploring the nano-worlds of atoms and molecules. This episode covers the incredibly rapid development of microelectronics made possible by new nano-materials, as well as the new world of digitalised information processing built on top of them. The resulting shift of focus from materials science to information processing led to an increasingly independent development logic: ever more elaborate software often makes us forget the hardware on which it runs, but it has also enabled progressively more sophisticated methods for dealing with energy and matter.

(This is a computer-aided translation by the author from the German original)

Electricity and the beginnings of communication technology

We have seen that the mysterious phenomena of electricity spurred the scientific upheavals that led to the discovery of unexpected worlds at the nanometre scale of atoms and molecules. From the beginning, the peculiarly non-material electricity was associated with information and communication. One of the first applications was telegraphy. The first electric telegraph was built in 1774 by Georges-Louis Le Sage, who used 24 different wires, one for each letter of the alphabet. In 1833, Weber and Gauss transmitted information over a longer distance with the first electromagnetic telegraph. In 1844, Samuel Morse put his recording telegraph into operation, which led to the establishment of a worldwide telegraph network. In the late 1890s, Marconi developed radio telegraphy. The development of the electron tube (a particular form of discharge tube that played a central role in the development of quantum physics), from 1904 first as a vacuum diode and then as an amplifier tube, contributed significantly to early information and communication technology. From the mid-1920s onwards, tube radio receivers were in widespread use and enabled an entirely new kind of mass communication.

Tubes, in which a small input voltage controls a larger output voltage, could be used not only as amplifiers but also as electrically controlled on/off switches for logic circuits. The first functional programmable computer, built in 1938–41 by Konrad Zuse for the German armaments industry, still consisted of electromechanical relays. From 1942 to 1946, the University of Pennsylvania built ENIAC, the first fully electronic computer, with 18,000 tubes, commissioned by the US Army. But the tubes with their heated cathodes were fragile and failure-prone. As late as 1943, Thomas J. Watson, the head of IBM, is said to have remarked that there would only be a need for about five computers worldwide.
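To make this concrete, here is a minimal sketch in Python (my own illustration, not from the original article) of the principle at work: any controllable on/off switch, whether a relay, a tube or a transistor, can be wired into logic gates and, from these, into simple arithmetic.

```python
# Illustrative sketch: logic built purely from on/off switches.
# Relays, tubes and transistors differ only in how the switching
# is realised physically; the logic on top is the same.

def NOT(a: bool) -> bool:
    return not a

def AND(a: bool, b: bool) -> bool:
    return a and b

def OR(a: bool, b: bool) -> bool:
    return a or b

def half_adder(a: bool, b: bool):
    """Add two bits: returns (sum, carry)."""
    s = AND(OR(a, b), NOT(AND(a, b)))   # XOR built from AND/OR/NOT
    carry = AND(a, b)
    return s, carry

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, half_adder(a, b))
```

Chaining such half adders gives full binary addition, which is why a machine made of nothing but switches can calculate at all.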

Breakthrough of electronic data processing thanks to transistor and semiconductor technology

The decisive breakthrough to powerful computers came in 1947 with the invention of the transistor at Bell Laboratories by Bardeen, Brattain and Shockley. Thanks to advances in materials science and its new research methods, the costly glass vacuum tubes with their heated cathodes could be replaced by power-saving solid-state components with semiconductor properties. The invention of the transistor is considered the foundation of microelectronics.

Semiconductors had been known for a long time. Ferdinand Braun developed the first semiconductor component, the crystal detector, as early as 1874. But it was not until 1939 that Schottky was able to provide a theoretical explanation, based on quantum theory, for the diode effect in semiconductors. The first transistors still had point-shaped metal contacts on germanium. Yet as early as 1948, Shockley developed the more reliable bipolar junction transistor, in which the decisive effects occur at the junctions between differently "doped" crystal regions (i.e. regions containing deliberate impurities). From 1951, these junctions were produced by changing the composition of the melt from which individual thin germanium crystals were drawn. From 1954, the invention of "diffusion transistors" made it possible to produce larger quantities of reliable quality by doping the surface of a larger uniform crystal in well-defined zones through the subsequent introduction (diffusion) of dopants. In 1954/55, the first transistor radios achieved a commercial breakthrough in the USA.

Dizzying miniaturisation on high-purity silicon crystals

Individual transistors, optimised for good amplification without distortion, continued to be used in analogue radios and amplifiers for many decades. But for the development of computers, which only needed transistors as switches, further miniaturisation was crucial. From 1958 onwards, Jack Kilby and Robert Noyce developed the first integrated circuits, in which several transistors and other electrical components were accommodated on a single semiconductor substrate. Initially, germanium was used for this purpose, but after the Siemens process made relatively inexpensive production of high-purity silicon possible in 1954, silicon gradually prevailed as the semiconductor material for microelectronics. This was due not only to the fact that silicon, which makes up 28% of the earth's crust, is abundant, but also to the fact that its oxide, silicon dioxide, is an insulator with excellent properties.

A new chapter in computer history began in 1970/71 with the development of the microprocessor. For the first time, various logical functions were combined on a single chip, a so-called CPU (Central Processing Unit), in such a way that the microprocessor could carry out a wide variety of tasks with a manageable, uniform instruction set. This architecture, with its separate and, in each case, modular approach to software and hardware, allowed computing power to be scaled up as never before as the hardware was progressively miniaturised.
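As an illustration of this principle, here is a sketch of a hypothetical toy machine (not the 4004's actual instruction set): a single processing unit with a small, fixed instruction set performs different tasks depending solely on the program held in memory.

```python
# Toy illustration of the microprocessor idea: one general-purpose
# machine, a small fixed instruction set, and the task defined by the
# program in memory rather than by dedicated wiring.
# (Hypothetical instruction names chosen for illustration only.)

def run(program, memory):
    acc = 0          # accumulator register
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JNZ":            # jump if accumulator is not zero
            if acc != 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return memory

# The same "hardware" runs different tasks, depending only on the program:
mem = {0: 7, 1: 35, 2: 0}
add_program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run(add_program, mem))   # {0: 7, 1: 35, 2: 42}
```

Swapping in a different program, rather than different circuitry, is exactly the modularity of software and hardware that made the architecture scale so well.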

Over the last fifty years, the performance of microprocessors has increased at a rate unprecedented in the history of technology. The first microprocessor, the "Intel 4004," available in 1971 (i.e. 19 years after the first electricity production with nuclear energy), had about 2,300 transistors. Today's commercially available processors from NVIDIA have more than ten million times as many (80 billion). In addition, the clock rate of the computing steps has increased roughly a thousandfold. This was only possible by shrinking the individual structures on the chip from 10 micrometres (the size of large bacteria) in 1971 to 5 nanometres (the size of macromolecules) in 2020, and by learning to arrange them partly in three dimensions.
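The orders of magnitude are easy to check with a quick back-of-the-envelope calculation (the area estimate at the end is my own rough extrapolation, assuming the linear shrink applies in both dimensions):

```python
# Back-of-the-envelope check of the scaling factors mentioned above.
transistors_1971 = 2_300            # Intel 4004
transistors_2022 = 80_000_000_000   # figure quoted for NVIDIA's current processors
feature_1971_nm = 10_000            # 10 micrometres, in nanometres
feature_2020_nm = 5                 # 5 nanometre structures

print(transistors_2022 / transistors_1971)        # ~3.5e7, i.e. tens of millions
print(feature_1971_nm / feature_2020_nm)          # 2000-fold linear shrink
print((feature_1971_nm / feature_2020_nm) ** 2)   # ~4 million-fold shrink in area
```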

It was not enough simply to shrink the tools, because at the scale of atoms and molecules entirely different dynamics come into play, not only in production but also in how the transistors themselves work. It starts with the fact that structures of 5 nanometres cannot be seen even with an X-ray microscope. To avoid producing too much scrap, the silicon crystal had to be grown to a purity and flawlessness that would have been unthinkable by earlier standards. Today, layers are deposited or produced by doping in hundreds of process steps, some of them only a few atoms thick. Behind this are advances in materials science and solid-state physics that the public has hardly noticed.

Increasingly hived-off information processing

Building on this incredible development of "hardware", it has been possible to develop increasingly independent "software", whose user interface is linked to the hardware via several layers of symbolic information processing that are not transparent to the layperson. What is discussed today as computer science, digitalisation and artificial intelligence has largely detached itself from its solid-state physical foundations.

Two generations ago, people still wanted to "understand", at least to some extent, the technology they were using. For those interested in technology, simple causal relationships could largely be established from everyday experience. Today, this is less and less attainable because of the complexity of information processing, the counter-intuitive nature of quantum phenomena in the nano-world and ever faster technical change. Many have given up trying to comprehend what new technology is based on, and as a result social discussions about the development and use of technology are becoming increasingly difficult.

Within an unbelievably short time by historical standards, new possibilities for gathering, linking and accessing information, new communication contexts and new virtual worlds of experience have been and are being created, whose individual and social consequences we cannot yet even foresee. Within a generation, a large part of humanity has been equipped with individual, globally networked devices that allow everyone to participate in a digitally mediated sphere. We find it difficult to judge how this sphere differs from the world we were used to before, and how it acts back on it. We are spending more and more time on digitally mediated content, both privately and professionally. Economic value creation increasingly seems to come from the processing of information. The largest and most influential business enterprises are almost exclusively those that deal with information and "software". On the stock market, "technology" is understood almost exclusively to mean information technology, because it can be scaled quickly, grow quickly and earn money quickly: once a solution has been found, it can often be applied widely.

Fifty years after the invention of the microprocessor (1971) and thirty years after the World Wide Web went public (1991), digitalisation has profoundly changed our societies. With artificial intelligence and quantum computing, this development is accelerating further, confronting us with ever greater challenges in coping with and steering these changes.

Since this series is mainly about energy technologies, I will not explore the further technological milestones and the manifold social consequences of the new information and communication technologies. Within a single generation, a fundamentally new technology for dealing with information has spread worldwide; in its still unforeseeable consequences it can, at best, be compared with the development and spread of writing, which took much longer. I do not think it is an exaggeration to speak here of a new stage of evolution, in which new, world-spanning spheres of interaction are emerging. More on this another time.

However, it seems essential to realise that the new information technologies are based on the discovery of the nano-scale worlds and their laws, and that this connection, running in the opposite direction, is now the starting point for profound technological innovations in dealing with energy and matter.

Digitalisation has only just started to massively change other technologies

The rapid development of information technology has often obscured the view of other technologies and of the newly developed possibilities for dealing with matter and energy. But this is changing, because the successes of the materials sciences since the Second World War are gradually being applied in other areas as well, and the potential of digitalisation is beginning to fundamentally change the way we deal with matter and energy.

Information about material and energetic processes is no longer directly tied to them but is increasingly decoupled and can be stored, transmitted and processed separately. Thus, as we will also see in the coming episodes of this series, much more complex analyses, controls and manufacturing processes become possible. With a multitude of new electronically controllable technologies and the Internet of Things, the newly emerging information sphere can act back on material and energetic processes.
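A minimal sketch may illustrate the pattern (the sensor and heater functions below are hypothetical stand-ins, not real device interfaces): information is extracted from a physical process, handled as data in its own right, and then fed back as a control decision.

```python
# Minimal illustration of decoupled information acting back on a
# physical process: measure, process the data separately, then control.
# Sensor and actuator are hypothetical stand-ins for real devices.

import random

def read_temperature_sensor() -> float:
    """Stand-in for a networked sensor reading (degrees Celsius)."""
    return 19.0 + random.uniform(-1.0, 3.0)

def set_heater(on: bool) -> None:
    """Stand-in for an actuator command sent back to the process."""
    print("heater", "ON" if on else "OFF")

TARGET = 21.0

def control_step():
    measurement = read_temperature_sensor()   # information is extracted...
    decision = measurement < TARGET           # ...processed as data...
    set_heater(decision)                      # ...and acts back on matter and energy.
    return measurement, decision

if __name__ == "__main__":
    for _ in range(3):
        control_step()
```

The same loop, scaled up across countless sensors and actuators, is what allows the information sphere to steer material and energetic processes.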

Our view is expanding. Alongside matter and energy, information is becoming a tangible third central concept in the natural sciences, and thus in technology as well. How much this is changing our view of living systems is the topic of the next episode in this series.

Originally published at https://sustainablestrategies.substack.com on September 30, 2022.
