Emancipation from mechanics — the long road to modern power electronics

History of Technology — The Discovery of Nanoworlds Enables Renewable Energy Supply for All / Episode 14

R Schleicher-Tappeser
24 min read · Aug 31, 2023

Translation from the German original by Dr Wolfgang Hager

The importance of power electronics for the transformation of the energy system is vastly underestimated. It is fundamentally revolutionising the way we use electricity. Everyone is familiar with images of solar cells, wind turbines, nuclear power plants, LEDs, heat pumps and microchips. But hardly anyone has any idea of what charge controllers, frequency converters, IGBTs or MOSFETs are. Yet today, power electronics can be found in every charger, in every LED lamp and, of course, in every electric car. After various precursors, modern power electronics first developed with quantum-theory-based semiconductor technology. Its significance lies firstly in the fact that it can convert electricity almost at will, so that its characteristics can be adapted to a particular end use with very low losses. Secondly, it makes it possible to control these conversions digitally and to switch currents with very little effort. Compared with the previous electromechanical technology, these two aspects open up completely new possibilities for the generation, transmission and utilisation of electricity that are far from being exhausted.

In this episode, we will first look at the history of power electronics up to the turn of the millennium, which evolved, with a time lag, from signal electronics and is closely linked to it. Without the key elements of this history, the current and future significance of power electronics can scarcely be understood.

The next instalment of this series will deal with the breathtaking development of power electronics since the turn of the millennium: Drastic miniaturisation, dramatic cost reduction and rapid diffusion are creating entirely new possibilities in all areas of electricity generation, use and distribution with much greater flexibility and efficiency. This is what makes electricity a flexible universal energy, opening up a new era of energy technology.

Power electronics — the electrical equivalent of the mechanical gearbox

In mechanics, levers and gears are used to convert mechanical energy: low force with long stroke into great force with short stroke, fast rotation into slow rotation with correspondingly greater force, slow oscillations into fast oscillations — all in both directions. Gears and winches existed in ancient times. Gears in mills served to convert the mechanical energy of the wind or a river into usable forms, to transmit power.

Mechanics were also used to transmit and process information. Mechanical levers, linkages and gears were used to control mills, sewing machines, sawmills and looms. Gears in mechanical clocks and mechanical calculating machines were not used to transmit power, but to transmit and process signals. But mechanical signal processing was always highly specialized, not very flexible and usually closely tied to mechanical transmission and application of power.

Walther WSR 160. Mechanical calculating machine, produced 1956–1968. Wikimedia

With the development of electrical engineering, signal and information processing has become increasingly independent of power transmission. In episode 5, we saw in broad outline how the classical physics of electrodynamics first allowed the development of electron valves and thus non-mechanical signal processing. Then, with the help of quantum theory, semiconductor technology emerged, which in turn gave rise to digital microelectronics. Finally, this led, at a completely different level of abstraction, to the development of computer science.

In this episode, we will revisit this history from a different perspective, as power electronics is the purely electrical equivalent of the mechanical gearbox for power transmission. Signal processing is not its function, but it is based to a considerable extent on technologies that were first developed for signal processing.

It has been a long road to its current state of development, which opens up entirely new possibilities. Power electronics has evolved over time from other areas of electrical engineering and electronics, and for a long time it was not even perceived as a separate technology — this is probably one reason why its importance is so little known.

In order to understand the current revolution in our handling of energy, it seems important to first trace the various stages of this development and their significance for the explosively growing possibilities of energy conversion and control. Here again, as frequently pointed out in previous episodes, we see the fundamental role of the revolution in physics a hundred years earlier, and the growing importance of information relative to energy and matter in coping with the growth crisis of our civilisation, which threatens its very existence.

The beginnings of electronics — emancipation from mechanics

Systematic research into electricity began around 1800 with direct current from Alessandro Volta’s zinc-copper battery. In the 1820s and 1830s, Oersted, Henry and Faraday discovered relationships between electric current and magnetic fields. Maxwell then succeeded in 1864 in grasping the relationships between electric and magnetic fields mathematically (electrodynamics). This led to the generation of electrical energy from mechanical energy (steam, wind and water power), as described in the previous episodes, and, conversely, to the widespread generation of mechanical energy from electrical power. At the same time, completely new technologies were developed on the basis of Maxwell’s findings, in which mechanical motion played no role. For these novel developments, electromagnetic oscillations were of central importance.

The transformer, invented in 1881, was already based on the interaction of alternating electric and magnetic fields: without any detour via mechanical energy, large amounts of electrical energy could be transferred between two voltage levels — a first electromagnetic “transmission” in which the voltage and current of an alternating current are changed, but the frequency remains the same. However, for the conversion of alternating current into direct current — which remained important for many applications even after the utility grids were converted from direct current to alternating current (see episode 12) — it was still necessary to resort to mechanical means: a sliding contact with several poles (commutator), rotating synchronously with the alternating voltage, ensured that the poles were regularly reversed. Such a device was an integral part of Edison’s DC generators and of other electromechanical converters between different types of current, and it was used until the 1970s to supply the charging current for car batteries. It could rectify only one, relatively low AC frequency at a time.
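The transformer’s “transmission” can be made concrete with a short sketch (Python; the numerical values are illustrative assumptions of mine, not from the article): in an ideal transformer, voltage scales with the turns ratio, current scales inversely, and the transferred power is conserved:

```python
# Ideal transformer: voltage scales with the turns ratio N2/N1,
# current scales inversely, and (neglecting losses) power is conserved.
# The frequency of the alternating current is unchanged.

def transform(v_primary, i_primary, n_primary, n_secondary):
    ratio = n_secondary / n_primary
    return v_primary * ratio, i_primary / ratio

# Illustrative values: stepping 10 kV down to 230 V.
v2, i2 = transform(v_primary=10_000.0, i_primary=10.0,
                   n_primary=1000, n_secondary=23)
print(round(v2), round(i2, 1))                 # 230 434.8
assert abs(v2 * i2 - 10_000.0 * 10.0) < 1e-6   # same power on both sides
```

The stepped-down voltage comes with a proportionally larger current, which is exactly why long-distance transmission prefers high voltage and low current.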

For the further development of current conversion without mechanics, communications technology, i.e. a signal processing technology, initially played the decisive role. As early as the 18th century, there were efforts to transmit signals with electric pulses through wires. From 1830 onwards, telegraphy developed with broadly usable techniques, which were promoted in particular by the railway companies. From 1850, submarine cables were also laid for this purpose. By 1870, all continents were connected. Rapid communication over great distances changed both trade and politics. An impressive example is the way the British Empire operated in India after news between Delhi and London no longer took several weeks but only a few minutes: led on a short leash from headquarters, colonial policy became much more ruthless.

Heinrich Hertz used such a spark inductor from Rühmkorff to prove the existence of electromagnetic radio waves. WikiMedia

In 1888, Heinrich Hertz — after whom the unit for frequency is named — succeeded in generating the electromagnetic radiation predicted by Maxwell in the form of (ultra-short) radio waves. To do this, he excited an electric oscillating circuit with a high-voltage pulse, which consisted of a coil (wound wire) and a capacitor (two metal surfaces that were close together but insulated from each other). By adjusting their sizes, the (high) frequency of the oscillation between the electrical field (capacitor) and the magnetic field (coil) could be set. Due to the open construction of the oscillating circuit as an antenna, the energy was gradually radiated as a radio wave. This proved that electromagnetic energy can exist and be transmitted as radiation independently of electrical conductors.
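The resonant frequency of such an oscillating circuit follows from the sizes of coil and capacitor via the Thomson formula f = 1/(2π√(LC)). A minimal sketch (Python; the component values are illustrative assumptions of mine, not Hertz’s actual apparatus):

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Thomson formula: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 1 microhenry coil with a 1 picofarad capacitor
# oscillates in the ultra-short-wave range of hundreds of MHz.
f = resonant_frequency(1e-6, 1e-12)
print(f"{f / 1e6:.1f} MHz")  # 159.2 MHz
```

Shrinking either the coil or the capacitor raises the frequency, which is how the “(ultra-short)” waves mentioned above were reached with an open, antenna-like construction.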

In practical terms, this laid the foundations for radio technology, which was expected to be used not only for the transmission of telegraph signals, but also of sounds. Ever since the discovery of induction by Faraday, there had been various considerations and attempts to transform the mechanical vibrations of sound into electrical oscillations and then to transmit them over longer distances. Finally, in 1876, Alexander Graham Bell presented a usable telephone at the World’s Fair in Philadelphia. He also improved the phonograph invented by Edison, which allowed the permanent recording and playback of sounds. This created a huge interest in the exploration and processing of electrical oscillations of a wide variety of frequencies.

For a long time, only pulses could be transmitted with transmitters based on the Hertzian oscillating circuit, which, however, allowed the increasingly long-range transmission of telegraphic signals. It was not until 1904 that Valdemar Poulsen succeeded in transmitting sounds with the arc transmitter, which could emit continuous radio waves. For this purpose, the intensity (amplitude) of the fast oscillating radio waves (carrier frequency) was “modulated” with the much slower oscillations of the sounds (signal frequency), which was accomplished with a simple coil construction (transducer).

To receive the Morse pulses and later the sounds, it was first necessary to capture the radio waves with an antenna and an oscillating circuit tuned to the radio frequency of the transmitter, and secondly to make the intensity fluctuations audible. This was done with a “crystal detector” that acted as a rectifier of the high-frequency alternating voltage, so that only the modulated audio frequency remained and became audible in headphones. These crystal detectors were the first semiconductor diodes. In 1874, Ferdinand Braun had discovered that point contacts of metal tips on lead sulphide crystals only let current through in one direction — but the physics behind this effect was not yet understood. Some two decades later, Jagadish Chandra Bose was the first to use this effect for radio reception with a crystal detector.

Radio receiver with crystal detector, 1924. WikiMedia

Shortly before (1883), Edison received a patent for a DC voltage regulator that used an effect he had rediscovered: within a light bulb, a current can flow between the glowing wire and an additional contact (glow emission), and depends on the voltage of the heating current. Edison’s device can be called the first electronic circuit. But it took a good twenty years until John Ambrose Fleming used the fact that Edison’s glow emission device only allowed one direction of current to build a detector for receiving radio signals. In 1904, he patented a further developed “vacuum diode” in which current — now physically explainable — could flow through the vacuum in the form of negatively charged electrons from a glowing cathode to a cold anode.

This fundamental technical invention had become possible against the background of a lengthy scientific development: as early as 1834, Faraday had suspected, based on electrochemical experiments, that there must be an elementary electric charge. Since the 1860s, experiments had been conducted with gas discharge tubes to find it. In 1891, the term “electron” was coined, and it was not until the late 1890s that it was proven that the cathode rays produced in vacuum tubes could be regarded as particles, as electrons. Thus, with the tube diodes, the term “electronics” came into existence.

With crystal detectors and tube diodes, the first electronic rectifiers were available. While the semiconductor effect of the crystal detectors was still not understood, the functioning of the tube diode could be well described with the physics of the time.

As early as 1906, Robert von Lieben (Austria) and Lee de Forest (USA) filed parallel and almost simultaneous patents for electron tubes in which the electron current between cathode and anode could be controlled. For this purpose, a low voltage, negative with respect to the cathode, was applied to an intermediate metal grid, which more or less (depending on the voltage) slowed down the electrons. This made it possible to precisely control electrical currents without mechanical devices using weak electrical signals with almost no time lag. This was the birth of the electronic amplifier and — as a special case — also of the electronic switch.

This opened up a huge field of technical applications in the first years of the previous century. The amplification and filtering of any desired electrical signals of the widest range of frequencies — with increasingly sophisticated electronic circuits made of increasingly standardised capacitors, coils, resistors and tubes — made it possible to process sound signals (audio technology), carrier frequencies (broadcasting and communications technology), measurement signals of all kinds (metrology), and soon also image signals (television technology).

Radio, Zenith Model 705, 6 Vacuum tubes, ca. 1934. WikiMedia

During the First World War, hundreds of thousands of soldiers were trained as radio operators and came into contact with the new technology. This subsequently contributed significantly to the widespread interest in the new possibilities of electronics. From the 1920s onwards, public radio programmes emerged all over the world, mostly with the state trying to control them. Politicians of all colours discovered radio as a new medium with which they could reach the population directly. The number of private radio sets increased rapidly. In 1932, 4 million listeners in Germany had a licence from the Reichspost. In 1935, it was estimated that 60 per cent of all US households had a radio and that these accounted for 40 per cent of the world’s existing radio sets. Mass production of radios led to standardisation and falling costs of electronic components, which also benefited industrial applications (just as in recent years the massive demand for sensors built into smartphones has contributed significantly to the development and cost reduction of industrial robots). With increasingly reliable tubes, from the 1940s onwards, engineers also ventured to install them in large numbers as digital switches in the first computers, although this proved to be very costly (see episode 5). For the time being, analogue electronics reigned supreme.

These successes in signal processing thus gave rise to an independent branch of technology, electronics, with which large amounts of non-purpose-specific information could be transmitted by wire or wirelessly, independent of mechanical energy.

With the electron tube, moreover, an amplifier was now available with which electrical currents could be precisely controlled by any weak electrical signals at great speed. Such control was not possible with the forms of energy available until then (mechanical, chemical, and thermal), neither in this universality of use nor at this speed. It was not possible to control large machines with the then available capacities, but they were sufficient for signal transmission over any distance, for the analogue analysis of signals with filters, for the realisation of digital logic circuits, as well as for signal delivery via loudspeakers and screens.

First electronics for high power applications — rectification and lossless control

To operate all these electronic devices, reliable DC sources of various voltages were required. For this purpose, the alternating current from the socket was brought to the required voltage level with transformers and then rectified with powerful tube diodes. These delivered a pulsed “half-wave” that still had to be “smoothed” with suitable circuits of capacitors, coils and resistors so that the “hum” of the mains frequency could not be heard in the sound signals. Transformers, rectifiers and smoothing circuits together were called power packs. These can be seen as the origin of power electronics.

A rectifier with a simple diode cuts off one direction of current. Alternating current becomes pulsed direct current. This can then be smoothed with filters.
A rectifier with four diodes in bridge circuit reverses the current direction of a half-wave. The effort for smoothing is reduced. WikiMedia
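The difference between the two circuits in the captions can be checked numerically: an ideal single diode passes only the positive half-wave, while a four-diode bridge flips the negative half-wave, doubling the average DC output and easing the smoothing. A rough sketch (Python, ideal diodes assumed):

```python
import math

# One cycle of a mains sine wave, sampled at 1000 points.
N = 1000
wave = [math.sin(2 * math.pi * k / N) for k in range(N)]

half_wave = [max(s, 0.0) for s in wave]   # single diode: negative half cut off
full_wave = [abs(s) for s in wave]        # bridge: negative half-wave reversed

mean_half = sum(half_wave) / N            # approx. 1/pi  ~ 0.318
mean_full = sum(full_wave) / N            # approx. 2/pi  ~ 0.637
print(round(mean_half, 3), round(mean_full, 3))  # 0.318 0.637
```

Both outputs still pulse at the mains rhythm, which is why the capacitor-coil smoothing described above remains necessary; the bridge simply leaves less gap to fill.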

However, vacuum diodes with incandescent cathodes were too weak for higher outputs. For high voltages, the mercury-vapour rectifier (or mercury-arc rectifier) developed by Peter Cooper-Hewitt in 1902 was soon adopted. Here, an arc in mercury vapour served as an electron-emitting cathode (positively charged mercury ions in the tube compensate for the space charge of the current-carrying electrons and thus allow high power). In industry, metre-high installations were built for the operation of electrochemical plants, trams and other large-scale DC consumers. Until 1975, mercury rectifiers were used for high-voltage DC transmission (see below).

Mercury-vapour rectifier, AEG 1928, Zugspitzbahn. WikiMedia

For a long time, engineers looked for ways to control the power of the mercury rectifier in analogy to the electron tube of signal electronics. But this was not so easy, because once “ignited”, the current in the arc could no longer be controlled. It was not until the 1930s that the Westinghouse Corporation succeeded in influencing the output of mercury rectifiers using a different approach. They made use of the fact that the current flow had to be “re-ignited” with an auxiliary voltage (as in fluorescent lamps) at every cycle of the oscillating AC voltage to be rectified. By slightly delaying the ignition, it was possible to reduce the average current flow. This principle of “phase control” is still used today, e.g. for dimming lamps, but is now achieved with different components. This further developed mercury rectifier was called the Ignitron; it could handle powers of up to several megawatts. From 1954, electric locomotives were built with Ignitrons. This made low-loss stepless control possible, allowing jolt-free starting.

“Phase angle control”: regulation of power with ignitrons or thyristors by delayed firing of the rectifier at the beginning of a half-wave. WikiMedia
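For a purely resistive load, the effect of delayed firing can be quantified: integrating the squared sine over the remainder of each half-wave gives the delivered power as a fraction of full power. A small sketch (Python; the formula is my own standard textbook derivation, not taken from the article):

```python
import math

def power_fraction(alpha_rad):
    """Power delivered to a resistive load under full-wave phase-angle
    control, as a fraction of full power, when each half-wave is fired
    alpha_rad radians late:  (pi - alpha + sin(2*alpha)/2) / pi."""
    return (math.pi - alpha_rad + math.sin(2 * alpha_rad) / 2) / math.pi

for degrees in (0, 45, 90, 135, 180):
    print(degrees, round(power_fraction(math.radians(degrees)), 3))
# firing at 0 deg gives full power, at 90 deg half power, at 180 deg none
```

Delaying the firing point thus throttles the average power continuously without dissipating the difference in a resistor, which is why the method gave locomotives their low-loss, stepless control.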

Quantum leap — new physics opens up new age of energy technology with semiconductors

The fact that, in addition to electrical conductors and non-conductors, there are materials that fall in between and do not behave in an immediately comprehensible way was already known in the 18th century. Alessandro Volta called them semiconductors in 1783. At that time, wood and soap were also considered semiconductors, and the electrical properties of many materials were investigated. Ohm’s law (I = U/R), published by Georg Simon Ohm in 1827 after measurements on metals, was easy to understand. For semiconductors, however, no such law could be found, except that some configurations only allowed current to flow in one direction.

Until the description of crystal diodes by Ferdinand Braun in 1874 (see above), no progress was made. But even the widespread use of crystal detectors — the first semiconductor components — in the early days of radio technology did not change the fact that their physical mode of operation was not properly understood. While they were then largely replaced by tube technology because of their unreliability, another semiconductor component came into use in large numbers from 1930 onwards: the selenium rectifier. For power supplies in all kinds of electronic devices, it proved to be more durable and cheaper than tube diodes, but was unsuitable for other purposes due to its unfavourable properties.

Therefore, from the late 1920s onwards, efforts to better understand the semiconductor effect increased and different materials were tested, especially silicon and germanium crystals. From the early 1930s, military interest in fast-switching semiconductor diodes also grew: For the development of radar systems, high frequencies in the microwave range were needed, which tube diodes could not provide.

Selenium rectifier, ca. 1960. WikiMedia

The British mathematician Alan H. Wilson was the first to introduce the quantum mechanical approach to semiconductor physics in 1931 during his stay at Werner Heisenberg’s institute in Leipzig. While the electron conduction in vacuum and mercury diodes could still be comprehensibly explained with classical ideas, the nanoscale processes at the contact points between metals and semiconductor crystals eluded theories trained in the macro world. On the basis of quantum theory, which had been developed for atoms, Wilson showed that even electrons in a crystal lattice cannot assume arbitrary energy values, but are limited to certain state ranges, the energy bands, which can be decisively influenced in semiconductors by impurities. In 1938, Walter Schottky achieved a largely satisfactory explanation of the semiconductor metal junction (Schottky diode) by assuming that current conductivity in semiconductors is not due to excess electrons but to missing electrons (“defect electrons”, holes).

Russell S. Ohl, who had worked at Bell Laboratories since 1927, was convinced that the unreliability of the crystal detectors was due to impurities in the crystals and then researched the production of high-purity silicon crystals for radar development. In the process, he accidentally made a discovery in 1940 that was fundamental for the further development of electronics. In a silicon sample, he noticed a break between two obviously different crystal structures. When a voltage was applied, this transition showed the behaviour of a diode. In addition, when illuminated with visible light, a voltage was formed at the boundary layer. He had thus discovered the diode effect of a p-n junction (later so named) on the one hand and the photovoltaic effect of silicon diodes on the other (see also episode 10). The different concentrations of impurities in different zones of a silicon crystal had led to the presence of excess electrons on one side of the boundary (n-conductor) and to missing electrons, or holes, on the other (p-conductor).

While research into silicon purification methods and silicon-metal diodes for radar systems continued at full speed, Bell Laboratories kept Ohl’s discoveries secret until patents were granted in 1946. It was understood that the militarily driven advances achieved in large collaborative research networks would lead to significant industrial developments and intense competition after the war. The brilliant William Shockley, who had worked at Bell before the war, was won back after it ended and immediately recognised the importance of Ohl’s discoveries.

In 1947, two of his co-workers, Brattain and Bardeen, invented the point-contact transistor based on the point-contact diode, in which, similar to an amplifier tube, the current in the diode could be regulated with a low voltage at a third contact. This made it possible to build amplifiers with semiconductors as well. A few months later, in 1948, Shockley succeeded in designing a much more reliable transistor based on planar p-n junctions (the junction transistor). Shockley, Brattain and Bardeen received the Nobel Prize for the invention of the transistor in 1956. Ohl was largely forgotten. It was not until 1950 that Shockley succeeded in providing a complete quantum mechanical explanation of the processes in the semiconductor contacts and thus also of photovoltaics. The first solar cell with a silicon p-n junction was presented by Bell Laboratories in 1954 (see episode 10).

Bardeen, Shockley and Brattain, the inventors of the transistor. 1948. WikiMedia

With these developments, which had only become possible thanks to quantum theory, new components were available that fulfilled the functions of electron tubes. However, they were much more robust, durable and smaller, did not require a vacuum vessel, did not need a power supply for an incandescent cathode and emitted much less heat. These properties established their triumphal march in consumer electronics and control technology, which were still based on tried-and-tested analogue circuits.

What was revolutionary about this technology, however, was that the actual functional mechanism takes place in boundary layers on the scale of atoms, subject to laws that can only be described in quantum-theoretical terms. This opened up previously unimaginable dimensions of miniaturisation on the one hand and flexibilisation on the other. Its impact on power electronics will become fully clear in the next episode. These new possibilities led to two strands of development that were inconceivable with vacuum tubes:

  • the development of an “infosphere” based on digital electronics
  • the development of an “electrosphere” of almost lossless convertible forms of electricity based on power electronics

Whereas in the old mechanical world, process-related mechanical information processing was still largely integrated into the machines (safety valves, loom control, speed governors…), or mediated by human intervention, digitalised control and power flows are taking increasingly independent paths, but remain linkable through the common technical foundation:

Information processing and its coupling to our sensory world

In episode 5, I already briefly described the first development path, namely the development of digital technologies and, based on these, of computer science: ever smaller transistors were built to be used as logic switches that no longer amplify analogue signals but perform abstract mathematical operations. Even the first transistors were about a thousand times smaller in their functional core than corresponding electron tubes. In the latest computer chips, they are about 10 million times smaller. The prerequisite was the production of ever purer crystals. At first, this was best achieved with germanium, but soon silicon dominated with its superior semiconductor properties. With miniaturisation, the energy consumption per circuit fell and the switching speed increased. As already shown, computer hardware technology developed from this, and with it the more abstract world of programming. Thanks to the invention of command-controlled microprocessors, computer science (more precisely but more seldom called “informatics”) is concerned with the processing of information (software), largely independent of its material basis (hardware). The development in the 75 years since the first transistor was unimaginable by the previous standards of technological progress: the chip in a new Apple notebook today has 20 billion transistors, each about 50 atoms wide, and processes many parallel tasks at a clock frequency of 3.5 GHz (3.5 billion cycles per second).

As late as the 1970s, digital information was still being entered into computers using punched cards. Today, computers can be directly coupled to technical processes. WikiMedia

What is interesting in the context of energy technology is the possibility of computers not only processing information, but also coupling this universal information processing directly to energy- or material-related technical processes without human mediation. This was made possible by the fact that transistors (unlike pencils, books, slide rules) operate with electrical energy and can process both analogue and digital signals:

When microprocessors made it possible to calculate faster and more cheaply, it became appealing to convert analogue measured values into numbers and to evaluate them digitally. Since the 1980s, for example, sounds have been processed mainly digitally: The output voltage of a microphone, which rapidly changes with the sound vibrations, is measured in the smallest steps (e.g. the sampling frequency for CDs is 44.1 kHz) with an analogue-digital converter, and thus converted into a sequence of numerical values that are expressed “digitally”, i.e. in the binary number system (which only knows zeros and ones). In this form, they can be efficiently stored and processed by information technology. In a similar way, all conceivable measured variables and signals are now converted into codes that can be processed by information technology. The variety of sensors is constantly increasing.
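The sampling step described above can be sketched in a few lines (Python; the 440 Hz test tone and 16-bit depth are illustrative assumptions of mine, while the 44.1 kHz rate is the CD standard mentioned in the text):

```python
import math

SAMPLE_RATE = 44_100      # CD sampling frequency in Hz
BITS = 16                 # assumed resolution: 16-bit signed integers

def digitise(freq_hz, n_samples):
    """Sample a pure tone and quantise it to the signed 16-bit range."""
    max_code = 2 ** (BITS - 1) - 1   # 32767
    return [round(max_code * math.sin(2 * math.pi * freq_hz * k / SAMPLE_RATE))
            for k in range(n_samples)]

codes = digitise(440.0, 5)           # first samples of a 440 Hz tone
print(codes)                         # small rising integers starting at 0
```

Each number in `codes` is one step of the staircase that replaces the continuous microphone voltage; played back through a digital-analogue converter, the staircase is smoothed back into a sine wave.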

Conversely, the results of digital processing can be output as increasingly diverse analogue quantities in our real, material world: As sounds, images, mechanical movement, heat, radiation… For this purpose, the digital signals are first translated into electric current, which can then effect quite real changes in our sensually perceptible world with energy converters (loudspeakers, screens, spotlights, motors, heaters). This is where the second strand of development that has emerged from semiconductor technology enters the picture: semiconductor-based power electronics.

Power electronics with semiconductors

The development of power electronics with semiconductor boundary layers progressed much more slowly than that of microelectronics. Firstly, there was a technical reason for this: undesired impurities in the semiconductor crystals have much more serious consequences than in tiny logic transistors because of the size of the components and the high voltages. It took several years before crystals of sufficient purity could be produced. Probably more important was the second, economic reason: power electronics received significantly less investment for many decades. In microelectronics, companies from communications engineering (Bell, Telefunken, RCA, CSF, Sony…) and new companies (Fairchild, National Semiconductor, Intel…) created entirely new markets with fascinating prospects. In 1962, 250 million transistors were produced in the USA alone, overtaking electron tubes for the first time. High-voltage applications, by contrast, were in the hands of a few large established companies (General Electric, Brown-Boveri, ASEA, Siemens, Toshiba…), which produced only small quantities. They expected only modest advantages from the new technologies, but it was they who set up new working groups to further develop power electronics with semiconductors. In fact, some of them later expanded into signal electronics as well.

In 1952, General Electric presented the first commercial germanium diodes, with which rectifiers could be built for outputs of 25 kW. Soon, much higher outputs were achievable with silicon. From 1957, locomotives with silicon rectifiers were built in several countries, from 1962 in series production, and from 1965 silicon diodes were superior to the previous technologies in all areas. But diodes could not control currents.

In 1957, ten years after the invention of the transistor, the first thyristor was built, which — just like the mercury ignitron — could control the power via the delayed start of the current flow in each half-wave of the rectified alternating current (“phase angle control” , see illustration above). One year later, it was already possible to switch 5 kW. In 1967, the first locomotive with thyristors was delivered. In 1967–70, the first high-voltage direct current transmission (HVDC, more on this later) in Sweden was equipped with them.

Mercury-vapour rectifier and the first thyristors in the converter of the first HVDC line, Gotland/Sweden 1970. ABB

Thus, in the sixties and seventies, the semiconductor technology of silicon diodes and thyristors largely replaced the mercury technology of mercury-vapour diodes, thyratrons and ignitrons. Semiconductor technology made it possible to build cheaper, smaller and more robust components for the same tasks, which in turn allowed increasingly sophisticated circuits to be developed. But the areas in which power electronics were used remained the same: rectification and power control by phase-angle control.

The great leap to the universal power converter

It took many years until transistors, which experienced an unprecedented boom as key components in digital logic circuits and in analogue signal amplifiers, were also available for power electronics. When different types of power transistors (first MOSFETs, then IGBTs, as well as other variants) became available for ever higher power levels from the mid-1970s onwards, this expanded the role of power electronics in a fundamental way.

High-power transistors made it possible to switch currents of several kilowatts extremely quickly and to control this switching process with digital signals. For very high voltages, they were also combined with thyristor variants. More and more sophisticated semiconductor components were developed for different applications, whose properties were constantly improved and optimised with varying priorities: high reverse voltage / high forward current / low forward resistance / high switching frequency / small size / low costs.

Inverter output: sine wave composed of rectangles. ResearchGate

This finally made it possible not only to convert alternating current (AC) into direct current (DC), but also to generate almost sinusoidal alternating current from direct current, and to do so at frequencies that can be freely selected within ever wider limits: To do this, a sequence of high-frequency square-wave pulses of varying width (and, in multilevel converters, of different heights) is combined so that its average follows a sine wave — very similar to the digital sampling and reproduction of sound frequencies. In short, the direct current is finely chopped up and reassembled differently. This made the following new functions available for ever larger power and frequency ranges purely on a semiconductor basis:

  • Digitally controllable switches
  • Rectifiers with digitally controllable power (AC/DC conversion)
  • Inverters with digitally controllable frequency and power (DC/AC conversion)
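The sine synthesis described above can be illustrated with a toy simulation (a sketch of naive sine-triangle PWM; the function name and all parameters are illustrative, not from the source). The output switches between +Vdc and −Vdc, yet its local average already traces the sine that a low-pass filter would extract:

```python
import math

def pwm_sine(f_out=50.0, f_carrier=5000.0, v_dc=1.0, samples_per_pulse=100):
    """Naive sine-triangle PWM over one output period: emit +v_dc while
    the sine reference lies above a triangular carrier, -v_dc otherwise.
    Returns the pulse train and the sine reference, sampled together."""
    n_pulses = int(f_carrier / f_out)        # carrier periods per output period
    n = n_pulses * samples_per_pulse
    out, ref = [], []
    for k in range(n):
        t = k / n / f_out                    # time within one output period
        reference = math.sin(2 * math.pi * f_out * t)
        phase = (f_carrier * t) % 1.0        # position within carrier period
        carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
        out.append(v_dc if reference > carrier else -v_dc)
        ref.append(reference)
    return out, ref

# Averaging the raw pulse train over each carrier period recovers the
# sine -- the "chopping up and reassembling" described in the text:
out, ref = pwm_sine()
n = 100  # samples per carrier period
averages = [sum(out[i:i + n]) / n for i in range(0, len(out), n)]
```

The higher the carrier frequency relative to the output frequency, the finer the chopping and the cleaner the reconstructed sine.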

These functions can be combined to form various power converters that allow all essential parameters of electrical current (AC or DC, voltage, current strength, frequency) to be adapted to the respective requirements in a digitally controlled manner with very low losses. Frequency conversion, for example, can be achieved by first converting the alternating current into direct current and then back into alternating current of a different frequency. This allows electrical systems to be designed with unprecedented efficiency and flexibility.

The little-known miracles of high and controllable frequencies

Probably the most important innovation of power transistors is that the frequency of alternating current becomes a key property: Firstly, the frequency can be raised and lowered over a very wide range, even at high power levels, and secondly, these frequencies can be controlled digitally. This is particularly important wherever electric current is converted into magnetic fields. I will highlight two applications where this has particularly far-reaching consequences: transformers and electric motors.

Higher frequencies allow miniaturisation (©own/TI)

Transformers up to the height of a house have played a central role in electrical supply networks since their invention and are an important cost factor (see episode 12). They can step up AC voltage while simultaneously stepping down current — and vice versa — in order to reduce line losses (which depend on the current) in high-voltage lines (up to 1 million volts). The transformer stores the energy of one electrical oscillation period in a magnetic field before releasing it again. This is why higher power (1 watt = 1 volt × 1 ampere) requires larger transformers.

This is where power electronics comes in: The higher the frequency (oscillations per second) of the alternating current, the lower the energy content of a single oscillation at a given power, and a smaller transformer is sufficient. If the alternating current oscillates at 50 kilohertz (kHz) instead of 50 hertz, the energy stored magnetically in the transformer is only one-thousandth as large. This means that not only the transformer but also the capacitors and coils used in the circuits can be made drastically smaller.
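The factor of one thousand follows directly from treating the energy handled per cycle as power divided by frequency, as the text describes. A back-of-envelope sketch (function name and the 1 kW example are mine):

```python
def energy_per_cycle(power_w, frequency_hz):
    """Energy in joules that a transformer must buffer magnetically per
    AC cycle: at power P, each cycle carries P / f of energy."""
    return power_w / frequency_hz

p = 1000.0                              # a 1 kW load, as an example
e_mains = energy_per_cycle(p, 50)       # 50 Hz mains transformer: 20 J
e_smps = energy_per_cycle(p, 50_000)    # 50 kHz switch-mode stage: 0.02 J
print(e_mains / e_smps)                 # thousandfold less energy to store
```

Less energy to store per cycle means a smaller magnetic core, which is why a 50 kHz laptop charger fits in a pocket while a 50 Hz transformer of the same rating would be a heavy brick.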

With higher frequencies, much smaller motors can be built. Today, frequencies of up to 40 kHz are used for motors ©Gaoyouang Ouyang

In electric motors, the electrically generated magnetic field must change depending on the position of the moving magnets. In the classic DC motor, this is achieved by mechanically reversing the polarity of the current via a sliding contact. In the much more efficient three-phase motor (see episode 12), the change is made by the oscillations of the three-phase alternating current. When supplied directly from the mains, the motor is therefore fixed to a speed determined by the mains frequency. With power electronics, however, the frequency and thus the speed and power of the motor can now be controlled continuously and precisely — right down to exact positioning. (With generators, the same mechanism works in reverse: the current oscillation does not rush ahead of the movement to accelerate it, but lags behind to slow it down.) In addition, the required motor size decreases with higher frequencies (analogous to the transformer). In practice, as we will see, both effects together enable considerable flexibility and efficiency gains in the conversion of electrical into mechanical energy — and vice versa.
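The link between supply frequency and motor speed can be made concrete with the standard formula for the speed of the rotating field (the function name is mine; the two-pole-pair example is illustrative):

```python
def synchronous_speed_rpm(frequency_hz, pole_pairs):
    """Speed of the rotating field of a three-phase motor in rpm:
    the field completes frequency / pole_pairs revolutions per second."""
    return 60.0 * frequency_hz / pole_pairs

# Tied directly to the 50 Hz mains, a motor with two pole pairs
# has exactly one field speed:
print(synchronous_speed_rpm(50, 2))      # 1500.0 rpm
# A frequency converter makes that speed freely adjustable instead:
for f in (10, 25, 50, 75):
    print(f, "Hz ->", synchronous_speed_rpm(f, 2), "rpm")
```

This is the mechanism behind variable-frequency drives: instead of gearboxes or throttling, the drive simply synthesises AC of whatever frequency the desired speed requires.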

Compared to previous technologies, the use of semiconductor power electronics thus not only enables almost limitless flexibility in converting among different types of electricity, but also opens up entirely new possibilities for drastic miniaturisation and significant improvements in efficiency.

With the maturing of power transistors in the nineties, a technology became available that allows extremely flexible use of electrical energy with ever smaller devices that are completely independent of mechanical components and can be controlled digitally. Without it, the transition away from fossil fuels would hardly be conceivable. In the next episode, we will see how these innovations have since had a real-world impact. Especially since the turn of the millennium, the development, miniaturisation and spread of power electronics has undergone a breathtaking acceleration. The generation, transport, storage and use of electricity are thus becoming much more flexible and efficient. As a result, electricity is becoming even more versatile and is developing into a truly universal energy.

Originally published on sustainablestrategies.substack.com on August 31, 2023


R Schleicher-Tappeser

SUSTAINABLE STRATEGIES. Writes about technology and society. Based in Berlin. Five decades of experience in energy, transport, climate and innovation policies.