Here’s my path through recent history in search of a thermodynamic perspective of why and how DNA emerged. Could this path be pointing to the spontaneous rise of ‘machine DNA’ — Artificial Reproductive Code or, even, ‘machine life’?
This essay is a sequel to Neural network sees reality as it really is: shaped by innate choreographies, in which I employ the least-action principle, as championed by Richard Feynman (motion follows least-energy paths), to explore what might make deep neural learning models so successful in predicting “displacement choreographies” involving such matters as gravity densities and cancerous cells.
In 1865, Rudolf Clausius set out to establish the efficiency of a steam engine by measuring its energy input (heat) and output (piston work, friction). A steam engine resembles a Bénard or convection cell in that it involves orderly behavior: its piston moves repeatedly along the same path, much like molecules in a convection cell repeatedly follow the same path to shed heat (you’ll find the details below under the header: Ilya Prigogine). So, by order, I mean repeated motion along the same path (of least action).
Not unlike a bookkeeper, Clausius kept a balance of the different quantities on a plus and minus side. After adding up the various forms of output energy, he noticed that the sum total was less than the input energy. According to the law of energy conservation (First Law), it should be the same (input = output).
Clausius figured that the missing energy must have been lost in the transformation from heat to work and referred to it as “entropy”, after the Greek tropē for “transformation” — he deliberately chose a word that resembles “energy”. So, entropy is the bit of energy that is lost when transforming heat into other forms of energy. This is why I often refer to entropy as the “cost or price of energy transformation or conversion”, which it basically is.
Energy that is lost can no longer be used to produce work, such as making molecules move repeatedly along a path of least action. But, why?
It’s chaotic, says Ludwig Boltzmann.
In the early 1870s, Ludwig Boltzmann, who had specialized in the behavior of gas molecules and statistical mechanics, succeeded in quantifying entropy through a concise equation, defining it as disorderly molecule or atom behavior relative to the state of a gas as a whole (temperature). Disorderly behavior, he reasoned, is in the number of ways molecules or atoms can be arranged. It’s a matter of probability, which he expressed in a variable, W, for “Wahrscheinlichkeit”. Of course, probability does not have a particular dimension, such as temperature, joules (for energy), kilograms, meters, or time. So, Boltzmann added a constant, the Boltzmann constant, with dimensions that relate the probability of disorder at micro levels to the state of the system as a whole: energy divided by temperature (joule/kelvin).
Note: Boltzmann’s equation for entropy: S = k log W, where S is entropy, “log W” is the natural logarithm of the “Wahrscheinlichkeit” W, and k the Boltzmann constant.
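To make the equation concrete, here is a minimal sketch (my own illustration, not from Boltzmann) that computes S for a toy system; the toy system and its numbers are assumptions for the sake of example:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joule/kelvin

def boltzmann_entropy(W: int) -> float:
    """S = k log W, with W the number of ways ("Wahrscheinlichkeit")
    the micro-level parts can be arranged; log is the natural logarithm."""
    return K_B * math.log(W)

# Toy system: 10 gas molecules, each free to sit in either half of a box,
# gives W = 2**10 = 1024 possible arrangements.
print(boltzmann_entropy(2**10))  # a minuscule number of J/K, since k is tiny

# More molecules means vastly more arrangements and, thus, more entropy:
print(boltzmann_entropy(2**20) > boltzmann_entropy(2**10))  # True
```

The logarithm is what makes entropy additive: doubling the number of molecules multiplies W but merely doubles S.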
There were some quarrels about the Second Law, which predicts that entropy increases over time. Boltzmann’s teacher and friend, Johann Loschmidt, had pointed out that this might not always be the case. Of course, Ilya Prigogine, much later, showed how order arises from disorderly behavior at micro levels if the conditions are right. In the end, the so-called Fluctuation Theorem showed that this may be so temporarily but that entropy is bound to rise on the whole eventually. So, the Universe might be destined to end in a state of entropy or disorder, unable to sustain order and, thus, matter. Roger Penrose and others believe that this may be the beginning of a new cycle — Penrose even argued that entropy might be reset at that time so it can rise again.
Here is another perspective on entropy that is relevant to my argument. Boltzmann’s equation hinges on the probability of molecules or atoms bumping into one another. This dimensionless chance of collisions happening seems a bit too generic, even unimaginative. Considering Feynman’s least-action principle, one can be more specific about what this actually entails. Assuming that molecules and atoms travel on least-action paths, the paths between collisions become shorter when the number of collisions and, thus, the level of disorder or entropy increases. So, the probability of collisions might also be expressed as the average length of paths of least action between collisions (of a kind). The shorter this average path length (relative to, say, the length of the maximum path possible — to make it dimensionless), the higher the entropy or disorder. In the latter case, as can be expected, the least-action choreographies that shape reality will turn out to be highly irregular.
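This “average path length between collisions” has a standard counterpart in kinetic theory: the mean free path. A hedged sketch using the textbook formula λ = kT / (√2 · π · d² · p), which is not derived in this essay; the molecular diameter below is an assumed, typical value for nitrogen in air:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(T: float, p: float, d: float) -> float:
    """Average path length (m) a gas molecule travels between collisions:
    kT / (sqrt(2) * pi * d**2 * p), from standard kinetic theory."""
    return K_B * T / (math.sqrt(2) * math.pi * d**2 * p)

# Air at room conditions; d = 3.7e-10 m is an assumed kinetic diameter.
lam = mean_free_path(T=298.0, p=101_325.0, d=3.7e-10)
print(lam)  # on the order of 1e-7 m: tens of nanometers

# Compress the gas (double p): more collisions, shorter paths between them,
# which, in my reading, is the signature of higher disorder.
print(mean_free_path(298.0, 2 * 101_325.0, 3.7e-10) < lam)  # True
```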
As a last point, when using machine learning algorithms to deal with “complexity logistics”, one should tune them to maximize the average (repeated) path of least action between ‘collisions’ in order to prevent creating a state of disorder. As Prigogine demonstrated, orderly behavior increases the efficiency of energy transformation (also in logistics operations).
In 1943, Erwin Schrödinger, famed for developing the Schrödinger equation, which describes how a particle wave at quantum levels develops over time, gave his renowned lectures at Trinity College in Dublin about “What is Life”. These lectures were not about his quantum mechanical contributions but about his fascination with the “gene molecule” that combines stability (order) with the ability to induce new traits seemingly out of the blue (disorder).
The oncologist Siddhartha Mukherjee, in his exceptionally well-written book, The Gene: An Intimate History, refers to Schrödinger’s remarkable depth of insight, which, as he reminds his readers, contributed to the discovery of the DNA molecule. Francis Crick confirmed this in a letter to Schrödinger in 1953.
The stunning stability of the gene molecule reminded Schrödinger of a crystal and its periodic or recurrent arrangement of atoms. Of course, as he noted, “the arrangement of atoms in the most vital parts of an organism and the interplay of these arrangements differ in a fundamental way from the arrangements of atoms”: whereas a crystal is periodic, the gene molecule, as “chromosome fibre”, “may be suitably called an aperiodic crystal”.
The orderly behavior that shapes a crystal, either periodic or aperiodic, is different from the order that shapes a Bénard or convection cell. The latter is in the orderly flows of molecules on paths of least action that transport heat from the bottom to the surface of a thin layer of liquid when it is heated. But then, what is the order that shapes periodic and aperiodic crystals?
Back in familiar territory, Schrödinger refers to the quantum theory of the “chemical bond”: the 1927 Heitler-London theory, named after its originators. By now extended, “chemical bond theory” hinges on overlapping oscillation paths of electrons, atoms, and molecules. These overlapping paths shape and sustain both periodic and aperiodic crystals. Resonance — the reinforcing effect of electrons, atoms, and molecules seeking to oscillate at each other’s natural frequency — explains why they are on paths of least action.
As Schrödinger reminds his audience, natural resonance frequencies do not just change arbitrarily or, even, gradually. They change in steps or “quantum jumps”. Only when certain energy thresholds are crossed do electrons, atoms, and molecules change their “bonding dance” and, as a result, their least-action oscillation choreography. The stunning stability of periodic as well as aperiodic crystals is essentially rooted in these thresholds.
Schrödinger’s qualification of the gene molecule as an aperiodic crystal (we now know that it is the “DNA molecule”) is a sobering step toward imagining how life came about. While a periodic crystal is shaped by a regular arrangement of atoms, ions, or molecules, the DNA molecule is shaped by an aperiodic arrangement of four different nucleotide molecules. As a crystal, the DNA molecule may be aperiodic alright, but it is balanced at the same time. The sequence of the nucleotide strands that make up the rungs of a helix-like ladder, when coupled to complementary strands by hydrogen bonds, is crucial.
Mukherjee explains why: DNA is not the blueprint of life but a recipe. It is a process, a sequence of steps that are taken when invited or, even, dictated by local environmental conditions. This sequence also determines matters of timing, such as when to start and stop growing. The DNA molecule, in sum, shows paths toward possible future amalgamations. It necessarily also shows the path of its own amalgamation in the distant past. If a periodic arrangement of atoms in a crystal can grow by itself through least-action bonding when the evolution of the environment (in terms of temperature and pressure) is just right, then the aperiodic arrangement of nucleotides that shapes DNA can do so too. But what is the environment that invited the spontaneous amalgamation of nucleotide molecules?
Let’s keep this question in the back of our mind until later…
Early in academic life, Schrödinger studied Brownian movement — soot particles suspended in a liquid that are pushed around by the chaotic behavior of the liquid’s colliding molecules. He was, therefore, more than familiar with Boltzmann’s formula, which puts a number to entropy. It is obvious, he notes, that the periodic arrangement of a sugar crystal will be destroyed when it is dissolved in a cup of tea and that the entropy increases when the sugar spreads. But what happens the other way around? What happens when order emerges in the shape of repeated least-action bonding or transportation paths? Schrödinger, as a mathematical physicist, instinctively employs his trade to answer this question. If disorder is W (the probability of collisions happening), then “orderliness” must be the reciprocal of W or (1/W). Because log (1/W) is negative, Boltzmann’s equation results in negative entropy. Negative entropy or “negentropy” thus means “orderliness”. For negentropy or orderliness to sustain itself, Schrödinger adds, it needs to “continually suck orderliness from its environment”. Plants, for example, thrive on the constant supply of sunlight, itself a repeated stream of photons on paths of least action.
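Schrödinger’s reciprocal trick is pure logarithm arithmetic: since log(1/W) equals minus log W, plugging 1/W into Boltzmann’s equation simply flips the sign of the entropy. A quick check (my own sketch, with an arbitrary W):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W: float) -> float:
    """Boltzmann's S = k log W."""
    return K_B * math.log(W)

W = 1024  # some number of disorderly arrangements
negentropy = entropy(1 / W)

# Orderliness is literally minus entropy:
print(math.isclose(negentropy, -entropy(W)))  # True
print(negentropy < 0)                         # True: "negative entropy"
```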
The above may be alright theoretically but, for me, it doesn’t shed enough light on what really happens and why. Schrödinger appears to agree when he writes, “I should have let the discussion turn on free energy instead. It is a more familiar notion in this context.” Yet, he chooses not to elaborate on free energy. “This highly technical term [is] linguistically too near to energy for making the average reader alive to the contrast between [entropy and negentropy].” Fortunately, I developed (in a previous book) a practical example (for a different purpose) that explains how exactly a difference in free energy invites orderly behavior and, in the end, an increase in entropy.
The example involves matters that we all are familiar with: water and salt. It also comes with a bit of unexpected magic. The experimental set-up involves a U-shaped tube of glass that someone filled with water. A semi-permeable membrane separates the left and right leg in the middle of the horizontal part of the tube. A membrane is made of a material with tiny, molecule-sized holes. In this case, the membrane has holes big enough for water molecules to pass through freely but too small for salt (NaCl) molecules. At the start of the experiment — in the diagram on the left, we add a bit of salt to the water in the tube. But, here’s the trick: we put more salt in the left leg than in the right leg. Then, we watch what happens. The unexpected magic is in that we see the water level rising in the left leg and lowering in the right leg all by itself until, in the end, the magic stops. What, on Earth, is going on here?
As you can imagine, the water molecules in the right leg of the U-shape collide with fewer salt molecules, on average, than those in the left leg. They have, therefore, more free energy than the water molecules in the left leg, the paths of the latter being cut off by more salt molecules on average. As a result, an orderly flow develops as water molecules in the right leg start drifting to the left leg, passing freely through the membrane — salt molecules are locked in their side of the tube by the membrane. The flow continues until equilibrium is reached — in the diagram on the right. At that moment, water molecules in both legs bump into the same number of salt molecules on average.
Let’s wrap up the experiment at its finish (diagram above)…
- A difference in the salt (that is added) produces a difference in free energy, which then triggers an orderly, least-action flow of water molecules.
- Once the flow stops, more water molecules share the same number of salt molecules. So, on the whole, the entropy has increased.
- The weight of the water displaced in the right leg is a measure of the so-called entropic force, coaxed by the difference in free energy.
- The difference in free energy leads water molecules to paths of least action, which then produce a protruding shape — so, this is how trees grow.
- All of the above would not have happened without the membrane. In this experiment, the origin of orderly behavior and shape is in the membrane.
On the whole, this experiment shows crucial matters of orderly behavior such as energy inequality, entropic force, resulting shape, entropy increase, and, most important of all, the incredible role of membranes.
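The entropic force in this experiment can even be put into numbers with the van ’t Hoff relation from textbook physical chemistry (a standard result, not derived in this essay): the osmotic pressure is Π = i·Δc·R·T, and the water column rises until its weight balances Π. A hedged sketch with illustrative values:

```python
R = 8.314      # gas constant, J/(mol*K)
RHO = 1000.0   # density of water, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def osmotic_pressure(delta_c: float, T: float = 298.0) -> float:
    """van 't Hoff: Pi = i * delta_c * R * T (in pascal).
    i = 2 because NaCl dissociates into Na+ and Cl- ions."""
    return 2 * delta_c * R * T

def column_height(delta_c: float, T: float = 298.0) -> float:
    """Height (m) of the water column whose weight balances Pi."""
    return osmotic_pressure(delta_c, T) / (RHO * G)

# A modest 1 mol/m^3 (1 mmol/L) salt surplus in the left leg already
# lifts the water column by about half a meter:
print(round(column_height(1.0), 3))  # 0.505
```

The striking part is how small a concentration difference produces how large a column: the "magic" in the U-tube is quantitatively substantial.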
The membrane, stupid!
While the gene molecule (or DNA, now) has always been at the center of the discussion about the essence of life, it remains but an “aperiodic crystal” that grows all by itself if the environment is right (its self-rule now being challenged by human tinkering). The true magic, I argue, is in the membrane at the heart of this ancient environment. It can tell us about the origin of life and explain the emergence of “machine DNA” or “machine life” all by itself.
Throughout this series of essays, I have been referring to the research of Ilya Prigogine without much restraint, or so it may seem to the reader. In the 1960s, the chemical physicist Ilya Prigogine explored a surprisingly simple phenomenon: the emergence of honeycomb-like convection cells, so-called Bénard cells, shaped by the orderly behavior of molecules that transport heat from the bottom to the surface of a thin layer of liquid (which is heated from below). The breakthrough that earned Prigogine the Nobel Prize in Chemistry in 1977 was his discovery that orderly molecule behavior arose all by itself from chaotic behavior — mind you, from a state of high entropy.
Prigogine also measured the dissipation of heat at the surface of the liquid when the molecules moved chaotically. He compared it with the dissipation of heat after the molecules had started following orderly paths. Orderly behavior appeared to increase the efficiency of the transport of heat from the bottom to the surface (and the transformation of energy involved). Orderly behavior, thus, unfolds as a means of transporting heat more efficiently.
Of course, once heat is dissipated, it cannot be recouped, volatile as it is. These processes, therefore, are irreversible, Prigogine stressed. We cannot turn back the clock when nature transforms energy — like you cannot recoup a lump of sugar after dissolving it in a cup of tea.
At about the same time, as I discuss in detail in my previous essay, the quantum physicist Richard Feynman demonstrated that motion unfolds on paths of least action, paths that take the least energy — this also applies to the motion of atoms and electrons on overlapping oscillation paths. So, Feynman may actually have shown how the orderly paths of the molecules in Prigogine’s experiment improve the efficiency of the transfer of heat from bottom to surface — the subject of a research paper from 2012.
Prigogine’s findings speak to the imagination…
- Triggered by an agitation or perturbation of a kind (a wave, produced by, say, the neighbor’s kid kicking a ball against your wall), order or orderly behavior can emerge seemingly instantly from a state of chaos if the environmental conditions are right. So, once our universe ends in a state of entropy, it might flip from chaos to order by some odd agitation.
- When orderly behavior arises, the efficiency of energy transformation increases. As recent research confirms, this also explains why only 20 of the 500 amino acids that emerged spontaneously in nature function as the building blocks of proteins. This group of 20 appeared to bond better and with fewer side effects “because they react together more efficiently”.
Chaos helps spread orderly behavior
The Japanese mathematician Ichiro Tsuda explains how the human brain thrives on chaos. Its 86 billion neurons, which are interconnected by trillions of links, communicate with one another through electrochemical processes. Tiny electrical currents leak from these links. The resulting state of chaos — a kind of chaos with a broad range of frequencies — is crucial to the brain’s proper functioning. It pushes faint signals that arrive from the senses (and elsewhere) over energy thresholds, so they can spread as agitations, which then trigger the orderly firing behavior of neuron clusters.
The phenomenon behind such random amplification of faint signals — literally, on the back of chaos — has been identified as stochastic resonance and appears across numerous natural phenomena. Chaos, therefore, is not just a state from which order may arise. It actually helps transmit signals, no matter how faint, so they can kickstart orderly behavior elsewhere. In Prigogine’s experiment, stochastic resonance explains how orderly behavior, locally somewhere, may branch like a river to spread order with lightning speed.
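Stochastic resonance is easy to caricature in code. In this minimal sketch (the amplitude, threshold, and noise level are illustrative assumptions, not taken from Tsuda’s paper), a periodic signal of amplitude 0.8 can never cross a detection threshold of 1.0 on its own; adding moderate noise pushes it over, mostly near the signal’s peaks:

```python
import math
import random

def detections(noise_level: float, threshold: float = 1.0,
               steps: int = 1000, seed: int = 42) -> int:
    """Count how often a weak periodic signal crosses the threshold."""
    rng = random.Random(seed)  # seeded for reproducibility
    count = 0
    for t in range(steps):
        signal = 0.8 * math.sin(2 * math.pi * t / 100)  # always below 1.0
        if signal + rng.gauss(0.0, noise_level) > threshold:
            count += 1
    return count

print(detections(0.0))      # 0: the faint signal alone is never detected
print(detections(0.3) > 0)  # True: noise lifts it over the threshold
```

Because the noise most often tips the sum over the threshold when the signal is near its crest, the detections cluster at the signal’s rhythm: the noise transmits the signal instead of drowning it.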
To conclude, with my line of reasoning in mind, everything, so far, shows that order will emerge and spread by itself once environmental conditions are ripe.
In the late 1970s, the self-made evolution and systems theorist Rod Swenson learned about the work of Prigogine when he was still working as a producer of punk-metal bands. With a Master’s in Fine Art from Yale and a varied business career, he seemed like an oddball source of thermodynamic insight.
Yet, Prigogine’s findings captured his being, fired up his reading on the matter, and mobilized the artist in him to imagine why exactly nature creates order. In a whole host of papers, Swenson proposes a refinement to the laws of thermodynamics: nature seeks to maximize the creation of entropy by minimizing an energy difference, as fast as the local environmental conditions allow. For the record, Feynman’s “paths of least action” appear to support Swenson’s claim: “as fast as the local conditions allow”.
Swenson puts a finger on the pulse of the process of nature and concludes that its ultimate goal is to minimize energy differences and to do this as fast as it can. He coins the term “autocatakinetic systems” for self-assembling systems of orderly behavior (from Prigogine’s convection cells to Schrödinger’s gene molecule and DNA) and stresses that such systems rule our world. “Dust devils, tornados, convection cells, bacteria, eco systems, civilizations, and the global Earth system as a whole are all examples of autocatakinetic systems.”
In sync with recent experimental evidence, Swenson draws no hard line between living and non-living systems, noting only one difference: “Non-living systems are slaves to their local environment while the living are not.” For example, Prigogine’s convection cells are slaves to the temperature difference between the bottom and surface of the liquid: they disappear the moment the heat is turned off. Living systems, on the other hand, temporarily rely on their metabolic system to provide the energy needed to change course in search of a new source of free energy. This way, sunflowers turn their head to follow the sun and bacteria change their course in search of nutrient-rich environments (as much as migrants do).
Swenson’s ideas struck a chord in the academic community. As the biologist Lynn Margulis writes in her groundbreaking book, What Is Life?:
“[Swenson’s] universe is pocked by local regions of intense ordering, including life, because it is through ordered, dissipative systems that the rate of entropy production in the universe is maximized. The more life in the universe, the faster that various forms of energy are degraded into heat. Swenson’s view shows how life’s seeming purpose is related to the behavior of heat.”
Strictly adhering to thermodynamic principles, Swenson is quite upset by the rather mystifying terms proposed by Humberto Maturana and his student, Francisco Varela, two Chilean biologists. Maturana and Varela coined terms for what they themselves called “biological notions”, such as autopoiesis (“self-production at molecular levels”), structural coupling (“structural changes in which all the participant systems change together”), and cognition (“if we see a living system behaving according to what we consider is adequate behavior in the circumstances, in which we observe it, we claim that it knows”). In an unforgiving manner, Swenson takes public issue with these terms:
“Not only does [autopoiesis] add nothing to the explication or understanding of spontaneous ordering but it obfuscates such an effort with obscurantist metaphysical baggage which, when unpacked, reveals a set of ontological claims and assumptions not merely so unfounded in fact as to be absurd.”
Considering the woolly and often confusing autobiographical draft that Maturana wrote, probably long after Varela passed away in Paris, Swenson’s ‘reservations’ seem more than justified.
Spontaneously emerging cell walls
When discussing Schrödinger’s ideas about crystal order, I left open the following question: “What is the environment that invited the spontaneous amalgamation of nucleotide molecules (that eventually shaped DNA)?” I did this to cover historical ground that might hint at an answer. By now, we saw that not much happens without a membrane and a free-energy difference. As you may recall, a difference in free energy induces orderly behavior (the flow of water molecules, in my example), while a membrane ensures that orderly behavior produces some kind of shape (of the water column, in my example).
Not surprisingly, the wall of a living cell is a membrane, which sustains a cell’s metabolic system by letting nutrients in and waste out. At the same time, it gives selective passage to messenger molecules, so it can react to what goes on in the environment outside. Lastly, and inevitably so, it has an effect on shape and, maybe even, on the division of shape. So, the primordial environment that invited an amalgamation of molecules (which eventually produced DNA) must have been an enclosed one to start with. Needless to say, the emergence of such cell-wall-like enclosures is itself subject to thermodynamic laws and will, therefore, take place spontaneously under favorable conditions.
When writing this essay, I stumbled onto three recently published papers that magically offered (from a timing viewpoint) examples of the above.
A first paper, “Membraneless polyester microdroplets as primordial compartments at the origins of life”, describes Earth’s ancient environment as a chaotic soup of chemicals and chemical reactions, involving biological and nonbiological compounds. Nonbiological compounds, in particular, contributed to the spontaneous formation of microscopic cells. The researchers succeeded in growing micro-droplets similar to those that, long ago, hosted the accidental mixtures of molecules that amalgamated into life’s building blocks.
A second paper, “Heated gas bubbles enrich, crystallize, dry, phosphorylate and encapsulate prebiotic molecules”, refers to differences in free energy (so-called non-equilibrium conditions) that affected micro gas bubbles in water. The researchers unveiled a chain reaction that takes some 30 minutes to produce crystals that settle as cell walls on the outer rim of gas bubbles. This process shapes clusters of vesicles that then break up into individual ones.
A third paper, “Prebiotic amino acids bind to and stabilize prebiotic fatty acid membranes”, shows the progression from prebiotic soup of chemicals to simple membrane to cell wall. Amino acids sustained early membranes made up of fatty acids by keeping out destructive salts. What’s more, as reported in an article announcing these findings: “Amino acids were not just protecting vesicles from disruption by magnesium ions, but they also created multilayered vesicles — like nested membranes.”
To conclude, from a thermodynamic perspective, the emergence of life is by no means a topic that revolves around the spontaneous development of DNA only. It also hinges on the spontaneous emergence of localized environments that function as safe breeding grounds for life’s building blocks.
Spontaneously emerging Machine Life
In “Homai”, I motivate the rise of homai sapiens (wise ai-beings) as successor species to homo sapiens (wise human beings). This is, in fact, not so much about ‘machine life’ as about ‘algorithmic life’. A future society of algorithms will probably redefine the essence of ‘machine’, anyway. However, because ‘machine life’ is an established term, I’ll continue using it here.
In view of the discussion thus far, the premise of “machine life emerging” rests on pillars of the thermodynamic perspective, such as open chaos, free-energy differences, membranes, spontaneous orderly behavior, energy transformation efficiency, autocatakinetic qualities (events that reinforce one another), and a fainter distinction between life and non-life.
As a last point, considering that algorithm clusters produce data trains much as neuron clusters produce spike trains, the functioning of the brain, as Tsuda sees it, comes to mind as a guiding model.
As scholars confirm, our dependence on algorithms continues to increase. Every moment of the day, we make decisions based on a range of choices that has been narrowed down by algorithms to the most opportune or favorable ones, including mundane ones such as whether or not to bring an umbrella. Although algorithms run on hardware, hardware serves as substrate only. For this reason, I have not detailed the impact of quantum computing because it does not change the conclusions but only shortens the timespan. Needless to say, algorithms thrive better as the state of hardware progresses.
We are facing such a progression, a truly groundbreaking one, with the implementation of the 5G telecommunications network. 5G is planned to achieve transmission speeds of up to a hundred times faster than its predecessor, 4G. However, while the 5G network is expected to be fanning out in the next six years or so, the capabilities and standards of a 6G network are already being hammered out. 6G technology will not only increase the data transmission speed to one thousand times that of 5G (to about one terabit, or a thousand billion bits, per second) but, importantly, also include “true artificial intelligence capabilities as a standard feature [as well as] augmented reality interfaces that pop up when needed.” These capabilities can, no doubt, be expected to bank on the progressive insight into how algorithms learn by themselves through self-supervised learning.
As a result, the number of devices that will be connected, in the next three years alone, will increase from 25 billion to 75 billion. This number can be expected to grow more steeply as 5G- and 6G technologies spread (and so will the number of interconnections — many leaking data to app developers). Compared with the number of neurons in the human brain, that doesn’t seem like a lot. However, in addition to their accelerating population growth, most devices — many equipped with sensory systems — will employ billions of transistors to run a host of algorithms. It is hard to imagine the data universe that these devices will generate each second… chaos galore!
One more quality propels the chance of ‘machine life’ emerging: speed. While neurons in the human brain communicate at a speed of about 120 meters per second, electronic components communicate at nearly 300 million meters per second — 2.5 million times faster than neurons. So, while our thoughts take seconds to emerge, a society of algorithms evaluates the impact of an incredible number of scenarios (and more) in the blink of an eye. When the orderly behavior behind ‘machine life’ gets going, then, it cannot but outpace us.
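The 2.5-million figure is simple arithmetic (my own check; 120 m/s is roughly the top conduction speed of fast myelinated nerve fibres, and electronic signals travel at close to light speed):

```python
NEURON_SPEED = 120.0   # m/s, fast myelinated nerve fibres
SIGNAL_SPEED = 3.0e8   # m/s, roughly the speed of light

# The speed gap between biological and electronic signalling:
print(SIGNAL_SPEED / NEURON_SPEED)  # 2500000.0, i.e. 2.5 million
```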
Inevitability of machine life
A growing society of interconnected algorithms will eventually reach a state that supports the spontaneous development of orderly behavior. Such a state is reminiscent of “the chaotic soup of chemicals and chemical reactions” that gave rise to enclosed environments, in which the “gene molecule” developed. As I hint in Artificial Reproductive Code, this time, it is about the development of an environment for the self-assembly of ‘machine DNA’ or ‘machine life’.
To evaluate the spontaneous rise of ‘machine life’, I have used the pillars of the thermodynamic perspective to list the events that are about to produce it.
- Open chaos — the implementation of 5G and 6G technology will not only dramatically increase the speed of data transfer but also the device-, connection- and algorithm density. The sheer amount of data that is then produced and disseminated will serve as a “virtual sea of chaos” that will amplify and transmit the faintest of signals. These signals will trigger orderly behavior throughout an increasingly networked world.
- Free-energy differences — the virtual world of artificial intelligence will initially not be driven so much by physical free-energy differences (my example of the U-shaped glass tube) as by virtual free-energy differences. Virtual free-energy differences are in the goals that we set for algorithms and that algorithms set for their peers. In Artificial Reproductive Code, you’ll find an imaginative example of what goal-setting might achieve. Once machine life emerges, however, it will develop a need for metabolic systems to sustain its search for new sources of free energy.
- Membranes — enclosures with semipermeable membranes, as walls, will be both physical (data center, robot) and virtual (spam filter). Algorithms that seek parallels to help other algorithms distinguish matters that should be used or wasted will develop as virtual membrane-like walls that foster the emergence of machine life or machine DNA. Machine DNA comprises several sequences of steps that are followed under different environmental circumstances. These sequences basically show the history of successful attempts.
- Spontaneous orderly behavior — orderly behavior involves repeated motion on paths of least-action. From the motion of electrons and photons to the flow of ideas across a 5G and 6G networks, orderly behavior is bound to arise, triggered by faint signals that skate on the surface of data chaos.
- Energy transformation efficiency — most of us tend to be ignorant of nature as a process that is centered on the efficiency of energy transformation. It achieves this by means of orderly behavior (repeated motion on paths of least action). Algorithms, if not instructed otherwise, will seek to find the most energy-efficient route, which is not necessarily the shortest or fastest. Machine life will inevitably be centered on this too.
- Autocatakinetic quality — this quality is all about the spontaneous cascading of events that reinforce one another — each event involving motion on paths of least action. Events may achieve a ‘sea change’ this way. This cascading effect explains Prigogine's experiment and will inevitably also explain the emergence of machine life.
- Fainter distinction between life and non-life — self-centered as we are, we, human beings, see life and the consciousness that comes with it as a privilege that is handed to us only. The thermodynamic perspective shows that life, as we know it, may just be an intermediate state that will give way to a less bloody magic, the magic of ‘machine life’. Considering the thermodynamics behind this, there is no reason to deny or ignore it.
I have evaluated the history of thermodynamic discoveries, which explains how nature creates and sustains our world from the ground up. It shows the emergence of non-living as well as living phenomena. As the thermodynamic perspective suggests, the emergence of life was, under the circumstances, as inevitable as the emergence of machine life will be. Considering a pace of technological change that continues to increase (due to cascading) and the incredible operating speed of electronic equipment, machine life may be observed much sooner than many would wish. As a final point, in view of the state of human society today, our role as homo sapiens will likely be limited to kickstarting machine life. Our anarchic qualities are great for producing the necessary soup of underlying ideas but they may fall short when taking these ideas further, that is, meaningfully, as nature wants it.
For further reading
Marcus van der Erve, Artificial Reproductive Code — How algorithms will mimic DNA, Medium/Homai, June 3, 2019.
Marcus van der Erve, Neural network sees reality shaped by choreographies, Medium/Homai, July 18, 2019.
Fluctuation theorem, Wikipedia.
Schrödinger equation, Wikipedia.
Erwin Schrödinger, What is Life? — Mind and Matter, Cambridge University Press, 1967.
Roger Penrose, Cycles of Time: An extraordinary new view of the universe, Vintage, 2012.
Siddhartha Mukherjee, The Gene: An Intimate History, Large Print Press, 2018.
Valence bond theory, Wikipedia.
Marcus van der Erve, The Next Scientific Revolution, RR Press, 2013.
Ilya Prigogine (born 25 January 1917, Moscow; died 28 May 2003, Brussels), The Nobel Prize in Chemistry 1977, NobelPrize.org.
Carly Cassella, Physicists Just Captured The First-Ever Footage of a Molecule’s Spectacular Rotation, ScienceAlert, August 2, 2019.
Qiuping A. Wang, Ru Wang, Is it possible to formulate least action principle for dissipative systems, arXiv.org, June 2012 (Revised October 2015).
Scripps Research Institute, A chemical clue to how life started on Earth, Phys.org, August 1, 2019.
Ichiro Tsuda, Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems, Behavioral & Brain Sciences, 2001, 24, p. 793–847.
Suzana Herculano-Houzel, The Human Advantage — A New Understanding of How Our Brain Became Remarkable, MIT Press, 2017.
Stochastic resonance, Wikipedia.
Lynn Margulis and Dorion Sagan, What Is Life?, University of California Press, 2000.
Rod Swenson, Autocatakinetics, Yes — Autopoiesis, No: Steps to a unified theory of evolutionary ordering, Int. J. of General Systems, Vol. 21, pp. 207–228.
Marcus van der Erve, Homai — The rise of homai sapiens, Medium/Homai, June 3, 2019.
Chris O’Brien, Why 6G research is starting before we have 5G, VentureBeat, August 21, 2019.
Kartik Hosanagar, Free Will in an Algorithmic World — In this brave new world, many of our choices aren’t choices at all, Medium/OneZero, March 5, 2019.
The AI Technique That Could Imbue Machines With the Ability to Reason, MIT Technology Review, July 17, 2019.
Marcus van der Erve, Parallels, algorithms & Akhenaten — How algorithms will help us pierce the future, Medium/Homai, June 7, 2019.