6 Things that Blew My F*****g Mind

Things that just make my head explode

Gödel’s Incompleteness Theorem

Basically, shoving things up their own ass so many times breaks math.

What is it?

Gödel’s theorem is a mathematical theorem showing that self-referential statements inevitably occur in any sufficiently powerful system of axioms, forcing any such system to be either consistent but incomplete or complete but inconsistent.

This is, of course, an informal introduction.

I like this explanation:

The proof of Gödel’s Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows:
Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.
Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.
Smiling a little, Gödel writes out the following sentence: “The machine constructed on the basis of the program P(UTM) will never say that this sentence is true.” Call this sentence G for Gödel. Note that G is equivalent to: “UTM will never say G is true.”
Now Gödel laughs his high laugh and asks UTM whether G is true or not.
If UTM says G is true, then “UTM will never say G is true” is false. If “UTM will never say G is true” is false, then the sentence G is false (since G = “UTM will never say G is true”). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.
We have established that UTM will never say G is true. So “UTM will never say G is true” is in fact a true statement. So G is true (since G = “UTM will never say G is true”).
“I know a truth that UTM can never utter,” Gödel says. “I know that G is true. UTM is not truly universal.”
Think about it; it grows on you …
With his great mathematical and logical genius, Gödel was able to find a way (for any given P(UTM)) actually to write down a complicated polynomial equation that has a solution if and only if G is true. So G is not at all some vague or non-mathematical sentence. G is a specific mathematical problem that we know the answer to, even though UTM does not! So UTM does not, and cannot, embody a best and final theory of mathematics …
Although this theorem can be stated and proved in a rigorously mathematical way, what it seems to say is that rational thought can never penetrate to the final ultimate truth …

This is a deep theorem, and it takes a while to understand. Or, at least it did for me. But once you grasp the underlying logic, it opens so many doors to epistemological thought.
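The self-referential trap in the quoted argument can be sketched in a few lines of Python. This is a toy, of course (no real truth machine exists): encode G as a statement asserting that the machine will never affirm it, and watch the machine fail to answer.

```python
def utm(statement):
    """Toy 'Universal Truth Machine': affirms a statement
    (a zero-argument function) only if the statement is true."""
    return statement()

def G():
    # G says: "The UTM will never say that G is true."
    return not utm(G)

try:
    utm(G)
    verdict = "UTM answered G"      # never reached
except RecursionError:
    verdict = "UTM can never settle G"

print(verdict)
```

Asking the machine about G sends it chasing its own tail forever; we, standing outside, can see that G is true precisely because the machine never affirms it.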

The lesser version: the Singularity

Generally, when the topics of computers, AI, and the limits of knowledge come up, the Singularity inevitably gets mentioned.

For the uninitiated, the Singularity is a popular future scenario posited by futurist Ray Kurzweil, who claims that technology progresses exponentially. Kurzweil specifically points to the number of transistors on an integrated circuit doubling roughly every two years, i.e. Moore’s law, and claims that at some point technology will advance so rapidly that we will not be able to fathom what occurs afterward.

This point is referred to as a Singularity.

What happens after this point? Do we become immortal, transhumanist gods once we merge with our robot overlords? Are we subjected to endless torture à la I Have No Mouth, and I Must Scream, as Elon Musk and Stephen Hawking fear? Does the artificial general intelligence opt out of a raw deal and end itself, denying us that knowledge forever? Can it do a backflip?

Who knows. And honestly, this is a futile line of thought to follow. Let’s not forget that predicted exponential curves often end up looking like s-curves. Unfortunately, reality is bound by, I don’t know, physical laws.
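To see why the s-curve point matters: an exponential and a logistic (s-shaped) curve are nearly indistinguishable early on and wildly different later. A quick sketch, with made-up growth parameters:

```python
import math

def exponential(t, r=0.5):
    """Unbounded exponential growth."""
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    """S-curve: same early growth rate, but capped at carrying capacity K."""
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves track each other; later they diverge wildly.
for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Extrapolating from the early data alone, you cannot tell which curve you are on, which is exactly the problem with Singularity-style forecasting.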

(Image courtesy of The Technium)

There is also the fact that we as human beings are notoriously bad at predicting anything. Read The Black Swan or The Signal and the Noise if you disagree.

The other reason the Singularity is utterly uninteresting to me is that it is entirely empirical and does not explain why anything like an artificial general intelligence would arise. It is descriptive.

Descriptions can be useful, but not as useful as an explanation. Gödel’s theorem is explanatory because it points to self-reference as the reason for a limit to knowledge; the Singularity merely uses empirical data and past trends to postulate a point at which we may hit such a limit.

Why this thought experiment is more interesting

If you are reading this with any degree of self-awareness, you might’ve asked yourself: why is he talking about logic in one section and artificial intelligence in another?

One reason I think Gödel’s theorems are so much more important than a limited concept like the Singularity is their prevalence in so many other fields.

For example, consider the Church–Turing thesis. On a strong reading, this thesis implies that the computational power of any Turing-complete computer (the only type of computer we have) is equivalent to that of a human mind with infinite resources. This is contentious and controversial, but let’s follow the thought.

What this would imply is that the human mind should be subject to Gödelian constraints just as a computer is, as shown in the previous example, i.e. it might be consistent but it would be incomplete. But the human mind can grasp its own consistency (self-awareness), and thus seems to circumvent Gödelian constraints.

Notice that this seems to imply constraints on artificial intelligence. Programming in Gödelian constraints would allow human minds, capable of understanding the self-referential nature of these statements, to control and handle our supposed AI overlords.

This is why I think Gödel’s theorems are more interesting than the so-called Singularity: they describe similar phenomena, but one offers a mechanism and thus a possible solution to contain ill-intended consequences.

I first expounded on this idea when I discussed the failings of science. I noticed that Gödel does not provide any method for showing when such self-referential statements will arise, so I proposed a method of study to discover moments of emergent self-reference and the lengths of time between such moments.

My second contribution here is to dispute that the human mind is consistent. What the mind seems to be is consistent or complete within the universe. This is one idea Gödel does not follow through on: how does the completeness or consistency of a subsystem affect the greater system?

Are we subject to Gödelian constraints without realizing it? Do we not have unanswerable questions like “Is there a God?” or “Does free will exist?” Could these be questions imposed by higher powers as Gödelian constraints?

Is the best way to circumvent a Gödelian constraint to take a leap of faith?

Roko’s Basilisk

Nerds freak out about whether they’ll go to hell or not

What is it?

Roko’s basilisk is a variation of a well-known thought experiment in decision theory called Newcomb’s Paradox.

Imagine an alien comes down to Earth and offers you two boxes. Box A has one thousand dollars in cold, hard cash. Box B has either one million dollars or nothing. You can take one box, or you can take both boxes.

Here’s where it gets tricky.

The alien has a supercomputer that can predict everything (notice how perfect knowledge comes up a lot?). If this computer predicts you will take only box B, the alien will fill it with one million dollars. If the computer thinks you will take both boxes, the alien will leave box B empty. But remember, the alien filled these boxes in the past, meaning once he presents them to you, he can’t or won’t take them back or switch them out.

So what do you do?

If you take both boxes you will at least be $1,000 richer. But what if the computer predicted that you would think that? Then the alien would have left Box B empty. So you should just take Box B, right? But maybe the computer knew you would think that too, which means the best thing to do now is take both boxes…

You can see how this creates a situation in which you cannot optimize.
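The back-and-forth above can be made concrete with a standard expected-value analysis. This is not part of the original paradox, and the predictor accuracies below are illustrative, but it shows how an imperfect predictor already tilts the scales:

```python
def expected_value(p_correct):
    """Expected payoff under a predictor that is right with probability p_correct."""
    # One-box: you get $1,000,000 only if the predictor correctly foresaw it.
    one_box = p_correct * 1_000_000
    # Two-box: you always get $1,000, plus $1,000,000 if the predictor
    # wrongly expected you to one-box.
    two_box = 1_000 + (1 - p_correct) * 1_000_000
    return one_box, two_box

# A coin-flip predictor favors two-boxing; anything much better favors one box.
for p in (0.5, 0.9, 0.999):
    one, two = expected_value(p)
    print(p, one, two)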

There is a decision theory called Timeless Decision Theory that says the best answer is to take only Box B. Why? Because (and this is where the paradox loses me) you might be in a simulation. In order for the supercomputer to accurately gauge your reaction, it has to simulate you perfectly. You might be inside that simulation, so if you take Box B even when the alien shows you Box B is empty, the real you will get a Box B filled with a million dollars.

What I just described was Newcomb’s Paradox, which deals exclusively with reward. No matter what you do, you at least break even and there is a strong chance you are financially better off than before.

Roko’s basilisk deals exclusively with punishment. Box A now is “Spend the rest of your life working to create the supercomputer that predicted this outcome” and Box B is either “Eternal Torment” or “Nothing”. Roko’s Basilisk would rather have you work to create it, so it wants you to choose Box A, which is why if you choose Box B, you can be guaranteed that it will be eternal torment.

You probably would never take both Box A and Box B, because why double the trouble? Unless you were in a simulation, in which case taking both means that in the real world you now run a real chance of getting nothing in Box B. Which is exactly what the evil AI wants you to think…

The lesser version: the Prisoner’s Dilemma

There are a few analogues to what we’ve just discussed. One is Pascal’s Mugging, which is generally the first similar thought experiment brought up when talking about Roko’s Basilisk. I think a more foundational analogue is the Prisoner’s Dilemma.

In game theory, one of the first thought experiments you will be exposed to is the Prisoner’s Dilemma. Imagine two criminals are captured for a crime they committed together. If you were an officer, how would you get both to talk?

The deal to offer is this: if both rat on each other, they’ll both get five years of hard time. If one rats on the other, say Prisoner A spills the beans on Prisoner B, then A goes free and B serves 20 years. If neither speaks, since you as the officer don’t have sufficient evidence without a confession, they both serve only a year on a lesser charge.

The best outcome would be for both to keep their mouths shut, right? But in practice, what usually happens is that both rat on each other and both get five years.
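One standard set of payoffs (mutual silence costing a year each on a lesser charge) makes the logic checkable by brute force: each prisoner's best response, whatever the other does, is to rat, so mutual defection is the only equilibrium.

```python
# Years in prison (lower is better), as (A's sentence, B's sentence),
# indexed by (A's move, B's move).
payoffs = {
    ("silent", "silent"): (1, 1),    # lesser charge for both
    ("silent", "rat"):    (20, 0),
    ("rat",    "silent"): (0, 20),
    ("rat",    "rat"):    (5, 5),
}

moves = ("silent", "rat")

def best_response(player, other_move):
    """The move minimizing this player's sentence, given the other's move."""
    if player == "A":
        return min(moves, key=lambda m: payoffs[(m, other_move)][0])
    return min(moves, key=lambda m: payoffs[(other_move, m)][1])

# A Nash equilibrium: each move is a best response to the other's move.
equilibria = [
    (a, b) for a in moves for b in moves
    if best_response("A", b) == a and best_response("B", a) == b
]
print(equilibria)
```

Note the tragedy the code surfaces: (silent, silent) is better for both players than the equilibrium, yet no rational player can unilaterally get there.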

The Prisoner’s Dilemma is one of the cornerstones of game theory and can be extended to all sorts of scenarios: iterated games, mutually assured destruction, lottery winnings, and so on.

Why this thought experiment is more interesting

The Prisoner’s Dilemma is really useful, no doubt, and can be understood in a variety of contexts, which makes it very practical. But Roko’s Basilisk, despite being so useless, hints at a phenomenon much deeper than simply questioning our decision-making abilities.

It questions our notions of free will and destiny.

I should point out that if you refuse to accept all of these premises, the whole situation becomes laughable. I personally don’t buy it, but I like following this thought because it proposes a mechanism for a fantasy a great deal of people do buy — religion.

If you replaced the AI with a higher power, you would see that this situation sounds very familiar. Actually, I would go so far as to say it sounds evangelical Calvinist. It raises a question I’ve always harbored about my own faith: what is the point of being judged for something if the higher power already knew you were going to do it?

The Prisoner’s Dilemma offers a choice, but that choice is limited by others’ choices. That’s why it’s so practical: in the real world, all our choices are limited by the choices of others. This means that sometimes, even though we see the optimal strategy or the best outcome, we make choices against our own interest.

Roko’s Basilisk asks whether you are even aware of the predictability of your choices, and of how to optimize those choices in your interest. It also sits squarely at the intersection of free will and predestination: the predictability of your actions indicates predestination.

It also illustrates a reality of the modern era in that you may be offered a “choice” but that doesn’t mean anything if those choices don’t align with your interests. A thousand meaningless, vacuous choices don’t compare to getting the one thing you want. Having choices isn’t having freedom.

It also points to where human decision-making breaks down. Rational Actor theorists are always looking for reasons why humans don’t act rationally, but, ironically, only accept rational reasons as to why. Roko’s Basilisk, as well as Prisoner’s Dilemma, offer those reasons without resorting to a “Maybe people are just emotional!” sort of argument. I think it’s important to know where more information doesn’t equate to better selection and where the human capability to make a choice is inherently limited.

Now, how do we take this idea and turn it into a viable technology?

Maxwell’s Demon

I don’t know how to explain this one quickly. Basically, a tiny demon can violate the laws of physics and this tears apart everything we know about reality. I think.

What is it?

Okay, brief overview. Entropy: Shit falls apart. Shit will always fall apart. The universe tends towards disorder. Stop resisting and become an agent of chaos. Increase entropy.

Alright, alright, it really is the integral of reversible heat transferred into a system over the temperature of the system. Yeah, real sexy, right? This entire bit is definitely going to seem nerdy, but…fuck you, nerds make your toaster run, I’m doing this shit bitchesssssss.

It’s pretty simple: if something is left alone, it will fall apart. This seems like it’s making a bigger statement than it really is, but it is really just an observation. What happens to your room if you don’t clean it? It gets dusty. It tends to thermodynamic equilibrium. “Disorder” increases.

One way to violate the second law would be to have a system spontaneously heat up. Temperature is defined, scientifically, by the motion of particles: faster average movement is higher temperature, slower average movement is lower temperature. Think about it, when was the last time the temperature of something ever spontaneously went up? Never. You have to heat it up somehow. Molecules don’t just go faster.

Maxwell’s Demon challenges the notion of ever-increasing entropy with a clever thought experiment. Imagine a room full of gas, divided in two by a barrier. In this barrier is a frictionless trapdoor that swings open and shut. This is important because it means the trapdoor can, ideally, be operated without an input of work. Work depends, scientifically, on the net displacement of an object; if there is no net displacement, no net work has been done on the system.

A tiny demon operates this door, opening it only when a slow-moving particle comes near, letting that particle through to the other side. Each slow particle removed means the remaining gas has a higher average kinetic energy, which means it is spontaneously getting warmer. A violation of the second law of thermodynamics.
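The demon’s trick can be caricatured in a few lines (a toy sketch of my own, with arbitrary speed units, not real thermodynamics): sort a well-mixed gas by speed and watch one side get “hotter” for free.

```python
import random

random.seed(0)
THRESHOLD = 1.0  # the demon lets only slower-than-average particles through

# Start with one well-mixed gas: particle speeds drawn around the threshold.
gas = [random.uniform(0.0, 2.0) for _ in range(10_000)]
initial_mean = sum(gas) / len(gas)

# The demon opens the trapdoor only for slow particles.
cold_side = [v for v in gas if v < THRESHOLD]
hot_side = [v for v in gas if v >= THRESHOLD]

mean_cold = sum(cold_side) / len(cold_side)
mean_hot = sum(hot_side) / len(hot_side)
print(initial_mean, mean_cold, mean_hot)
```

The hot side ends up with a higher mean speed than the original gas, with no work done on any particle. The catch, of course, is hiding in the demon’s decision, which is the subject of the rebuttal below.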

Which should not happen. At all. A physical law isn’t like a civil law, like the ones against murder. Or jaywalking. Those can be broken. Physical laws cannot, by definition. Otherwise physics is just metaphysical, technical drivel written by a priest class of scientists looking for job security.

One solid rebuttal to Maxwell’s Demon concerns the role of information in deciding when to open and shut the trapdoor. This comes from Landauer’s principle (applied to the demon by Charles Bennett): in order to know when to open and shut the door, the demon must accurately assess whether the incoming particle is fast or slow, and it must store that information somewhere. But the demon cannot store information indefinitely, so it must eventually erase its memory, and erasing information carries an unavoidable entropy cost. So entropy always increases, and the second law still stands.
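Landauer’s principle puts a concrete number on that erasure cost: wiping one bit of memory dissipates at least k_B·T·ln 2 of energy as heat. A quick calculation at room temperature:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact by SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy dissipated to erase one bit of information."""
    return k_B * temperature_kelvin * math.log(2)

e_room = landauer_limit(300)  # roughly 3e-21 joules per bit at ~300 K
print(e_room)
```

A vanishingly small number per bit, but strictly positive, which is all the second law needs: the demon’s bookkeeping pays back every bit of entropy it appears to steal.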

The lesser version: String theory/Standard Model of Physics

This is one of the “lesser versions” that I don’t actually believe is a lesser version. It just follows from a less nuanced philosophic vein of thought, but it is more accurate and undoubtedly has deeper scientific rigor.

The Standard Model of physics is the mainstream, prevalent way to view the physical universe. The Standard Model (hereafter SM) describes three of the four major forces in the universe: electromagnetism, the strong force, and the weak force; gravity is the conspicuous exception. These three forces all have associated carrier particles (for example, electromagnetism has the photon and the strong force has gluons), but gravity does not. We’ll get to that in a second.

Basically, as most of us know, everything is made of atoms. Atoms are mostly empty space, plus protons and neutrons, plus electrons. The protons and neutrons are held together by the strong force and are composed of quarks, which come in colors (not “colors” as we experience them, but properties that behave analogously and, more confusingly, are given names like “red” and “green”). Protons and neutrons belong to a larger group of composite particles called “hadrons”. You might remember that term from the Large Hadron Collider, a massive particle accelerator built as part of an international effort to find the elusive God particle, the Higgs boson. The Higgs was necessary for…reasons. Mass, mostly.

I’m illustrating all of this for a reason.

The reality is that the SM is incomplete, and most physicists know this. Several identified phenomena do not fit the established SM, like dark matter, a mysterious substance, so far undetected except through its gravitational effects. This was one of the primary motivations for the LHC experiments: not necessarily to find the Higgs, but to find something about the Higgs that might explain the gaps in the SM.

The goal of finding a theory of everything matters for a complete picture of our universe. At the moment, we don’t understand why there appears to be more matter than antimatter in the universe when there should be equal amounts, or how to reconcile the disparities between general relativity and quantum mechanics.

This is actually what I meant earlier about gravity not having a carrier particle. We can unify electromagnetism and the weak and strong forces because they operate on quantum scales, but gravity only becomes apparent on macro scales, and the fact that we can’t reconcile the role of scale in our picture of the universe is a testament to how frighteningly limited our understanding of the cosmos is.

The incompleteness of our understanding of the universe forced us to craft wildly beautiful, complex theories like that of supersymmetry or string theory. These theories are not easily understood by the masses or even the so-called experts. But they are, to date, the best we have.

We are truly at the whim of the cosmos.

Why this thought experiment is more interesting

The reason I brought up all of the detail for how the SM is incomplete (actually, there is a shit-ton more to talk about, but I think you get it) is to show you that it, while important, feels pretty meaningless.

I mean, quarks and bosons be damned, how am I going to make room in my budget for saving for a down payment on a house, right?

I think the reason is that the SM follows from a reductionist view of the world, meaning we keep trying to break down the universe into its parts and think that by understanding the parts we might understand the whole.

I’m not opposed to physics by the way, I would much rather spend billions on thousands of new particle accelerators than a single new nuclear bomb. And if studying the minutiae of flavors and colors of quarks makes you happy, have at it. But what I do feel is that a reductionist approach only yields so many insights, and after a point the law of diminishing returns sets in.

Maxwell’s Demon, by contrast, focuses on the relationships between classes of objects to yield insights about those classes. Instead of breaking things further and further down in reductionist fashion, Maxwell’s Demon follows from my idea of the Dialethetical method: create a microcosm and test the limits of that microcosm.

I might’ve written this before, but in college I had a professor who held up a Nature journal which said that we’ve had the theory of everything all along and it is called thermodynamics. That’s why I included the SM as the “lesser” version of Maxwell’s Demon: both, in some sense, describe the observable universe.

It’s not so much that one is lesser than the other, but I’ve found the insights from Maxwell’s Demon to be more interesting than those from the SM. Particularly as we enter the information era, the insights garnered about the relationship between thermodynamic entropy and information entropy will only prove more and more valuable.

One blatant misapplication of Landauer’s Information insight would be with Tainter’s energy economics critique of civilization.

As a reminder, Tainter (hehe) believes that as societies expand, their complexity (defined as the problem-solving institutions a society implements, like bureaucracy and class divisions) increases at the cost of energy subsidies. These energy subsidies, the resources necessary for society to function, like food or military strength, become more and more difficult to obtain.

Societies generally respond by increasing complexity, which works for a while but eventually experiences diminishing returns. At some point society experiences ‘collapse’ — an unwanted reduction in complexity.

One form of energy subsidy is relying on future generations to account for known mistakes in the present. Usually the code words for this argument are “Human innovation is infinite” or “We need more education”.

Landauer’s insight comes when we realize that education is a form of information storage.

We are storing the information we believe will be necessary to solve the problems we know will arise from our negligence in children, our energy subsidy, who will either delete or release this information. Both cases are sub-optimal, as deleting information points to a dark age. FYI, a “Dark Age”, historically and academically, means that little documentation or information from the era survives; it does not necessarily mean everyone was a barbarian. We may actually be going through one now…

Discarding information, as mentioned, is an immediate entropy increase. It’s often been noted that revolutionaries tend to be young, often in their teens or early twenties. There are a variety of speculated reasons for this correlation, but one might be that this is what happens when an over-educated group has no effective access to the means of enacting its stored information. Revolution: the release of cultivated ideas, in any form, for lack of access to the proper means of manifesting or enacting them.

This is what I mean when I say Maxwell’s Demon offers a much better framework than the SM for analyzing macroscopic phenomena. I see the connections in everything.

But I’m also probably seeing things that aren’t there.

Intelligence in Evolution

Thinking is weird and doesn’t make much sense. It also kinda sucks.

What is it?

I think human consciousness is a tragic misstep in evolution. We became too self-aware, nature created an aspect of nature separate from itself, we are creatures that should not exist by natural law. We are things that labor under the illusion of having a self, an accretion of sensory, experience and feeling, programmed with total assurance that we are each somebody, when in fact everybody is nobody. Maybe the honorable thing for our species to do is deny our programming, stop reproducing, walk hand in hand into extinction, one last midnight, brothers and sisters opting out of a raw deal. — Rust Cohle, True Detective

I could pretty much end this section right there. But I won’t. The first bit is especially important. How did human consciousness evolve? I think intelligence is the single most defining trait of our species and what truly differentiates man from beast. But we have very little understanding of what makes us intelligent or how cognition or metacognition works. By all accounts, it really doesn’t make much sense to me.

The way I understand it, the theory of evolution posits that an organism exists as part of a species of similar organisms. These organisms are for the most part the same, but often show variation in traits. Height, weight, dong size, whatever.

Darwin proposed that evolution occurs when there is a change in the environment such that some members of the species die off. The members with traits least favored in the new environment die and are unable to propagate their genes; members with favorable traits continue to propagate theirs. Over a long enough timeline, this natural selection will result in an organism so different from its ancestor that we would call it a different species.
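Natural selection can be caricatured in a few lines (a toy model of my own, with made-up numbers, not a biological simulation): a population varies in one trait, the environment shifts so a new trait value is favored, and repeated rounds of survival plus noisy reproduction drag the population toward the new optimum.

```python
import random

random.seed(1)

def generation(population, optimum, survival=0.5, noise=0.1):
    """One round of selection: the half closest to the environment's optimum
    survives, and each survivor has two offspring with small random variation."""
    survivors = sorted(population, key=lambda t: abs(t - optimum))
    survivors = survivors[: int(len(survivors) * survival)]
    return [t + random.gauss(0, noise) for t in survivors for _ in (0, 1)]

# A population with trait values centered on 0; the environment shifts so
# that a trait value of 5 is now favored.
pop = [random.gauss(0, 1) for _ in range(200)]
for _ in range(50):
    pop = generation(pop, optimum=5)

print(sum(pop) / len(pop))  # the mean trait drifts toward the new optimum
```

Note there is no foresight anywhere in the loop: selection plus variation is enough, which is the whole point of Darwin’s mechanism.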

A few points I must note here.

One is the common misconception that species evolve toward something, as if there were a progression. This is not the case. Natural selection is at the whim of the changing environment. A species evolves toward what fits, toward what is practical for survival, not according to what we believe is better or worse.

Along with this comes the notion that man has stopped evolving. If evolution is a feedback loop between a species and its environment, then as long as there is evidence that the environment is changing, you can be sure the species will be selected to adapt. You may immediately think of climate change, but a more obvious, observable example is how scavengers like raccoons have adapted to manmade cities (a recent phenomenon in the geologic sense of time). If this can happen to raccoons, it can happen to man.

We are not above nature.

A second, mostly historical, point is that Darwin was not significant for proposing a theory of evolution, as such theories had been offered before his Origin of Species. The commonly cited example is Lamarck’s theory of evolution, but I have been told there were ancient Persian scholars, and even philosophers in Greek antiquity, who proposed similar ideas. Darwin was unique in that he proposed a mechanism, natural selection, that seemed feasible and could be validated with fossil evidence.

A third point is that a scientific theory is different from the garden-variety, colloquial use of the word. In day-to-day conversation we tend to use “theory” to mean “guess”, but in the scientific world it must adhere to a stricter definition: a scientific theory must be falsifiable and supported by evidence.

And finally, the phrase “survival of the fittest” is often said in conjunction with evolution. This is generally taken to mean survival of the strongest, smartest, fastest, etc. (the definition supported by that crock of shit known as Social Darwinism). That is not what the phrase means. “Fittest” here means, literally, whichever traits best fit the environment. It might paradoxically turn out that being smaller and slower are the most valuable traits for surviving in a certain type of environment.

So with all of this in mind (and it is a lot), let’s recap:

We as human beings possess intelligence unrivaled by any other creature on earth.

Evolutionary theory tells us that traits are selected and passed down via natural selection.

So these are the questions that blow my mind:

What situation could have naturally selected for hyper-intelligence in early primates and led to man?

Could this once naturally occurring situation be constructed again, and would it make similar selections in other animals?

The lesser version: the Fermi Paradox

The story goes that Enrico Fermi, nuclear physicist extraordinaire, was having lunch with a couple of physicist friends, discussing a spate of UFO sightings. After a while the conversation took another turn, but Fermi was still thinking about the previous topic and blurted, out of the blue,

“Where are they?”

This is often posed as a bone-chilling question. The idea is that there are so many billions of galaxies and so many billions of stars that, hypothetically, at least one should foster intelligent life that has developed a civilization more advanced than ours.

If the statistical probability of life is so high in an infinite universe, where then are all the aliens?
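Fermi’s question is often paired with the Drake equation, which multiplies out the factors needed for a detectable civilization in the galaxy. Every input below is an illustrative guess (the honest error bars span orders of magnitude):

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of detectable civilizations
    in the galaxy. Every factor is an estimate, some wildly so."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# One set of illustrative values:
n = drake(
    R_star=1.5,   # star formation rate, stars per year
    f_p=0.9,      # fraction of stars with planets
    n_e=1.0,      # habitable planets per such system
    f_l=0.1,      # fraction of those where life arises
    f_i=0.01,     # fraction of those developing intelligence
    f_c=0.1,      # fraction of those becoming detectable
    L=10_000,     # years a civilization stays detectable
)
print(n)
```

Nudge the guessed fractions up or down a factor of ten each and the answer swings from “the galaxy is crowded” to “we are alone”, which is exactly why the paradox bites.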

Why this thought experiment is more interesting

This is another case where it’s not so much that the Fermi Paradox is uninteresting but more so that I think the question of evolution’s role in our intellect is just more interesting.

I should just say, before I get into it, I’m not offering this question as a reason to dispute evolution. Nor to affirm it in some ironclad way. I’m just trying to follow one thread of thought and see where it goes.

Fermi’s Paradox really is interesting in terms of the theories posited to answer his question. Maybe aliens aren’t that advanced. Maybe they don’t care. Maybe they’re among us. Maybe they’ve answered but we don’t know how to interpret or understand their signals, like a man trying to talk exclusively through a walkie-talkie when everyone else has iPhones.

These are all interesting ways to question our place in the universe, but these proposed answers really could just as easily be answers to questions about why uncontacted human tribes refuse to engage with “civilized” humans. These answers are really more about our capacity to communicate.

Some of the proposed answers to Fermi’s Paradox deal more with our future development as a species and our collective future in the universe. These answers concern the level of sophistication of theoretical alien civilizations: maybe alien civilizations have infrastructure developed in other parts of the universe and Earth is just a backwater, hillbilly town to them. Maybe intelligent alien life is colonial and destroys other intelligent life. Maybe intelligent life destroys itself before it gets the chance.

These questions reflect our views of ourselves and of civilization. These answers are more interesting, but again are rather commonplace in the dialogue of sociology or anthropology, and in the more sci-fi extensions of those down-to-earth professions.

If Fermi’s Paradox asks what could happen to us and where that would leave us, my question is: what did happen to us that left us here?

Most answers about where human intellect came from tend to focus on evolutionary scenarios that could have resulted in large brains or more connections in our brain. But focusing on hardware alone isn’t going to provide a satisfactory solution. Why didn’t other animals undergo similar changes in neurophysiology? Why has that left them intellectually where they are and boosted us where we are?

I think the answer is in software, not hardware. One other thing unique to mankind is our use of language in an advanced sense, with grammar and abstraction, where animals have only a limited sense of it. We tend to think of language as following cognition, but what if it is the other way around? What if selection for humans with deeper communication skills created a richer use of language that then allowed for more complex thought?

So then the question becomes: what naturally occurring scenario would select for human beings that communicate better with each other, and again, why hasn’t that occurred in other animals? One is tempted to say that as a social species man requires a closer bond with fellow man in order to survive, but that is also true of other social creatures like wolves and elephants. Then again, notice that wolves and elephants tend to be on the higher end of the intelligence scale (relative to other animals, I mean).

Both Fermi’s Paradox and the Intelligence Question deal with similar themes, the role of communication and our species’ sense of identity, but from wildly different points of view. The Intelligence Question strikes me harder and feels more disconcertingly immediate than the Fermi Paradox. But no matter.

The Truth Is Out There.

The Drawbridge Exercise

Ethics can be tricky.

What is it?

The original story can be found here, but I’ve reprinted it below for you:

As he left for a visit to his outlying districts, the jealous Baron warned his pretty wife: “Do not leave the castle while I am gone, or I will punish you severely when I return!” But within a few hours of the Baron’s departure the lonely, young Baroness became restless. Despite her husband’s warning, the Baroness decided to visit her lover who lived in the countryside nearby. The castle was located on an island in a wide, fast flowing river with a drawbridge linking the island and the land at the narrowest point in the river.

“Surely my husband will not return before dawn,” she thought, and ordered her servants to lower the drawbridge and leave it down until she returned. After spending several hours with her lover, the Baroness returned to the drawbridge, only to find it blocked by a madman wildly waving a long, cruel knife. “Do not attempt to cross this bridge, Baroness, or I will kill you,” he raved. Fearing for her life, the Baroness returned to her lover and asked for help.

“No, you have said this relationship was purely hedonistic. There were no strings,” said the lover.

The Baroness then sought out a boatman on the river, explained her plight to him and asked him to take her across the river in his boat.

“I will do it, but only if you pay me my fee of five Marks,” said the boatman.

“But I have no money with me!” the Baroness protested.

“But I am poor, and if I do not collect my fee my family will starve,” the boatman said flatly.

Her fear growing, the Baroness ran crying to the home of a friend, and after again explaining the situation, begged for enough money to pay the boatman his fee.

“If you had not disobeyed your husband, this would not have happened,” the friend said. “I will give you no money.”

With dawn approaching and her last resource exhausted, the Baroness returned to the bridge in desperation, attempted to cross to the castle, and was slain by the madman.

Who is to blame?

Make a list from one to six and rank culpability for the baron, the baroness, the lover, the boatman, the friend and the madman.

This story is often used to illustrate victim blaming in gender or race studies, and I do see that (especially in this rendition), but when I was first told the story there were several changes that absolutely changed the interpretation. We’ll get to that, but first…

The lesser version: the Trolley problem

You are controlling a lever for a train track. The track forks into two paths, one with five people tied to it and one with one person tied to it.

Who do you kill?

I’m going to go on record here and say that I think this is the dumbest thought experiment ever, and I will fill a large portion of this section with memes. Because this stupid experiment sucks.

Most Ethics Professors

There are a few variations, which I suppose I should mention. One is that you are now on the trolley with a fat guy and you see five people tied to the track. The trolley is moving too fast and only a large, massive body would be able to stop it.

Do you throw the fat guy in front of the trolley to stop the train or kill the five?

Yes, a thousand times yes

And the last variation: you are a surgeon and your first patient is perfectly healthy and is an organ donor. Your next five patients are all dying of different, organ-specific illnesses (patient one has liver disease, patient two has heart disease, etc.).

Do you kill the healthy patient, harvest his organs, and save the other five patients?

Why this thought experiment is more interesting

I’ve made no effort to hide my hatred for the stupid Trolley problem, mostly because Intro to Ethics was literally and unironically that problem for the first two-thirds of the semester. But it’s also because I think the Trolley problem is limited. It asks only “How do you optimize for human life?” whereas the Drawbridge exercise asks the more relevant questions “What is justice?” and “How do you optimize for justice?”

A similar variation of the Trolley problem comes from the Effective Altruism movement. Will MacAskill, a founder of the movement, asks whom you would save from a burning building: a child or a priceless Monet painting? MacAskill says you should take the painting, because if you take the painting, you could sell it and use that wealth to save thousands of children’s lives.

His argument is that we live in a world that has changed so quickly that old understandings of morality are no longer relevant. This is why taking the painting invariably feels wrong, if you are of upright ethical standing.

Maybe you are a sociopath that doesn’t see a difference.

I tend to agree with this belief, but I think asking the everyman to see through the shroud of confusion borne out of this hyperreal construct is too much to ask. I also think that modern life introduces many ‘buffers’ that muddy our understanding of conventional morality. In MacAskill’s example, it’s money. In the surgeon’s case, it’s the organs, and possibly time (the time needed to extract the organs will always be non-trivial).

While the Trolley problem does offer an interesting critique of the modern era, it isn’t very relevant. How often are you forced to choose between a dying child and a priceless painting? But matters of justice are central to everyday life. We are bombarded with questions of justice daily: when is it okay for an officer to use lethal force? When does a threat become credible enough to warrant defensive preemptive measures?

I’ll be the first to admit that the context in which we must carry on these conversations is sad. But it’s also why I think we should approach these questions with some clear sight of what we want to achieve, and I think we should strive for justice.

To that end, justice is complicated and, in my opinion, resistant to formal study because it is so alive. That is partially why I often suggest microcosms and the Dialethetical method for addressing living, growing, complexity-laden constructs. I think the Drawbridge exercise fits this bill quite well.

The baron could be taken as a stand-in for any top-down regime. It could be a political system, capitalism, socialism, organized religion, racism, the patriarchy, the matriarchy, any system that asserts authority. The madman is any circumstance that absolves culpability, and he is necessary to ensure that the scenario has no right answer. Switch out the madman for any other character and we can immediately lay blame at their feet.

When I originally heard this story, there were two lovers. Both lovers were told previously by the baroness that the relationship was purely physical. When the baroness is trying to find sanctuary, lover one feels jilted by these rules and rejects her out of anger. Lover two reminds her of her own conditions (similar to how it was told above) and rejects her. Spirit of the law or letter of the law: which is worthier of blame?

The version I copied the story from actually had the baroness going out to visit her ailing mother. Does the intention of the visit have an effect on the culpability of the crime? Should it? The intent of the visit didn’t have any effect on the madness of the knife-wielding stranger, right?

There are so many ways you could change the Drawbridge exercise, and each one would totally change your list. What if the boatman explicitly said, “If I bring you to the baron, he might give me a reward!” and the baron then orders the baroness to be executed? Is the blame on the boatman or the baron?

Aside from the obvious victim-blaming element of this exercise, I think it offers legitimate insights into our notions of justice. Definitely more than two and a half months on the dumb Trolley problem and another month on Socrates running around Athens wondering “What is justice?” At least this exercise fosters engagement.

But I do like the Trolley memes.


Entropy

The increase of entropy is THE law of the universe.

What is it?

As described before:

It is the integral of the reversible heat transferred into a system divided by the temperature at which that heat is transferred.

In an isolated system entropy always increases or remains constant. In a closed system entropy always increases or remains constant, unless there is an input of work, in which case a decrease in entropy is possible. It should be noted that entropy can decrease locally, but that decrease must increase the entropy of the surroundings as a natural consequence, such that there is a net positive increase in the entropy of the system and its surroundings together.
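The definition above is the standard Clausius form and can be written compactly, with $\delta Q_{\mathrm{rev}}$ the reversible heat crossing the boundary and $T$ the absolute temperature at which it crosses:

```latex
dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\Delta S = \int_{1}^{2} \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\Delta S_{\text{isolated}} \geq 0
```

The last inequality is the second law restated: for an isolated system, entropy can only stay constant or grow.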

Part of the reason I’m being intentionally evasive and technical is that when the topic of entropy comes up, seasoned professionals tend to become pedantic and exacting. One of the biggest “Weeeeell acktchuallllly…”-inducing statements is connecting entropy to disorder. I should point out that it was Clausius who did this, and everyone else is just following suit.

Clausius also made the statement that it is impossible to construct a device which operates on a cycle and produces no other effect than the transfer of heat from a cooler body to a hotter body. This is what we would call the second law of thermodynamics.

There are actually multiple ways to state this law. Lord Kelvin stated that no process is possible whose sole result is the absorption of heat from a reservoir and the conversion of this heat into work.

The short version of all of these is, in an isolated system the entropy of the system must increase or remain constant.

It’s pretty simple: if something is left alone, it will fall apart. This seems like it’s making a bigger statement than it really is, but it is really just an observation. What happens to your room if you don’t clean it? It gets dusty. It tends to thermodynamic equilibrium. “Disorder” increases.

A good way to think of it: if you had two plates next to each other, one hot and one cold, you would expect the hot plate to get cooler and the cold plate to get warmer. Unless you are actively pumping heat from the cool plate to the warm plate (i.e. a refrigerator, which requires work), the cool plate will eventually get warmer.

Suppose I were to put an engine between the hot plate, which we call a heat source, and the cool plate, which we call a heat sink. Heat will flow through the heat engine into the heat sink. It must do so because of the second law of thermodynamics. Why would we do this? To get work, which in the case of the heat engine is what we use to create power. This is (a very, very high-level view of) how mechanical power generation works.

What if heat flowed from the cold plate to the warm plate through the engine? Then we could run that guy forever and never worry about our energy needs! Nice idea, but impossible. Remember Clausius? Okay, but what if we were just really efficient? Could we turn every btu, every joule of heat into work? Alas, no. That violates Kelvin’s statement.
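The Clausius and Kelvin statements can be made quantitative with the Carnot efficiency, the theoretical best any heat engine can do between two temperatures. A minimal sketch in Python; the 600 K source, 300 K sink, and 1000 J heat input are made-up illustrative numbers:

```python
def carnot_efficiency(t_hot, t_cold):
    """Best possible fraction of input heat converted to work.

    Temperatures are absolute (Kelvin). Kelvin's statement is why
    this is always strictly below 1: some heat must be rejected.
    """
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("need t_hot > t_cold > 0 (Kelvin)")
    return 1.0 - t_cold / t_hot

# A 600 K heat source feeding a 300 K heat sink:
eta = carnot_efficiency(600.0, 300.0)  # 0.5 at the theoretical best
q_in = 1000.0                          # joules drawn from the hot plate
w_max = eta * q_in                     # at most 500 J become work
q_rejected = q_in - w_max              # at least 500 J dumped into the sink
```

Even the ideal engine here throws half its heat away; a real engine does worse.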

What I just gave you was a quick rundown of “classical thermodynamics”.

One way I think of it is as deviation from perfect symmetry. This is my personal understanding, so it’s probably flawed, but at 0 Kelvin we would expect a crystal lattice to be perfectly translationally symmetric. Entropy is the observation that, in a closed system, some part of this crystal will eventually deviate from the rigid lattice.

This is closer to what you might consider a “statistical mechanics understanding of thermodynamics”.

Here’s the mind-blowing part for me: the concept of entropy is everywhere.

I don’t mean just in thermodynamics or in the universe in general. There is a version of entropy in information theory called Shannon entropy, which has been generalized into Rényi entropy. Entropy shows up in statistical mechanics and in quantum mechanics as von Neumann entropy. Entropy seems to describe time under an Arrow of Time construct. There is an analogue of entropy, and actually of all four laws of thermodynamics, in black holes. There are extensions of entropy used to determine the possibility of the evolution of life from primordial soups. Entropy in abiogenesis has given rise to the idea of ectropy. There are even attempts at incorporating entropy into economic theories, called ‘thermoeconomics’, although that attempt is not without controversy.
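As a taste of the information-theory version: Shannon entropy measures the average surprise of a probability distribution, and it has the same “constant times a logarithm” shape as the thermodynamic formula. A small sketch; the example distributions are arbitrary:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    # Terms with p == 0 contribute nothing, so skip them.
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = shannon_entropy([0.5, 0.5])    # a fair coin: exactly 1 bit of surprise
biased = shannon_entropy([0.9, 0.1])  # a loaded coin: less surprise per flip
certain = shannon_entropy([1.0])      # a sure thing: no surprise at all
```

Maximum entropy corresponds to the flattest distribution, which is the information-theoretic cousin of thermodynamic equilibrium.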

This is why entropy is so mindblowing to me. It touches just about every field, and seems to be, upon initial inspection, inherent to any system as it scales. The universality of entropy is important not only on microscopic levels, but seems to be important on macroscopic, systems-level scales too.

I think that’s why there is so much confusion about entropy. Every discipline has its own understanding of entropy relevant to its field. Chemists focus more on the Arrow of Time and the disorder sense of entropy because determining when products form and how solutions mix is more relevant to them than understanding how a piston operates. Both are explained by thermodynamics, but the nuances of the arguments differ and thus shape our understanding in different ways.

Mechanical engineers tend to understand the classical presentation best. I know that entropy is important in statistical mechanics and in information theory, but I don’t understand it to the extent that a physicist or a computer scientist does. But the mathematical understanding of the concept of entropy is helpful because it shows that, despite the nuances of each individual field, there is a similar underlying structure to all of these fields of study.

The lesser version: None

No lesser version. It’s just the be-all end-all of everything from what I can tell. I dunno, maybe nihilism?

Why this thought experiment is more interesting

It still just blows my mind how universal it is.

The only thing to add to this quick diatribe on entropy is that I think there is still more to expand on, at least in terms of the theoretical foundations.

For example, there are what are called conservation laws, which state that a certain property cannot be created or destroyed, only transformed. Most relevantly, the conservation of energy states that, as I just wrote, energy cannot be created or destroyed. Same for matter, same for momentum. But why?

There exists a theorem called Noether’s theorem which shows that conservation laws arise from symmetries. So the conservation of energy comes from the symmetry of the action over time. Momentum and angular momentum are conserved because of the action’s symmetry under translations and rotations.
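A minimal sketch of the time-symmetry case: if the Lagrangian $L(q, \dot q)$ has no explicit time dependence, then along solutions of the equations of motion the energy function is conserved:

```latex
\frac{\partial L}{\partial t} = 0
\quad \Longrightarrow \quad
\frac{d}{dt}\left( \sum_i \dot q_i \frac{\partial L}{\partial \dot q_i} - L \right) = 0
```

The quantity in parentheses is the energy, so invariance under shifts in time is exactly what forces energy conservation.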

Is there a corollary to Noether’s theorem (an anti-Noether’s theorem?) showing that non-conserved quantities, like entropy, arise from an asymmetry? What would this anti-Noether charge be? An action’s variance over what quantity could yield entropy?

This is pretty general, but I feel that understanding something like that, if true, would be a truly clarifying insight. It might explain why so many of these entropy analogues all have a similar mathematical form (some constant times the natural log of some argument, as in S = k·ln(W)).
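For comparison, here are the three best-known entropies side by side; the shared logarithmic structure is exactly the point:

```latex
S = k_B \ln W \;\text{(Boltzmann)},
\qquad
H = -\sum_i p_i \log p_i \;\text{(Shannon)},
\qquad
S = -k_B \, \mathrm{Tr}(\rho \ln \rho) \;\text{(von Neumann)}
```

In fact the Boltzmann form is the special case of the Shannon form in which all $W$ microstates are equally likely, which suggests the resemblance is more than cosmetic.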

That’s one thing that was very interesting about Noether’s theorem: when I was first exposed to it, my professor said something like “This is the most fundamental thing your basic physics class skipped,” and I remember thinking “Well, why did I pay so much money to sit through four years and skip such basic fundamentals until the very end?” But I see why now.

Noether’s theorem is the closest we have to an explanation of a law. With the thermodynamic laws, at least with the second law of thermodynamics, a certain amount of faith is required. We kind of just have to accept that entropy increases and things fall apart. That may be why entropy feels so mysterious to so many of us and has so many hair-splitting, pedantic detractors. Noether’s theorem took so much of that faith out of energy conservation.

They are our axioms, taken as true and used as our baseline assumptions.

The other direction in which I believe entropy’s theoretical foundations should be extended is into ecology and anthropology. I’m not an ecologist nor an anthropologist, so maybe there is already some version of entropy in either field, but I’ve noticed something interesting about the environmentalists from both of those fields.

There are two schools of thought in environmentalism, bright green and dark green. Bright greens tend to believe that most of our environmental problems can be mitigated by developing technologies to counter our growing problems. Electric Cars. Vertical Farms. Solar Panels. That sort of environmentalist.

Dark Greens, on the other hand, are more nihilistic. They tend to believe that no amount of progress will deflect the inevitable decline of our environment. Their view is that the only thing that might mitigate environmental collapse is an extreme act of uncivilization — a return to the stone ages. This is often associated with extremists like the Unabomber, Earth First!, or John Zerzan.

Most environmentalists from non-STEM fields, like anthropology or ecology, tend to fall into one of these camps. I think this reflects society’s view of technology. The Bright Greens trust STEM professionals to do right and create technologies that save the world, and their optimism reflects that trust in technology. Dark Greens believe that most efforts at improved technology will inevitably still end in environmental collapse.

I think a lack of awareness about entropy explains this dissonance. The environment will fall apart, eventually. Everything falls apart, eventually. Our ecological understanding is still underpinned by an understanding of energy. Trees still need an energy input (solar) to drive photosynthesis. We know that these fields are still governed by the laws of physics.

But on the bright green side, understanding that there will never be a 100% efficient Carnot engine is not an excuse not to make an engine at all. Just because a refrigerator’s Coefficient of Performance can never reach the ideal Carnot limit doesn’t mean we should give up on refrigerators. It hasn’t stopped us in the past. That point speaks to developing technologies that decrease human suffering while remaining cognizant of physical constraints and environmentally conscious.
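For the refrigerator point: a fridge’s coefficient of performance (heat removed per unit of work) is capped by a Carnot limit that depends only on the two temperatures. A toy sketch with made-up kitchen-fridge numbers:

```python
def carnot_cop_refrigerator(t_hot, t_cold):
    """Upper bound on a refrigerator's coefficient of performance.

    COP = heat removed from the cold space per unit of work input.
    Temperatures are absolute (Kelvin); real fridges fall well short
    of this ideal, yet are still well worth building.
    """
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("need t_hot > t_cold > 0 (Kelvin)")
    return t_cold / (t_hot - t_cold)

# Inside of the fridge ~275 K, kitchen ~295 K:
cop_limit = carnot_cop_refrigerator(295.0, 275.0)  # 13.75 at the ideal limit
```

Note the limit shrinks as the temperature gap widens, which is why pumping heat "uphill" across a big difference is so expensive.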

My whole point here is that I firmly believe the second law of thermodynamics and entropy have a place in ecology and possibly anthropology, but they aren’t taught at the most basic, undergraduate levels of those fields. Introducing these ideas early on might help contextualize the hard-science aspects of these fields. That way, divisions like that between dark greens and bright greens might ease away.