Utility’s Edge

P.W. Anderson was awarded the 1977 Nobel Prize in Physics alongside two colleagues. Their “fundamental theoretical investigations of the electronic structure of magnetic and disordered systems” eventually paved the way to affordable computer memory.

One might imagine that Anderson, then, as the most elite in a field of elites, held his work above other pursuits. It would be understandable for a Harvard-educated physicist, who went on to lecture at Cambridge and Princeton, to do so.

Interestingly, however, in 1972 he published “More Is Different,” where he argues against this sort of artificial hierarchy. He opens by stating that reductionist philosophy and scientific thought have led to a communal bias in favor of those who study “fundamental laws”.

The status quo, and enemy of men like Anderson.

There is a reverence around physics — and, in particular, astrophysicists and particle physicists — because they examine the foundation of the universe.

But the fallacy, according to Anderson, lies in assuming that a reductionist analysis of the world naturally lends itself to a constructionist one. He writes, “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.”

There’s more.

“In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.”

We can look at the universe (or, more specifically, our world) in layers. At the microlevel we can examine subatomic particles, many times smaller than an atom. But deep understanding of subatomic particle behavior does not intuitively lead to an understanding of a cell.

Understanding cell behavior does not familiarize us with the workings of organs or organ systems. And, again, organ system function does not imply cognition, let alone culture or language or shared values.

The various levels of complexity require their own science. In Anderson’s view, each science is equally valid.

Long Term Capital Management (LTCM) was a hedge fund founded in 1993. Their strategy was called “convergence trading”: they would invest in two identical assets trading at different prices in different markets (or two materially similar assets).

Like good finance students, they bought the undervalued asset (going “long”) and bet against the overvalued asset (going “short”).

For simplicity’s sake, imagine a bar of gold traded for $1.00 in Los Angeles and $1.10 in New York. LTCM would buy gold in Los Angeles and short gold in New York. They bet that the price of gold would converge around $1.05. (In our hyper-simplified example, trading volumes are roughly equal.)
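The gold example can be sketched in a few lines of Python. This is a toy model: `convergence_pnl` is a hypothetical helper, and real trades carry financing, margin, and execution costs that are ignored here.

```python
# Hypothetical convergence trade: buy in the cheap market (long), sell in
# the expensive one (short), and profit as the two prices meet.
def convergence_pnl(price_long, price_short, converged_price, units=1):
    """P&L when both legs settle at the converged price."""
    long_gain = (converged_price - price_long) * units    # bought low
    short_gain = (price_short - converged_price) * units  # sold high
    return long_gain + short_gain

# LA gold at $1.00, NY gold at $1.10, both converging to $1.05:
profit = convergence_pnl(1.00, 1.10, 1.05)
print(round(profit, 2))  # 0.1 -- the full initial spread, per unit
```

Note that the combined profit equals the initial spread no matter where the two prices actually meet, which is why the trade looks so safe on paper.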

The investments LTCM bought (fixed income securities and derivatives) were contracts where two parties agreed to transact. Because it was a legal obligation, with clear terms and time horizons, LTCM managers thought they could perfectly value the contracts using advanced mathematical and statistical models.

To do so accurately, Long Term Capital Management relied on superior modeling and insights from the “smartest” possible talent, including MBAs and PhDs from Harvard, MIT, Chicago, and Stanford. They also boasted two Nobel Prize winners.

Robert C. Merton and Myron S. Scholes were awarded the 1997 Nobel Memorial Prize in Economic Sciences “for a new method to determine the value of derivatives”. They were the experts in the field, and they employed sophisticated models to value assets and find price discrepancies.

The returns on investing $1,000 in Long Term Capital Management (LTCM), the Dow Jones Industrial Average (DJIA), and low-risk US Treasury bonds from 1994 to 1998.

The graph above shows various investment returns from 1994 to 1998, one year after Merton and Scholes won their Nobel.

Things, as they often do, started off well. $1,000 invested in the fund grew to $2,000 amazingly fast. It outpaced the general market, and it was just picking up. It charged to $2,500 and $3,000, then $3,500 and $4,000.

LTCM’s fund returned 42.8% in 1995 and 40.8% in 1996. That is unheard of. A great fund outperforms the market, but only by a few points. The S&P 500 may return 6% each year over a period of 5 or 10 years. A top-tier hedge fund might grow at 9–10% per year over that same period.
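Those reported returns are consistent with the doubling described above. A quick compounding check, using the 42.8% and 40.8% figures:

```python
# Compounding $1,000 at LTCM's reported annual returns.
value = 1_000
for annual_return in [0.428, 0.408]:  # 1995 and 1996
    value *= 1 + annual_return
print(round(value))  # 2011 -- the fund roughly doubled in two years
```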

And then 1998 happened.

Their fund lost 79% of its equity value in three weeks in September 1998. What people — especially those making piles of money, winning awards, and posing on magazine covers — don’t tell you about convergence trading is that your profits are (almost by definition) incredibly small.

Cars are never mispriced 30% from dealership to dealership. Similarly, public financial assets are traded in and out so regularly that markets tend to (1) have very small price discrepancies and (2) quickly converge.

Outperforming the market when exploiting 0.01% differences in price is not easy: You must make the right convergence bet but also time that bet correctly. And even then, the results are not great.

The market can remain irrational longer than you can remain solvent. — John Maynard Keynes (1883–1946)

To make any real money, LTCM’s strategy required debt. The fund started 1998 with an equity value of $5 billion (I’m being generous and rounding up) and had $125 billion in debt, making them leveraged 25-to-1.

For every dollar of their own money, they invested $25 of someone else’s. So when they won, they won bigger; but when they lost, they lost bigger, too. The practical example is real estate, where your family owns roughly 4% (1 / 26) of your home and has roughly 96% (25 / 26) tied up in a mortgage. Sound familiar?

(Why 26? Well, a 25-to-1 debt-to-equity ratio implies there are 26 parts, of which 25 are debt and 1 is equity.)
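The 25-to-1 arithmetic can be made concrete. This is a simplified sketch that ignores interest and fees; `equity_return` is a hypothetical helper, not LTCM's actual accounting.

```python
# How 25-to-1 leverage amplifies both gains and losses on equity.
def equity_return(asset_return, leverage=25):
    """Return on the fund's own equity, ignoring interest and fees.

    With $1 of equity and $25 of debt, $26 of assets are at work;
    the $25 of debt must be repaid from the proceeds.
    """
    assets = 1 + leverage
    ending_assets = assets * (1 + asset_return)
    ending_equity = ending_assets - leverage
    return ending_equity - 1  # gain or loss per $1 of equity

print(round(equity_return(0.01), 4))   # 0.26: a 1% asset gain is +26% on equity
print(round(equity_return(-0.04), 4))  # -1.04: a 4% asset loss wipes out all equity, and then some
```

A 4% move against you being enough to erase the fund is the whole story of 1998 in miniature.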

LTCM believed that the “smartest” people building the most comprehensive models could win. They believed they could account for all possible risks and scenarios.

In my mind, mathematical and statistical models are tools to price expected value. And expected value is helpful, but it’s not the same as intrinsic value. I fundamentally disagree that intrinsic value can be pegged to one number.

Markets get close to pricing intrinsic value because they have a lot of buyers and sellers acting at the same time. Reasonability emerges from this chaos. Assets continually go up and down in value because they reflect what we (collectively) believe the intrinsic value should be.

Any market is a collection of opinions, not facts.

Because, after all, an object is worth what someone else is willing to pay. And, when discussing legal contracts, an agreement is only worth something if the other party can actually hold their end of the bargain.

If you sue me for breach of contract, and I have zero assets, what will you end up with? Sure, put me in jail, but you sure as hell aren’t going to end up with money. And that’s part of the miscalculation by LTCM.

In 1998 there was a large debt crisis in Russia. Russia devalued its currency by printing vast amounts of it and either defaulted on (didn’t pay out) or restructured (set new repayment terms on) its sovereign debt.

Counterparty risk, the risk that the party at the other end of an arm’s-length transaction will be unable to fulfill their legal obligation, is real. And it cannot be captured mathematically.

According to a 2010 World Bank report, largely due to the crisis in Russia, LTCM lost $550 million on August 21, 1998. I don’t care how smart you are, losing half a billion in one day hurts. And it hurts more when you lose money doing the very thing you study.

Ironically, statistics can help us understand why the models could not account for the Russian economic collapse in late 1998.

John Hendry, author of Ethics and Finance, writes that the 1998 crisis was a 7-sigma event. Seven sigma means an event is seven standard deviations away from the average. It’s so insane that a statistics professor (or a nerd like me) will tell you it’s supposed to happen once every three billion years.
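The once-every-three-billion-years figure can be reproduced under a normal-distribution assumption. A minimal sketch (the 252-trading-day year is my assumption, and real return distributions are famously not normal, which is rather the point):

```python
from math import erfc, sqrt

# One-sided tail probability of a 7-sigma daily move, assuming normality.
p_daily = 0.5 * erfc(7 / sqrt(2))  # P(Z > 7)
trading_days_per_year = 252
years_between_events = 1 / (p_daily * trading_days_per_year)

print(f"{p_daily:.2e}")               # about 1.3e-12 per trading day
print(f"{years_between_events:.1e}")  # on the order of 3e9 years
```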

Let’s go back to P.W. Anderson. Deep knowledge in one area can lead to great advances, but that knowledge cannot be transposed into other areas.

Ask a physicist to perform brain surgery. It’s all just atoms, right?

Fitting the whole world into your one framework is incoherent. And this is not appreciated often enough. As Maslow said, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

That is the curse of individuals who seek to theorize, rather than experiment, in their field. Applying mathematics and statistics to capital markets and businesses is helpful. Hell, I do it every day.

But tools never replace intuition. Bad finance, bad data science, and bad philosophy are the same: They use one tool to solve multiple problems, even though each problem varies in complexity and scale.

If a business has a 10% chance of selling for $100 million and a 90% chance of failure, the expected value is simple to calculate. It’s $10 million.

We can expect that if we run this experiment enough times we will, on average, gain $10 million. Sounds great. But the more likely scenario is you will live the entirety of your life without getting a single dollar.

The first thought (“On average, this outcome pays me $10 million”) and the second thought (“The more likely scenario is that I will live my entire life without ever getting a dollar”) are very different yet both perfectly true.
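The gap between the average outcome and the typical outcome can be checked with a short simulation. A toy sketch, using the 10% / $100 million numbers from the example above:

```python
import random

random.seed(0)  # make the simulation reproducible

# One venture: 10% chance of a $100M sale, 90% chance of nothing.
def venture():
    return 100_000_000 if random.random() < 0.10 else 0

# Expected value is exact arithmetic...
expected = 0.10 * 100_000_000 + 0.90 * 0
print(f"${expected:,.0f}")  # $10,000,000

# ...but any single run most likely pays zero.
outcomes = [venture() for _ in range(100_000)]
print(sum(o == 0 for o in outcomes) / len(outcomes))  # about 0.9 of runs pay nothing
```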

In much the same way, statistics tells us that Russia going through all this nonsense will happen once every three billion years. Yet a model can never tell you which day the market will crash: Inputting a series of historic data cannot predict the future.

The reality is it happened in 1998. And everyone at Long Term Capital Management was too preoccupied staring at their computers to notice. In finance, more math isn’t necessarily better math. And you need to look past math altogether — into history and politics and sociology and other fields — to understand why that’s the case.

The hammer works great for hitting and removing nails. It’s not so great for surgery, though.
