The Logic of Risk Taking
Nassim Nicholas Taleb

How about some boundaries?

<<Spitznagel and I even started an entire business to help investors eliminate uncle points so they can get the returns of the market. While I retired to do some flaneuring, Mark continued at his Universa relentlessly (and successfully, while all others have failed). Mark and I have been frustrated by economists who, not getting ergodicity, keep saying that worrying about the tails is “irrational”.>>

I have three problems with this:
First: is Taleb advertising Universa? If so, and following the skin-in-the-game line, whose money is Universa investing? Spitznagel and Taleb’s own money only, or other people’s as well? Doesn’t this violate the rule “I do not tell people what to do, but I tell them what I do…”?

Second: Universa’s fund is privately held, correct? So “successful” is a relative concept. We have no benchmark data, nor do we know the relative numbers, and I doubt we are going to see them soon. That unfortunately makes it a very bad example to bring forward, because it cannot be verified.

Third: “…inter-temporal investments/consumption requires absence of ruin”… again, a sentence that does not add up with the rest.
What does “ruin” mean here?
…that the investment strategy you propose, by contrast, always assumes “ruin”?
Then, assuming “ruin” is “democratic” and has affected everybody, to whom exactly would you sell your “black-swan-esque” assets in a market of “ruined” people? And incidentally, in a “ruined” market, wouldn’t the black-swan or Universa asset also be worth less than the original value at which the “put” or “option” was bought? So is said “asset” really a safeguard against the tail event, or only a theoretical one?
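For readers unfamiliar with the ergodicity point Taleb alludes to, the gap between the ensemble average (across many investors) and the time average (one investor over many rounds) is easy to simulate. The sketch below is my own illustration with made-up numbers, not anything from Universa: a coin-flip bet whose ensemble-average return is +5% per round, yet which ruins almost every individual player over time.

```python
import random

random.seed(0)

# Illustrative bet (hypothetical numbers): each round, wealth is multiplied
# by 1.5 on heads or 0.6 on tails. The ensemble-average multiplier is
# 0.5*1.5 + 0.5*0.6 = 1.05 > 1, but the per-round time-average growth is
# (1.5 * 0.6) ** 0.5 ~= 0.9487 < 1, so a typical path decays toward ruin.

def simulate(paths=10_000, steps=100):
    """Return (ensemble mean, median) of final wealth across many players."""
    wealths = []
    for _ in range(paths):
        w = 1.0
        for _ in range(steps):
            w *= 1.5 if random.random() < 0.5 else 0.6
        wealths.append(w)
    wealths.sort()
    ensemble_mean = sum(wealths) / len(wealths)
    median = wealths[len(wealths) // 2]
    return ensemble_mean, median

mean, median = simulate()
print(f"ensemble mean of final wealth: {mean:.3f}")  # pulled up by a few lucky paths
print(f"median final wealth:           {median:.2e}")  # the typical player is nearly wiped out
```

The ensemble mean stays above the starting stake because a handful of lucky paths compound enormously, while the median player ends up with a tiny fraction of what they started with. That divergence is what “non-ergodic” means in this context, and it is the sense in which avoiding ruin matters for one person living through time rather than for an average over many parallel people.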

Finally… yes, the “rationality” approach has its own fallacies; we all agree that worrying about the tails is NOT “irrational”. It actually has its advantages… but also its own limitations, which I do not see pointed out clearly in this article. How exactly is the model Taleb proposes a “superior” one in real life?
1) First, with this approach to risk, I am under the impression that we would end up doing almost nothing, out of fear of a multitude of tail events which might or might not happen, in our lifetime or in the lifetime of our children... How do you choose which ones are the scariest, exactly? The series could be endless.

On this, I think that from an evolutionary perspective, the fact that people make stupid decisions based on historical records is also part of the system we are all in. And sometimes very smart people do very stupid things. It is the tragedy of the human condition, which the Ancient Greeks understood so well that they developed an art around it.

2) Secondly, I see a broader point which is heavily underplayed in this article: the “system” takes care of itself, regardless of our decisions to have courage, decency, or run away. The “system” (whatever that is…) is beyond morality. Even if all 7 billion humans living on this planet actively agreed to take the precautionary principle to heart and adhere to it totally, as a new religion and a new way of life (even in North Korea), we could still have tail events (both known unknowns and unknown unknowns) which could wipe us out for good as a species, or simply endanger our individual lives or well-being, or destroy or damage our economy irreversibly.

In conclusion, Taleb is probably right to suggest that we “say no” to tail risks; however, we should not fool ourselves that we can beat the odds that way. We can, at most, get some comfort from our impression of control over some events. That is the essence of Tragedy.
