On luck and correlation of events in life
Luck is a concept that probably exists in every language, a concept with an elusive definition that we invoke in many situations. Yet most of the time we have no clear idea whether we mean a sort of external intelligence loading the dice, or simply a random event playing for or against us by chance. So the question to formulate is: is luck (in the sense of “something” sorting events out in a positive or negative way) an invention of the human mind, which insists on finding an intelligent purpose behind chains of events, or is it something else that deserves an objective analysis?
There are two irreconcilable stands on this matter: on the one hand, those who think that luck has no scientific basis; on the other, those who ignore any scientific approach and believe that luck is a kind of intelligence guiding events towards a specific final result. I wonder whether there is room for an analysis that turns its back on all the existing prejudices on the matter.
In this post I take up the challenge of addressing the concept of luck. To that end I will first propose a definition, then introduce orthodox science’s stand on the matter. Next I will explain why the abuse of the probabilistic approach is incorrect, and finally I will tell a couple of real stories that challenge our common sense.
I will leave for a second post the introduction of some processes capable of creating real patterns in Nature, as well as other mechanisms that trick us into recognising non-existent patterns. To conclude, I will dare to outline a couple of models aimed at explaining series of correlated events in life (luck).
A DEFINITION OF THE CONCEPT OF LUCK
Luck would be nothing more than the assessment of the eventual outcome of a chain of events and its subsequent classification as either positive or negative. From this definition it becomes clear that there is an important subjective factor underlying the way we assess a series of events as positive or negative for us. In order to get rid of this subjectivity as much as possible, I will address only those chains of events clearly correlated in either a positive or negative way, discarding those chains of events whose positive or negative character depends on a personal interpretation.
From the previous paragraph we can derive an equivalent, slightly more technical definition: luck would be a chain of events clearly correlated in either a positive or negative way. Let us make clear that such events should not be the outcome of a chain of bad or good decisions leading to the corresponding negative or positive results. This point is key: if we want to analyse luck objectively, the objects under study must be events that are strongly correlated but not attributable to a series of correlated causes.

Generally speaking, a linear relationship between a cause and its effect is a simple relation in which the effect can be easily derived from the cause or input; inputs and outputs are linked by a straightforward rule. Not many things in life are linear, since we are surrounded by non-linear interactions that make outputs hard to anticipate from specific causes. Let us say that in a non-linear world the simple cause-effect relation does not exist. To a great extent this is what happens in our complex world: our linear brain keeps trying to analyse a non-linear world. Only in some specific circumstances does our non-linear world become linear, letting us use simple rules to anticipate the effects that follow a cause. A linear brain in a non-linear world is a subject I will address in another post; I only outline these ideas here to support the thread of reasoning.
Therefore, the final definition of luck would be: “a chain of events clearly correlated in either a positive or negative way, and not attributable to being the outcome of a series of positive or negative causes respectively”.
WHERE DOES SCIENCE STAND ON LUCK?
Let’s be clear: science does not believe in luck or in any intelligence ruling our fortune in the background. Science studies a chain of correlated events probabilistically and will always end up stating that “the probability of such a chain happening is not zero, hence the correlation in the chain is a coincidence”. This is the key point: a chain of events (unless one or more of them violates a deterministic physical law, such as the law of gravity) will never have zero probability.
Even a very long chain of strongly correlated events keeps a probability different from zero. I find this fact quite tricky: while common sense would find such a phenomenon certainly suspicious, probability theory finds nothing strange in it and no law violated.
In my opinion, the probabilistic approach is a sort of blank cheque condoning sometimes suspicious facts. A glass just broken apart could, in theory, be pieced together spontaneously by a correlated chain of thermal movements. The average waiting time to observe that would be millions of years, but even if we witnessed it tomorrow, the process would still have non-zero probability.
A REASONABLE DOUBT
Let me play the role of a defence lawyer, introducing some arguments to cast reasonable doubt on luck’s guilt.
The risks of applying the probabilistic approach to a process that we do not understand
Generally speaking, it is assumed that a series of events with non-zero probability is not impossible. That makes sense; the underlying problem arises when the probabilistic approach is applied to a non-random (more technically, non-stochastic) process. The probabilistic approach should be applied only to processes with an underlying random dynamics; otherwise it may lead us to totally false conclusions.
Let me use as an example a well-known process governed by the so-called “logistic equation”. The logistic equation is studied in chaos theory as an equation producing a deterministic process that looks random. A deterministic process is one perfectly determined by previous conditions: given the present state, the future evolution is completely pre-determined. The logistic equation is x(t+1) = r*x(t)*(1-x(t)), and it describes a process in which, given a known value of the variable x at time t, we know the next state x(t+1) at time t+1. Here x is a variable ranging within the interval [0, 1] and r is a fixed parameter taken from the range [1, 4]. The interesting thing is that for certain values of the parameter r the chain of values looks genuinely random, yet it is completely deterministic. This finding was a great breakthrough in the field of dynamical systems (1).
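The deterministic iteration just described can be sketched in a few lines of Python (the starting value 0.2 and the chain length are illustrative choices, not values taken from the figures):

```python
def logistic_step(x, r=4.0):
    """One step of the logistic equation x(t+1) = r * x(t) * (1 - x(t))."""
    return r * x * (1 - x)

def logistic_series(x0, n, r=4.0):
    """Generate n successive values; each one is fully determined by the previous."""
    series = [x0]
    for _ in range(n - 1):
        series.append(logistic_step(series[-1], r))
    return series

# With r = 4 the chain looks random, yet re-running it with the same x0
# reproduces exactly the same values: the process is deterministic.
chain = logistic_series(0.2, 50)
```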
The important point here is how the probabilistic approach can lead us to false conclusions when applied to non-random underlying phenomena (even if they look random). The next figure shows a chain of values produced by the logistic equation with r = 4.
Now, if an observer considered that such a chain of events was the outcome of a random process and decided to use the probabilistic approach, he or she would build the probability distribution of the variable x by studying a long enough chain, obtaining the following probability distribution (more precisely, a “frequency distribution”).
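A minimal sketch of how such an observer might estimate that frequency distribution from a long chain (the bin count, chain length and starting value below are arbitrary choices for illustration):

```python
from collections import Counter

def logistic_series(x0, n, r=4.0):
    """Deterministic chain from the logistic equation x(t+1) = r*x(t)*(1-x(t))."""
    series = [x0]
    for _ in range(n - 1):
        series.append(r * series[-1] * (1 - series[-1]))
    return series

def frequency_distribution(values, bins=100):
    """Relative frequency of values falling in each equal-width bin of [0, 1]."""
    counts = Counter(min(int(v * bins), bins - 1) for v in values)
    n = len(values)
    return {b / bins: counts[b] / n for b in sorted(counts)}

# The frequencies sum to 1 and pile up near the edges of [0, 1],
# giving the U-shaped distribution the observer would obtain.
freq = frequency_distribution(logistic_series(0.2, 10_000))
```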
Now, if that observer took the chain of events from figure 3 and tried to predict the next value using the probability distribution in figure 4, he or she would say that every number in the range [0, 1] has a certain probability of showing up, the values x = 0.01 and x = 1 being the most probable, with probability 0.06. The probability distribution would provide the probabilities associated with all the possible values of x. However, the last value in the chain in figure 3 is 0.977658, and according to the logistic equation the next value is 0.08737, with probability equal to 1. That is to say, any value different from 0.08737 is impossible, yet the probabilistic approach assigns some probability of occurrence to the whole range [0, 1].
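The contrast can be made concrete: taking the last value of the chain, the logistic equation fixes the next value completely, while the frequency distribution would spread probability over the whole range.

```python
def logistic_step(x, r=4.0):
    """x(t+1) = r * x(t) * (1 - x(t))."""
    return r * x * (1 - x)

last_value = 0.977658                  # last value of the chain in figure 3
next_value = logistic_step(last_value)

# Deterministically, next_value is about 0.08737 with probability 1; any
# other value is impossible, even though the frequency distribution
# assigns non-zero probability across all of [0, 1].
```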
The moral of this simple example is that applying the probabilistic approach to processes whose underlying principles we do not know can lead us to assign a probability of occurrence to events that are actually impossible.
When the events break the common sense but not the probability theory
Let us now imagine that the process in figure 3 is a genuinely random process and that the probability distribution in figure 4 is the true description of the event occurrence. Next, imagine that we obtain the value 0.47 sixteen times in a row: 0.47, 0.47, 0.47, ..., 0.47. The event 0.47 has an associated probability of 0.0017. The probability of a sequence of events (when the events are considered independent of one another) is the product of the probabilities of each event in the sequence. In this case, the probability of the chain is P = 0.0017^16 ≈ 0.00000000000000000000000000000000000000000000486612, that is, about 4.87 × 10^-45.
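As a quick check of that arithmetic (under the independence assumption stated above):

```python
p_single = 0.0017       # probability of observing 0.47 once (from figure 4)
n_repeats = 16          # the value 0.47 repeats sixteen times in the sequence

# Independent events: the probabilities simply multiply.
p_chain = p_single ** n_repeats

# p_chain is about 4.866e-45: astronomically small, but still not zero.
```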
Here we have a problem: while common sense tells us that something strange is happening, probability theory indicates that the chain has non-zero probability, and since no law is broken, we have a coincidence, not a meaningful occurrence. From the highly correlated chain of events, common sense tells us that the process stopped being random and that a different dynamics took over. On the other hand, if we stick to the probabilistic approach, we will conclude that we have nothing more than a coincidence.
Which approach should we adopt? My stand would be to use the probabilistic approach only if we were certain that the nature of the process is random and that the process is isolated, meaning there is no chance that any external condition may affect the evolution of the random process. In all cases where I could not ensure these conditions, I would think of strongly correlated chains as patterns generated by non-purely-random dynamics. In my opinion it is better to admit that we are not certain of the underlying nature of the process than to embrace an incorrect approach only because it makes us feel that we have an interpretation. In other words, it is better to admit our ignorance than to embrace an erroneous idea just to keep uncertainty at bay. As I mentioned in the post “Reality is a complex object (I)”, human beings do not feel comfortable with uncertainty and tend to lie to themselves.