AI runs on rationality. Yet, we are the children of serendipity.
AI runs on rationality. At least that’s how it feels.
- Computers will soon be better than radiologists at identifying tumours because the task is very rational: a tumour looks like this, is there one in this picture?
- Computers are great at landing a plane — potentially better than human pilots — because again, the desired outcome is very well defined.
DeepMind’s incredible performance at beating old-school Atari games is a poster-child example: the outcome of each game is simple and precise (get the highest score). AI excels at this.
But what does it mean for us when everything is driven by rationality? Is this always the best approach? A couple of counterexamples come to mind.
Fleming did not set out to discover penicillin.
A sloppy bit of lab work, followed by a random holiday, led to it. In fact, this has happened time and again in science. No rationale here. Quite the contrary: it’s anti-rational to be sloppy in a lab.
Think about biological evolution.
In simple terms:
- a mutation occurs in an individual’s genome
- it gives this individual an “edge” over non-mutated ones
- as they reproduce and transmit their genes, the “new” population grows, potentially to the detriment of the original one.
Nature did not set out to get fish to walk on land when mutations started leading them that way. It just… happened. It probably did not make any “rational” sense at the time, but then rationality did not mean anything in this context.
Clearly, serendipity and the orthogonal outcomes it produces play their role for the better.
What does it mean for AI?
AI researchers understand this, of course. In fact, evolutionary techniques have been around for a while in the field. The principle is fascinating since it suggests that by creating random mutations in datasets/algorithms, we might stumble on “better” results. We used this technique extensively at Timista back in the day (genetic algorithms can produce good results for the travelling salesman problem).
But here is the thing: in typical machine-learning fashion, we assume we “know” what “better” looks like, i.e. we score against predefined criteria. So yes, we allow “mutations”, but we “discard” them if they don’t yield better results within the existing framework of judgment. And as such, that’s not real serendipity.
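That selection loop can be sketched in a few lines of Python. Below is a toy genetic algorithm for the travelling salesman problem (the city coordinates, population size and mutation scheme are made up for illustration, not Timista’s actual code). Notice how the fitness function is exactly the “predefined framework of judgment”: mutations are random, but any mutant that scores worse is simply discarded.

```python
import random

def tour_length(tour, cities):
    """Total length of a closed tour through the given cities."""
    return sum(
        ((cities[tour[i]][0] - cities[tour[(i + 1) % len(tour)]][0]) ** 2
         + (cities[tour[i]][1] - cities[tour[(i + 1) % len(tour)]][1]) ** 2) ** 0.5
        for i in range(len(tour))
    )

def mutate(tour):
    """Random 'mutation': swap two cities in the tour."""
    i, j = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

def evolve(cities, population_size=50, generations=200, seed=0):
    """Evolve tours by mutation + selection against a fixed fitness criterion."""
    random.seed(seed)
    n = len(cities)
    population = [random.sample(range(n), n) for _ in range(population_size)]
    for _ in range(generations):
        # Score every tour against the predefined criterion (tour length)...
        population.sort(key=lambda t: tour_length(t, cities))
        # ...keep the best half, and discard mutants that didn't measure up.
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return min(population, key=lambda t: tour_length(t, cities))

# Hypothetical city coordinates, purely for illustration.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 8)]
best = evolve(cities)
print(sorted(best))  # a valid tour visits every city exactly once
```

The point is in the selection step: “better” is baked in as tour length before the algorithm even runs, so the process can only ever converge toward what we already decided to want.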
Think of a trivial example: let’s say tomorrow our lives are driven by machines, and as such are entirely “rational” (or, said differently, “optimised”). It means you’ll never stop at the local coffee shop on your way to your next appointment (surely planning will be perfect and won’t allow an hour of idle time, will it?), and you will never meet the person who will tell you about xyz and become your better half or business partner.
And these random encounters are not just “fun”, the “spice of life”. They can lead to unexpected and positive outcomes (a new venture, new research…): things you would never have foreseen, and never have optimised for.
I feel we are what we are today as human beings largely thanks to this notion of “serendipity”. With AI having a growing impact on our lives, how do we make sure we don’t annihilate it and its knock-on effects?