Mind crash
The fear voiced by thoughtful people today about the dominant role intelligent systems are bound to play in our civilisation is often misread, yet it is in every way a challenge to our existence: specifically, a challenge to the quality of our existence rather than to the question of whether we exist at all.
The biggest asset we as humans have ever had is our mind: our ability to think, to dream, to analyse, even to do the unimaginable, and, more importantly, the free will to decide what is best for us based on the choices before us, and even to shortlist those choices. It is the single advantage that led us to dominate the earth, in every aspect, over every other species. With evolution and learning we have grown better, at an incredibly slow rate, but we have progressed nonetheless. With every new generation we have become smarter, quicker to adapt and better than our predecessors, with some outliers. The present is the exception: we now see behaviour patterns and other indicators never before measured at such a high rate, pointing us towards an idea of progress that is not entirely in our best interest.
When analysed, the newest growth patterns have done the opposite of sharpening our minds. We see a marked and growing preference for outsourcing decision-making tasks, something never seen before. Instead of outsourcing only labour-intensive tasks, which has been the norm for progress until now, we have started outsourcing decisions as well. It began with systems learning our behaviour patterns and correlating our actions with the many factors affecting us, to help us make informed decisions, but it will eventually end in automated decision making, where we no longer have to decide at all; the decision is made for us by intelligent systems that know our existence better than we, or anything else, ever could. With data capture and analysis, and with intelligent systems evolving at superhuman rates, soon every human on earth will be wrapped in a layer of intelligence aware of every aspect of their life, able to predict what they are thinking and want to do before they realise the want themselves, leaving the person simply to experience rather than to decide.
Given the choice, probably every generation before us would have preferred such a way of living, though the outliers were never bound to be merely marginal; they would have had a significant presence. Looking at present indicators, however, those outliers are losing their significance by the hour, with more and more people willing to hand control over various aspects of their lives to intelligent systems under the pretext of reduced risk, higher efficiency, a preference for comfort, inadequate intelligence, and so on. These arguments will always seem to hold ground, because all of them miss a pertinent impact of outsourcing decision making: its effect on our thinking capacity, on our mind.
Like any instrument or device, our mind is only as good as our ability to use it, and with the rise of automation and the evolution of intelligent systems, the number of tasks our mind performs will keep falling; even the mundane ones will soon disappear. This process of freeing up one's mind ought to excite us, but the situation is entirely new and potentially dangerous, with devolutionary consequences, even if we are moving towards it at a relatively slow pace, with learning at each step reducing the probability of a crash. There is a possibility that, with such freed mental resources or complete independence, we could move on to better things, but that argument is flawed in itself: we will not have reached that point by improving our mind but by diminishing it one step at a time, and by the time we depend completely on intelligent systems for decision making, it is likely we will no longer be fit to make decisions at all.
The impact of automation on our mind, with its potentially devolutionary consequences, would unbalance any optimisation equation that favours intelligent systems over humans, yet this very variable is missing from every argument made in support of adopting artificial systems; the all-out preference for intelligent systems ignores it. There is no way to predict how our minds will react, or how we in turn will behave, in a society in which there are no decisions to make and no objective to achieve, and with the mind unfit by then for decision making, even if we do survive, it is unlikely to amount to much.
