Michael, your article has much good content, but I think it is missing significant considerations, which makes it potentially dangerous. The line from your article that best characterizes this is: “By then, who knows how AI will be integrated”.
But you don’t seem to take this statement seriously, because you go on to tell us it is going to go bad and here are some of the ways. Fairly convincing, except for the total history of the earth and the total history of automation.
Since the Industrial Revolution, predictions have abounded, often accompanied by violent protests, that people are being or will be done out of jobs, and that the net result will be massive unemployment. Yet this has never happened. Not in the early days of the Industrial Revolution, and not in the current revolution based in information technology. These predicted disasters have never materialized, and the claim that “this time is different” is highly unlikely to hold. Is the automation at an unsurpassed scale? Probably. But maybe not in relative terms. Moreover, the rate of adaptation and of distributed knowledge (and distributed personal access to that technology) is also increasing more rapidly. But even if it isn’t, that doesn’t prove that this time will be different.
Emergent evolution is the earth’s history of adaptation in all areas of life (and maybe even before life). So far, none of the “certain” long-term disastrous conclusions has occurred. The argument that some are in process doesn’t constitute knowledge that they are happening. The best (or worst) we can say is still “we shall see”. Does this mean we shouldn’t try to prevent or inhibit dangerous conditions? No. Just that we shouldn’t base those efforts on what might turn out to be “what we know that ain’t so”.
I suggest studying emergent evolution, from the early 1900s if not before, and complex adaptive systems as the “mechanism” by which this emergent evolution actually contributes to life’s adaptive essence.
If I’m still allowed, I’ll publish more on this, both on Medium and on my forthcoming Patreon account.